CN117135471A - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN117135471A
Authority
CN
China
Prior art keywords
color space
exposure frame
lab color
lab
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310222752.1A
Other languages
Chinese (zh)
Inventor
黄庭刚
冯天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310222752.1A
Publication of CN117135471A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N23/16Optical arrangements associated therewith, e.g. for beam-splitting or for colour correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals

Abstract

An image processing method and an electronic device are provided, applied to the field of image processing technology. The method comprises the following steps: obtaining multi-exposure frames that at least comprise a long exposure frame, a medium exposure frame and a short exposure frame; converting the short exposure frame and the medium exposure frame into the Lab color space respectively, and fusing the Lab color space corresponding to the short exposure frame with the Lab color space corresponding to the medium exposure frame to obtain a first Lab color space; performing color correction on a second Lab color space by using the first Lab color space to obtain a third Lab color space, wherein the second Lab color space is obtained by color space conversion of an RGB image to be corrected; and finally converting the third Lab color space back to the RGB domain and outputting the corrected RGB image. This improves the color restoration of the photo or video, recovers the original colors of the real scene, improves the visual effect of the image, and improves the user's shooting experience.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of image processing technology, and in particular, to an image processing method and an electronic device.
Background
With the continuing development of intelligent terminals, the shooting function has become an essential function of intelligent terminals. Users' demands for taking photos and/or videos with intelligent terminals, and their expectations for the experience, are also constantly increasing. At present, in scenes with a high dynamic range, the color restoration of photos or videos shot by users is poor: the look of the real scene is difficult to reproduce accurately, the colors appear distorted to the user, and the shooting experience is seriously affected.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an electronic device, a computer readable storage medium and a computer program product, which are helpful for restoring original colors in a real scene, improving visual effects of images, and improving shooting experience of users.
In a first aspect, there is provided an image processing method including:
acquiring a multi-exposure frame, wherein the multi-exposure frame at least comprises a first exposure frame, a second exposure frame and a third exposure frame, wherein the exposure time length of the first exposure frame is longer than the exposure time length of the second exposure frame, and the exposure time length of the second exposure frame is longer than the exposure time length of the third exposure frame; for example, the first exposure frame is a long exposure frame, the second exposure frame is a medium exposure frame, and the third exposure frame is a short exposure frame;
Determining a first Lab color space based on the third exposure frame and the second exposure frame;
generating an RGB image to be corrected based on the multi-exposure frame;
performing color space conversion processing on the RGB image to be corrected to obtain a second Lab color space;
correcting the second Lab color space by using the first Lab color space to obtain a third Lab color space;
and performing RGB conversion processing on the third Lab color space, and outputting a corrected RGB image.
The above method may be performed by a terminal device or a chip in the terminal device. In this scheme, the short exposure frame is used for bright areas and the medium exposure frame is used for dark areas. Because the short exposure frame contains more original color information, i.e., is closer to the real color, the second Lab color space can be well corrected based on the obtained first Lab color space, so that the colors of the real scene are recovered as much as possible. This improves the color restoration of the photo or video, helps restore the original colors of the real scene, improves the visual effect of the image, and improves the user's shooting experience.
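For illustration only, a minimal Python sketch of this overall flow is shown below; the callables passed in (raw_to_lab, fuse_exposures, build_hdr_rgb, rgb_to_lab, correct, lab_to_rgb) are hypothetical placeholders for the conversion, fusion and correction steps detailed later, not interfaces defined by this application.

```python
def correct_hdr_colors(long_raw, mid_raw, short_raw,
                       raw_to_lab, fuse_exposures, build_hdr_rgb,
                       rgb_to_lab, correct, lab_to_rgb,
                       mid_params, short_params):
    # Lab color spaces of the mid (N) and short (S) exposure frames,
    # each converted with its own color parameters.
    lab_n = raw_to_lab(mid_raw, mid_params)
    lab_s = raw_to_lab(short_raw, short_params)

    # First Lab color space: fusion of the two (bright areas taken
    # mainly from S, dark areas mainly from N).
    lab_merged = fuse_exposures(lab_n, lab_s)

    # RGB image to be corrected: multi-frame fusion plus tone mapping.
    rgb_to_correct = build_hdr_rgb(long_raw, mid_raw, short_raw)

    # Second Lab color space: the image to be corrected, converted
    # with the mid-exposure color parameters.
    lab_hdr = rgb_to_lab(rgb_to_correct, mid_params)

    # Third Lab color space: corrected result, then back to RGB.
    lab_fusion = correct(lab_merged, lab_hdr)
    return lab_to_rgb(lab_fusion)
```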
In one possible implementation, determining the first Lab color space based on the third exposure frame and the second exposure frame includes:
Performing color space conversion on the second exposure frame based on a first color parameter to obtain a Lab color space corresponding to the second exposure frame, wherein the first color parameter is a color parameter corresponding to the second exposure frame;
performing color space conversion on the third exposure frame based on a second color parameter to obtain a Lab color space corresponding to the third exposure frame, wherein the second color parameter is a color parameter corresponding to the third exposure frame;
and fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame to obtain the first Lab color space.
Illustratively, by color space conversion, the Lab color space of the short exposure frame and the Lab color space of the medium exposure frame can be obtained, so that a first Lab color space for subsequent color correction can be generated, and subsequent correction of the second Lab color space is realized, thereby recovering the color of the real scene as much as possible.
In one possible implementation manner, the fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame includes:
based on the first fusion weight, fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame;
The first fusion weight is determined based on a first brightness component, and the first brightness component is a brightness component of a Lab color space corresponding to the second exposure frame.
Illustratively, the first Lab color space satisfies the following formula:
Lab_merged=Lab_N*W1+Lab_S*(1-W1);
wherein Lab_merged represents the first Lab color space, W1 represents the first fusion weight, Lab_N represents the Lab color space corresponding to the medium exposure frame, and Lab_S represents the Lab color space corresponding to the short exposure frame.
The reason for correcting the second Lab color space with the first Lab color space is as follows. The first Lab color space is obtained by fusing the Lab color space corresponding to the medium exposure frame with the Lab color space corresponding to the short exposure frame, and the essence of this fusion is that the short exposure frame is used for bright areas and the medium exposure frame is used for dark areas. Because the short exposure frame contains more original color information, i.e., is closer to the real color, the second Lab color space can be well corrected based on the obtained first Lab color space, so that the colors of the real scene are recovered as much as possible.
In one possible implementation manner, the correcting the second Lab color space with the first Lab color space to obtain a third Lab color space includes:
Correcting the second Lab color space by using the first Lab color space based on the second fusion weight to obtain a third Lab color space; wherein the second fusion weight is determined based on a spatial distance of the first Lab color space and the second Lab color space.
The method for determining the second fusion weight in the embodiment of the application is not particularly limited.
In one possible implementation, the second fusion weight satisfies the following formula:
W2=-K*d+b;
wherein W2 represents the second fusion weight; K and b are constant coefficients; d represents the spatial distance between the first Lab color space and the second Lab color space: d=sqrt((Labmerged(a)-Lab_HDR(a))^2+(Labmerged(b)-Lab_HDR(b))^2),
wherein Labmerged(a) represents the a component of the first Lab color space, Labmerged(b) represents the b component of the first Lab color space, Lab_HDR(a) represents the a component of the second Lab color space, and Lab_HDR(b) represents the b component of the second Lab color space.
For example, the larger d is, the larger W2 is; the closer the spatial distance between the first Lab color space and the second Lab color space, i.e., the smaller the value of d, the smaller the value of W2. That is, the larger the spatial distance between the two, the more the second Lab color space needs to be corrected using the first Lab color space, and thus the larger the value of W2.
In one possible implementation, correcting the second Lab color space with the first Lab color space further comprises:
performing elimination processing on the ghost areas when the first Lab color space is utilized to correct the second Lab color space;
the ghost area is determined based on multi-frame fusion images, and the multi-frame fusion images are obtained by carrying out multi-frame fusion processing on the multi-exposure frames.
Because special processing such as hole filling is performed on the ghost area when multi-frame HDR fusion is performed, color recovery is not required on the ghost area, namely, when the first Lab color space is utilized to correct the second Lab color space, pixel points corresponding to the ghost area can be eliminated or rejected.
In one possible implementation, the third Lab color space satisfies the following formula:
Lab_Fusion=W2*Labmerged*(1-ghostmask)+(1-W2)*Lab_HDR*ghostmask
wherein Lab_Fusion represents the third Lab color space, W2 represents the second fusion weight, Labmerged represents the first Lab color space, Lab_HDR represents the second Lab color space, and ghostmask represents the ghost area.
In one possible implementation, performing color space conversion processing on the RGB image to be corrected to obtain a second Lab color space, including:
And converting the RGB image to be corrected from a RAW domain to a Lab domain based on a first color parameter to obtain the second Lab color space, wherein the first color parameter is a color parameter corresponding to the second exposure frame.
The embodiment of the application is mainly intended to correct the colors of bright areas so that the bright areas recover their original colors. Therefore, when the RGB image to be corrected is converted from the RAW domain to the Lab domain, the color parameters corresponding to the second exposure frame (the mid-exposure frame) may be used. Using the color parameters corresponding to the mid-exposure frame ensures that the color deviation of the dark areas from the mid-exposure frame remains small.
In one possible implementation, generating an RGB image to be corrected based on the multi-exposure frame includes:
performing multi-frame fusion processing on the first exposure frame, the second exposure frame and the third exposure frame to obtain a multi-frame fusion image;
and performing tone mapping processing on the multi-frame fusion image to obtain the RGB image to be corrected.
Therefore, before performing color correction, a multi-frame fusion processing algorithm and a tone mapping module are combined to obtain an RGB image to be corrected.
In one possible implementation, before acquiring the multi-exposure frame, the method further includes:
And receiving a first operation of a user, wherein the first operation is a photographing operation or a video recording operation.
That is, the image processing method according to the embodiment of the present application is suitable for a photographed scene or a scene in which a video is recorded.
In a second aspect, an electronic device is provided comprising means for performing the method of any of the implementations of the first aspect. The electronic device may be a terminal or a chip in the terminal. The electronic device includes an input unit, a display unit, and a processing unit.
When the electronic device is a terminal, the processing unit may be a processor, the input unit may be a communication interface, and the display unit may be a graphic processing module and a screen; the terminal may further comprise a memory for storing computer program code which, when executed by the processor, causes the terminal to perform any of the methods of the first aspect.
When the electronic device is a chip in a terminal, the processing unit may be a logic processing unit inside the chip, the input unit may be an input interface, a pin, a circuit, or the like, and the display unit may be a graphics processing unit inside the chip; the chip may also include memory, which may be memory within the chip (e.g., registers, caches, etc.), or memory external to the chip (e.g., read-only memory, random access memory, etc.); the memory is for storing computer program code which, when executed by the processor, causes the chip to perform any of the methods of the first aspect.
In a third aspect, there is provided a computer readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
In a fourth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the methods of the first aspect.
Drawings
FIG. 1 is an exemplary diagram of an application scenario of an embodiment of the present application;
FIG. 2 is a schematic flow chart of an image processing method of an embodiment of the application;
FIG. 3 is another schematic flow chart of an image processing method of an embodiment of the present application;
FIG. 4 is a diagram of the relationship between W1 and L according to an embodiment of the present application;
FIG. 5 is an algorithmic block diagram of an image processing method of an embodiment of the present application;
FIG. 6 is an exemplary diagram of a software architecture to which embodiments of the application are applied;
fig. 7 is a schematic structural view of an electronic device suitable for use in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
In the embodiments of the present application, unless otherwise indicated, the meaning of "plurality" may be two or more.
The embodiment of the application is applicable to electronic devices, which may be mobile phones, smart screens, tablet computers, wearable electronic devices, vehicle-mounted electronic devices, augmented reality (AR) devices, virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, personal digital assistants (PDAs), projectors, and the like.
The electronic equipment of the embodiment of the application is provided with an image collector (such as a camera).
The embodiment of the application does not limit the specific type of the electronic equipment. The image processing method according to the embodiment of the present application is described below by taking an example in which the electronic device is a mobile phone.
The following is an example of a scenario in connection with fig. 1. Fig. 1 is an exemplary diagram of an application scenario of an embodiment of the present application.
In the mobile phone interface shown in fig. 1 (1), the interface may display a plurality of application programs: application 1, application 2, …, application 7, and camera application. The user clicks the camera application and the mobile phone starts the camera. After the camera is operated, the mobile phone interface displays an interface as shown in (2) in fig. 1. This interface may be referred to as a shooting interface of the camera. The shooting interface may include a viewfinder 11, a shooting mode field (such as portrait, night view, shooting, video recording, and more), a shooting control, a camera rotation control, and the like. The viewfinder 11 is used to acquire an image of a shot preview, and can display the preview image in real time. The shooting mode field is used for a user to switch different shooting modes. The photographing control is used for realizing a photographing function. The camera rotation control may be used to switch cameras.
As an example, as shown in fig. 1 (2), when the user clicks the video recording mode, the interface shown in fig. 1 (3) is displayed, which includes a recording control 12. When the user clicks the recording control 12, the mobile phone begins recording video, and the interface is displayed, for example, as shown in fig. 1 (4).
As shown in fig. 1 (4), the recording screen of the 5 th second is displayed on the interface. The interface comprises a manual photographing control 13, a video stopping control 14 and a video pausing control 15 in video. When a user clicks the manual photographing control 13, manual snapshot of photos in the video recording process can be realized; when the user clicks the record stop control 14, the recording of video may be stopped; when the user clicks the recording pause control 15, the current recording may be paused.
It should be understood that the photographing scene in fig. 1 is only a schematic illustration of an application scene of the present application, and this is not a limitation of the embodiment of the present application. In fact, the embodiment of the application can also be applied to other scenes photographed or recorded by using a camera.
It should also be understood that (2) to (4) in fig. 1 are schematic views of an interface of a user capturing images under a vertical screen of a mobile phone, but the present application is not limited thereto. For example, a user can shoot under a mobile phone horizontal screen.
In some embodiments, the mobile phone may employ a camera with a four-color filter array (Quadra Color Filter Array, Quadra CFA) sensor. Key components of the camera include an optical lens and an image sensor. After the camera is started, the sensor outputs image frames based on the acquired image signals. In the embodiment of the application, the sensor's output mode depends on the zoom magnification and the ambient illuminance of the shooting scene.
It will be appreciated that the embodiments of the present application are not particularly limited as to how the camera is activated. For example, as shown in fig. 1 (1), the user clicks on a camera application to take a video.
For ease of understanding, before describing the image processing method of the embodiment of the present application, some terms related to the embodiment of the present application will be explained first.
1. Lab is a color space whose full name is CIELAB; it is sometimes also written as CIE Lab. In the Lab color space, each color is represented by three values L, a and b, with the following meanings:
the L component represents lightness (brightness) and takes values from 0 to 100;
the a component represents the green-to-red axis and takes values from -128 to 127;
the b component represents the blue-to-yellow axis and takes values from -128 to 127.
2. Color parameters refer to the parameters used to characterize the color of an image. In an embodiment of the present application, the color parameters include, but are not limited to, one or more of the following: automatic white balance (AWB), color correction matrix (CCM), black level correction (BLC) parameters, and Gamma parameters. The Gamma parameter can be understood as a logarithm-like curve that brightens dark areas so that dark-area details better match human perception.
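For illustration, the per-frame color parameters listed above could be grouped as in the following Python sketch; the structure, field names and example values are assumptions made for the later sketches, not part of this application or of any ISP interface.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ColorParams:
    """Illustrative container for one frame's color parameters."""
    blc: float            # black level to subtract (same scale as the Raw data)
    awb_gains: tuple      # (r_gain, b_gain) white-balance gains
    ccm: np.ndarray       # 3x3 color correction matrix
    gamma: float          # gamma exponent, e.g. 2.2

# Example values chosen purely for illustration (e.g. for the N frame).
params_n = ColorParams(blc=0.05, awb_gains=(1.8, 1.5), ccm=np.eye(3), gamma=2.2)
```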
3. The process of converting the Raw domain to the Lab domain refers to: the Raw domain is converted to the Lab domain using color parameters. For example, the Raw domain to Lab domain conversion may include the following processes:
1) Convert from the Raw domain to the sRGB domain using blc, awb, ccm and gamma:
raw_blc=Raw-blc;
raw_awb=raw_blc*awb(rgain,bgain);
raw_ccm=raw_awb*ccm;
raw_gamma=raw_ccm^(1/gamma);
wherein Raw represents pixel values of an image frame;
2) Perform an inverse Gamma transformation with the standard Gamma 2.4, converting from the sRGB domain to the linear RGB domain:
raw_degamma=raw_gamma^2.4
3) Performing linear transformation by using the pixel value of raw_degamma, and converting into an XYZ space:
X=0.412453*R+0.357580*G+0.180423*B
Y=0.212671*R+0.715160*G+0.072169*B
Z=0.019334*R+0.119193*G+0.950227*B
the RGB value in the above formula is the pixel value of raw_degamma;
4) Taking the standard white point of the D65 light source as a reference, and normalizing the XYZ space values obtained in the step 3):
x=X/Xref_white
y=Y/Yref_white
z=Z/Zref_white;
for example, (Xref_white, Yref_white, Zref_white)=(0.95047, 1.0, 1.08883).
5) Perform nonlinear transformation processing on the x, y and z values obtained in step 4), i.e., substitute each of them for t in the following function:
f(t)=t^(1/3), if t>(6/29)^3; otherwise f(t)=(1/3)*(29/6)^2*t+4/29;
6) Converting the value obtained in 5) to Lab domain:
L=116·y-16;
a=500·(x-y);
b=200·(y-z);
it should be understood that the above steps 1) to 6) illustrate an example of converting an image frame from the Raw domain to the Lab domain, and the embodiment of the present application is not limited thereto.
It should be noted that, in the embodiment of the present application, when the mid-exposure frame is converted to the Lab color space or the short-exposure frame is converted to the Lab color space, or when the RGB image to be corrected is converted from the Raw domain to the Lab domain, the conversion of the color space may be completed by referring to the above steps.
It should be understood that in the embodiment of the present application, when the mid-exposure frame is converted to the Lab color space, the color parameter corresponding to the mid-exposure frame is utilized; when the short exposure frame is converted into Lab color space, the color parameters corresponding to the short exposure frame are utilized; when converting the RGB image to be corrected from the Raw domain to the Lab domain, the color parameters corresponding to the mid-exposure frame are utilized.
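As an illustration, the following Python sketch walks through steps 1) to 6) for a single frame, assuming the Raw data has already been demosaiced to an HxWx3 array with values scaled to [0, 1]; the piecewise function used for step 5) is the standard CIELAB nonlinearity, standing in for the formula not reproduced above.

```python
import numpy as np

# D65 reference white from step 4) and the linear-RGB -> XYZ matrix from step 3).
_D65 = np.array([0.95047, 1.0, 1.08883])
_RGB2XYZ = np.array([[0.412453, 0.357580, 0.180423],
                     [0.212671, 0.715160, 0.072169],
                     [0.019334, 0.119193, 0.950227]])

def _f(t):
    """Standard CIELAB nonlinearity assumed for step 5)."""
    delta = 6.0 / 29.0
    return np.where(t > delta ** 3, np.cbrt(t), t / (3.0 * delta ** 2) + 4.0 / 29.0)

def raw_to_lab(raw, blc, awb_gains, ccm, gamma):
    """Sketch of steps 1)-6) using one frame's color parameters."""
    # 1) black level, white balance, color correction, then 1/gamma encoding
    rgb = np.clip(raw - blc, 0.0, None)
    rgb = rgb * np.array([awb_gains[0], 1.0, awb_gains[1]])   # rgain on R, bgain on B
    rgb = np.clip(rgb @ ccm.T, 0.0, 1.0)
    rgb = rgb ** (1.0 / gamma)
    # 2) inverse Gamma 2.4 -> linear RGB
    rgb_lin = rgb ** 2.4
    # 3) linear RGB -> XYZ
    xyz = rgb_lin @ _RGB2XYZ.T
    # 4) normalize by the D65 white point
    xyz = xyz / _D65
    # 5) nonlinear transformation
    fx, fy, fz = _f(xyz[..., 0]), _f(xyz[..., 1]), _f(xyz[..., 2])
    # 6) Lab components
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```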
4. The Lab domain to RGB domain conversion process can be understood as the inverse of the Raw-domain-to-Lab-domain conversion process described in 3. above. For example, it may comprise the following steps a to b:
step a, conversion from Lab domain to XYZ, may specifically include the following transformation process:
(1) Linear transformation: y=(L+16)/116; x=a/500+y; z=y-b/200;
Wherein L represents a luminance component, a represents a component from green to red, and b represents a component from blue to yellow;
(2) Nonlinear transformation: substitute each of the x, y and z values obtained in (1) for t in f^(-1)(t)=t^3, if t>6/29; otherwise f^(-1)(t)=3*(6/29)^2*(t-4/29);
(3) Inverse normalization:
X=x*Xref_white
Y=y*Yref_white
Z=z*Zref_white;
wherein "*" denotes multiplication.
Step b, converting from XYZ to sRGB domain, may specifically comprise the following transformation procedure:
(1) Linear transformation: [R, G, B]^T = M^(-1) * [X, Y, Z]^T,
wherein M is the matrix used in step 3) of the Raw-domain-to-Lab-domain conversion above to convert from the RGB space to the XYZ space;
(2) Gamma transformation: substitute each of the R, G and B values obtained in (1) for t in t^(1/2.4), i.e., the inverse of the Gamma 2.4 transformation in step 2) above;
(3) Inverse normalization:
r=R*255
g=G*255
b=B*255
it will be appreciated that the above-described process of inverse normalization is described by way of example only with respect to a common 8-bit (bit) wide image (corresponding to a normalized maximum of 255), and embodiments of the present application are not limited thereto. In fact, the normalized maximum value can be determined according to the true bit width of the image in the practical application process.
The conversion from the Lab domain to the RGB domain can be achieved by steps a and b described above.
It should be understood that the above steps a to b illustrate examples of converting an image frame from the Lab domain to the RGB domain, and embodiments of the present application are not limited thereto.
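As an illustration, a Python sketch of steps a and b is shown below; the inverse CIELAB nonlinearity and the plain 1/2.4 power used for the Gamma transformation are standard stand-ins for the formulas not reproduced above, and an 8-bit output image is assumed.

```python
import numpy as np

_D65 = np.array([0.95047, 1.0, 1.08883])
# Inverse of the linear-RGB -> XYZ matrix used in the forward conversion.
_XYZ2RGB = np.linalg.inv(np.array([[0.412453, 0.357580, 0.180423],
                                   [0.212671, 0.715160, 0.072169],
                                   [0.019334, 0.119193, 0.950227]]))

def _f_inv(t):
    """Standard inverse CIELAB nonlinearity assumed for step a(2)."""
    delta = 6.0 / 29.0
    return np.where(t > delta, t ** 3, 3.0 * delta ** 2 * (t - 4.0 / 29.0))

def lab_to_rgb8(lab):
    """Sketch of steps a-b: Lab back to an 8-bit RGB image."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    # a(1) linear transformation
    fy = (L + 16.0) / 116.0
    fx = a / 500.0 + fy
    fz = fy - b / 200.0
    # a(2)-(3) inverse nonlinearity, then de-normalization by the white point
    xyz = np.stack([_f_inv(fx), _f_inv(fy), _f_inv(fz)], axis=-1) * _D65
    # b(1) XYZ -> linear RGB
    rgb_lin = np.clip(xyz @ _XYZ2RGB.T, 0.0, 1.0)
    # b(2) Gamma transformation (1/2.4 power as a stand-in for the sRGB curve)
    rgb = rgb_lin ** (1.0 / 2.4)
    # b(3) scale to the image's bit width (8 bits assumed, maximum 255)
    return np.clip(np.round(rgb * 255.0), 0, 255).astype(np.uint8)
```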
At present, when taking photos or recording video in a high-dynamic-range (HDR) scene, the resulting image may suffer from color distortion. For example, Raw data in highlight regions is more likely to show color distortion after tone mapping processing. In view of this, embodiments of the present application provide an image processing method and an electronic device that determine a fused color space using a short exposure frame and a medium exposure frame, and perform color correction on an image to be corrected (i.e., an image with color distortion) based on the fused color space to obtain a corrected image, which facilitates color recovery.
Fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
step 210, acquiring a multi-exposure frame, wherein the multi-exposure frame at least comprises a first exposure frame, a second exposure frame and a third exposure frame, and the exposure time length of the first exposure frame is longer than the exposure time length of the second exposure frame, and the exposure time length of the second exposure frame is longer than the exposure time length of the third exposure frame.
The embodiment of the application does not limit the specific trigger conditions for acquiring the multi-exposure frames. For example, the mobile phone receives a photographing operation from the user and, based on the photographing operation, triggers the camera sensor to output multi-exposure frames. As another example, the mobile phone receives a video recording operation from the user and, based on the video recording operation, triggers the sensor to output multi-exposure frames.
Alternatively, the above multi-exposure frames may be obtained by the camera's image sensor capturing RAW (raw image format) data.
Illustratively, the multi-exposure frames include a long exposure frame (corresponding to a first exposure frame), a medium exposure frame (corresponding to a second exposure frame), and a short exposure frame (corresponding to a third exposure frame).
Generally, to obtain a high dynamic range (high dynamic range, HDR) image, multiple exposure frames with different exposure durations may be obtained for the same photographic subject, and the multiple exposure frames may be fused to obtain the HDR image. For example, the multi-exposure frames of different exposure durations include a long-exposure frame, a medium-exposure frame, and a short-exposure frame. The corresponding exposure time of the long exposure frame is t1, the corresponding exposure time of the medium exposure frame is t2, and the corresponding exposure time of the short exposure frame is t3, wherein t1> t2> t3.
Illustratively, the long exposure frame may be denoted as an L frame, the medium (normal) exposure frame as an N frame, and the short exposure frame as an S frame. The medium exposure frame may also be referred to as the normal-exposure reference frame.
Alternatively, the multi-exposure frames may also include an ultra-long (very long) exposure frame and an ultra-short (very short) exposure frame. The exposure duration of the ultra-long exposure frame is longer than that of the long exposure frame, and the exposure duration of the ultra-short exposure frame is shorter than that of the short exposure frame.
It should be noted that, for easy understanding or description, the embodiments of the present application are described by taking the multi-exposure frame including a long exposure frame, a medium exposure frame, and a short exposure frame as an example. In fact, the technical solution of the embodiment of the present application can be extended to the case where the multi-exposure frame includes more exposure frames. For example, the method can be extended to the case that the multi-exposure frame comprises a long exposure frame, a medium exposure frame, a short exposure frame and an ultra-short exposure frame, the case that the multi-exposure frame comprises an ultra-long exposure frame, a medium exposure frame and a short exposure frame, and the case that the multi-exposure frame comprises an ultra-long exposure frame, a medium exposure frame, a short exposure frame and an ultra-short exposure frame.
Step 220, determining a first Lab color space based on the short exposure frame (corresponding to the third exposure frame) and the medium exposure frame (corresponding to the second exposure frame).
In the embodiment of the application, the short exposure frame and the medium exposure frame can be respectively converted into Lab color spaces to respectively obtain the Lab color space corresponding to the medium exposure frame and the Lab color space corresponding to the short exposure frame, and then the first Lab color space is generated according to the Lab color space corresponding to the medium exposure frame and the Lab color space corresponding to the short exposure frame.
The first Lab color space is used for carrying out color correction or color recovery on the Lab color space corresponding to the RGB image to be corrected.
The process of generating the first Lab color space is described below in connection with FIG. 3.
Alternatively, as one embodiment, a specific implementation of step 220 may refer to the steps shown in FIG. 3. As shown in fig. 3, determining the first Lab color space includes the steps of:
step 220-1, performing color space conversion on the middle exposure frame based on a first color parameter, to obtain a Lab color space corresponding to the middle exposure frame, where the first color parameter is a color parameter corresponding to the middle exposure frame.
And 220-2, performing color space conversion on the short exposure frame based on a second color parameter to obtain a Lab color space corresponding to the short exposure frame, wherein the second color parameter is the color parameter corresponding to the short exposure frame.
And 220-3, fusing the Lab color space corresponding to the middle exposure frame and the Lab color space corresponding to the short exposure frame to obtain the first Lab color space.
Optionally, the first color parameter comprises one or more of the following parameters: awb_n, gamma_n, ccm_n, blc_n. The first color parameter refers to the color parameters used by the sensor when outputting the medium exposure frame.
Optionally, the second color parameter comprises one or more of the following parameters: awb_s, gamma_s, ccm_s, blc_s. The second color parameter refers to the color parameters used by the sensor when outputting the short exposure frame.
Illustratively, the medium exposure frame and the short exposure frame are converted into Lab color spaces using their respective color parameters, and fusion processing is performed on the resulting Lab color spaces to obtain the first Lab color space. The first Lab color space may also be referred to as the fused (merged) Lab color space.
It will be appreciated that the color parameters corresponding to each image frame may be obtained automatically from a graphics processor or an image signal processor (ISP). For example, for each image frame, the ISP may output the corresponding color parameters. Optionally, as an embodiment, step 220-3 includes:
Based on the first fusion weight, fusing the Lab color space corresponding to the middle exposure frame and the Lab color space corresponding to the short exposure frame;
the first fusion weight is determined based on a first brightness component, and the first brightness component is the brightness component of the Lab color space corresponding to the medium exposure frame, i.e., the L value of each pixel after the medium exposure frame is converted into the Lab color space.
In one possible implementation, the value of the first fusion weight depends on the value of the luminance component L of the Lab color space corresponding to the mid-exposure frame. Fig. 4 shows a graph of the first fusion weight versus L. As shown in fig. 4, the abscissa is L, and the ordinate represents W1. If the value of L is larger, W1 is smaller; if L is smaller, W1 is larger.
Illustratively, the first Lab color space satisfies the following formula:
Lab_merged=Lab_N*W1+Lab_S*(1-W1);
wherein Lab_merged represents the first Lab color space, W1 represents the first fusion weight, Lab_N represents the Lab color space corresponding to the medium exposure frame, and Lab_S represents the Lab color space corresponding to the short exposure frame.
It should be appreciated that the above formula for the first LAB color space is merely an example, and embodiments of the present application are not limited thereto.
It should also be understood that the foregoing description takes as an example the case where the value of the first fusion weight depends on the value of the luminance component L of the Lab color space corresponding to the medium exposure frame, and the embodiment of the present application is not limited thereto. For example, the first fusion weight may alternatively be a weight in the Raw domain, e.g., the weight value of the medium exposure frame in the Raw domain, that is, the weight assigned to the medium exposure frame when performing multi-frame HDR fusion of the medium exposure frame and the short exposure frame.
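As an illustration, the following Python sketch derives W1 from the L component of the mid-exposure Lab space with a simple decreasing linear ramp and then applies Lab_merged = Lab_N*W1 + Lab_S*(1-W1); the ramp and its breakpoints are illustrative choices, not values from this application.

```python
import numpy as np

def fuse_lab(lab_n, lab_s, l_low=50.0, l_high=90.0):
    """Sketch of the first Lab color space: Lab_N*W1 + Lab_S*(1-W1)."""
    # W1 is large where L is small (dark areas keep the N frame) and
    # small where L is large (bright areas take the S frame).
    L = lab_n[..., 0]
    w1 = np.clip((l_high - L) / (l_high - l_low), 0.0, 1.0)
    w1 = w1[..., np.newaxis]            # broadcast over the L, a, b channels
    return lab_n * w1 + lab_s * (1.0 - w1)
```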
It should be noted that the essence of performing color correction with the first Lab color space is that the original colors of the short exposure frame can be used for bright areas, so that the bright areas of the corrected image recover their original colors.
Step 230, generating an RGB image to be corrected based on the multi-exposure frames.
It should be understood that the time sequence of the step 230 and the foregoing steps 210 and 220 is not limited in the embodiment of the present application. Illustratively, step 230 may be performed concurrently with the previous steps (including step 210 and step 220).
The embodiment of the application does not limit the specific way of generating the RGB image to be corrected based on the multi-exposure frames. In particular, it may depend on which image frames the obtained multi-exposure frames comprise.
Illustratively, when the multi-exposure frames include a long exposure frame, a medium exposure frame, and a short exposure frame, the RGB image to be corrected is generated based on the long exposure frame, the medium exposure frame, and the short exposure frame.
Illustratively, when the multi-exposure frames include a long exposure frame, a medium exposure frame, a short exposure frame, and an ultra-short exposure frame, the RGB image to be corrected is generated based on the long exposure frame, the medium exposure frame, the short exposure frame, and the ultra-short exposure frame.
It should be appreciated that the specific fusion algorithm for generating the RGB image to be corrected based on the multi-exposure frame is not particularly limited in the embodiments of the present application.
Optionally, as an embodiment, step 230 includes:
carrying out multi-frame fusion processing on the long exposure frame, the medium exposure frame and the short exposure frame to obtain multi-frame fusion images;
and performing tone mapping processing on the multi-frame fusion image to obtain the RGB image to be corrected. The RGB image to be corrected is the image obtained after tone mapping processing.
Illustratively, the HDR multi-exposure frame fusion can be achieved by the following equation:
hdr(i,j)=N(i,j)*weight(N(i,j))+S(i,j)*expo_ratio*(1-weight(N(i,j)))
wherein hdr(i,j) represents the multi-frame fused image, and weight is a fusion weight set according to the brightness of the medium exposure frame: the lower the brightness, the greater the weight; the higher the brightness, the smaller the weight (dark areas are fused mainly from the N frame and bright areas mainly from the S frame). expo_ratio is the exposure ratio of the medium exposure frame to the short exposure frame;
For example, expo_ratio=(ExpoTime_n*ISO_n)/(ExpoTime_s*ISO_s);
where ExpoTime_n represents the exposure time of the medium exposure frame, ExpoTime_s represents the exposure time of the short exposure frame, and ISO_n and ISO_s represent their respective sensitivities.
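As an illustration, the following Python sketch implements the fusion formula above on normalized Raw values; the linear ramp used for weight(N) and its breakpoints are illustrative choices, not values from this application.

```python
import numpy as np

def fuse_hdr(raw_n, raw_s, expo_time_n, iso_n, expo_time_s, iso_s,
             low=0.2, high=0.8):
    """Sketch of hdr(i,j) = N*weight(N) + S*expo_ratio*(1 - weight(N))."""
    expo_ratio = (expo_time_n * iso_n) / (expo_time_s * iso_s)
    # Larger weight where the mid-exposure frame is dark, smaller where it is bright.
    weight = np.clip((high - raw_n) / (high - low), 0.0, 1.0)
    return raw_n * weight + raw_s * expo_ratio * (1.0 - weight)
```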
The embodiment of the present application is not limited to a specific manner of tone mapping processing. Illustratively, the tone mapping process may be implemented by a tone mapping module (tone mapping) in the ISP. The tone mapping module is used for calculating the average brightness of the scene according to the current scene, selecting a proper brightness domain according to the average brightness, and mapping the whole scene to the brightness domain to obtain a correct result. The tone mapping module includes a global tone mapping (global tone mapping) and a local tone mapping (local tone mapping).
It should be noted that the tone mapping module is configured to perform a tone mapping algorithm. During actual shooting, the tone mapping algorithm is tunable. For example, the mobile phone may use tone mapping algorithm 1 for the current shot and tone mapping algorithm 2 for the next shot; the image processing method of the embodiment of the application is applicable regardless of how the tone mapping algorithm changes, that is, the first Lab color space can perform color correction on the Lab color space (i.e., the second Lab color space) corresponding to the RGB image obtained after tone mapping processing.
Optionally, the embodiment of the application can also perform other image processing on the multi-frame fused image or the image after tone mapping processing, such as noise reduction processing and the like.
Illustratively, the multi-frame fused image may be subjected to dead-pixel removal, spatial-domain noise reduction, temporal-domain noise reduction and the like, so as to further improve image quality. For the specific manner of dead-pixel removal, spatial-domain noise reduction and temporal-domain noise reduction, reference may be made to existing image processing techniques, which are not described here.
Step 240, performing color space conversion processing on the RGB image to be corrected, to obtain a second Lab color space. The second Lab color space is used to characterize the color space when the RGB image to be corrected is converted to Lab color space.
The color space conversion process may be understood as converting an RGB image into the Lab color space or from the RGB domain into the Lab domain, and may refer to the process of converting the Raw domain into the Lab domain described in the foregoing 3. This is because, in order to facilitate color recovery under the Lab color space, the RGB image to be corrected needs to be first turned into the Lab color space as well. The embodiment of the application does not limit the specific mode of the color space conversion processing.
Optionally, as an embodiment, step 240 includes:
And converting the RGB image to be corrected from a RAW domain to a Lab domain based on a first color parameter to obtain the second Lab color space, wherein the first color parameter is a color parameter corresponding to the medium exposure frame.
Illustratively, the RGB image to be corrected needs to be converted from the RAW domain to the Lab domain using the color parameters corresponding to the mid-exposure frame, i.e., the first color parameters, in order to obtain a representation of the RGB image to be corrected in the Lab color space.
It should be noted that the embodiment of the application is mainly intended to correct the colors of bright areas so that the bright areas recover their original colors. Therefore, when the RGB image to be corrected is converted from the RAW domain to the Lab domain, the color parameters corresponding to the second exposure frame (the mid-exposure frame) may be used. Using the color parameters corresponding to the mid-exposure frame ensures that the color deviation of the dark areas from the mid-exposure frame remains small.
Optionally, the first color parameter includes awb _n, gamma_n, ccm_n, blc_n.
And step 250, correcting the second Lab color space by using the first Lab color space to obtain a third Lab color space. The third Lab color space is used for characterizing the Lab color space obtained after correction.
The reason for correcting the second Lab color space with the first Lab color space is as follows. The first Lab color space is obtained by fusing the Lab color space corresponding to the medium exposure frame with the Lab color space corresponding to the short exposure frame, and the essence of this fusion is that the short exposure frame is used for bright areas and the medium exposure frame is used for dark areas. Because the short exposure frame contains more original color information, i.e., is closer to the real color, the second Lab color space can be well corrected based on the obtained first Lab color space, so that the colors of the real scene are recovered as much as possible.
Optionally, as an embodiment, step 250 includes:
correcting the second Lab color space by using the first Lab color space based on the second fusion weight to obtain a third Lab color space; wherein the second fusion weight is determined based on a spatial distance of the first Lab color space and the second Lab color space.
In other words, the first Lab color space and the second Lab color space are fused to obtain the third Lab color space. The third Lab color space is the Lab color space obtained after color correction or color restoration.
Optionally, the second fusion weight satisfies the following formula:
W2=-K*d+b;
wherein W2 represents the second fusion weight; k and b are adjustable constant coefficients; d represents the spatial distance (or spatial Euclidean distance) of the first Lab color space and the second Lab color space,
wherein labmered (a) represents the a-component of the first Lab color space, labmered (b) represents the b-component of the first Lab color space, lab_hdr (a) represents the a-component of the second Lab color space, and lab_hdr (b) represents the b-component of the second Lab color space.
Illustratively, the farther apart the first Lab color space and the second Lab color space are, i.e., the larger the value of d, the larger the value of W2; the closer they are, i.e., the smaller the value of d, the smaller the value of W2. That is, the larger the spatial distance between the two, the more the second Lab color space needs to be corrected using the first Lab color space, and thus the larger the value of W2.
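As an illustration, the following Python sketch computes d as the Euclidean distance over the a and b components and then evaluates W2 = -K*d + b; K and b are left as inputs to be tuned so that W2 behaves as described above, and clipping W2 to [0, 1] is an added assumption.

```python
import numpy as np

def second_fusion_weight(lab_merged, lab_hdr, k, b):
    """Sketch of W2 = -K*d + b with d taken over the a/b components."""
    d = np.sqrt((lab_merged[..., 1] - lab_hdr[..., 1]) ** 2 +
                (lab_merged[..., 2] - lab_hdr[..., 2]) ** 2)
    return np.clip(-k * d + b, 0.0, 1.0)
```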
The embodiment of the application can perform color recovery regardless of whether ghosting is present. In some scenes there may be moving subjects; because the fusion is performed on consecutively exposed multi-frame images, fusing multi-exposure frames of a moving subject can introduce ghosting, so the ghost area needs to be taken into account when the image processing method of the embodiment of the application is used for color correction.
Optionally, as an embodiment, step 250 includes:
performing elimination processing on the ghost areas when the first Lab color space is utilized to correct the second Lab color space;
the ghost area is determined based on multi-frame fusion images, and the multi-frame fusion images are obtained by carrying out multi-frame fusion processing on the multi-exposure frames.
For example, the pixels of a ghost area may come from neither the medium exposure frame nor the short exposure frame, i.e., they are not original pixel points. Because special processing such as hole filling is already performed on the ghost area during multi-frame HDR fusion, no color recovery is required for the ghost area; that is, when the first Lab color space is used to correct the second Lab color space, the pixels corresponding to the ghost area can be excluded. Excluding the ghost area means that no color restoration processing is performed on the pixel values corresponding to the ghost area.
The embodiment of the application does not limit the specific mode how to obtain the ghost area based on the multi-frame fusion image. Illustratively, the mask of the ghost area can be obtained by selecting a reference frame and calculating the difference between the other frames and the reference frame pixel by pixel.
For example, the medium exposure frame may be selected as the reference frame. Suppose the pixel value at a certain position in the reference frame is 100; the theoretical pixel value of the short exposure frame at that position can then be calculated from the exposure ratio (which is known when the multi-exposure frames are output). Assuming the exposure ratio N/S is 2, the pixel value of the short exposure frame at the same position should theoretically be 50. If the actual pixel value of the short exposure frame differs too much from 50, i.e., the difference exceeds a certain tolerance interval (some tolerance is allowed in practical applications), the pixel can be considered to belong to the ghost area.
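As an illustration, the following Python sketch marks ghost pixels by comparing the short exposure frame against its theoretical value predicted from the reference (mid-exposure) frame and the exposure ratio; the relative tolerance is an illustrative choice.

```python
import numpy as np

def ghost_mask(raw_n, raw_s, expo_ratio, tolerance=0.1):
    """Sketch of ghost detection: 1.0 marks a ghost pixel, 0.0 a normal one."""
    expected_s = raw_n / expo_ratio          # theoretical short-frame value
    diff = np.abs(raw_s - expected_s)
    return (diff > tolerance * np.maximum(expected_s, 1e-6)).astype(np.float32)
```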
Illustratively, the third Lab color space satisfies the following formula:
Lab_Fusion=W2*Labmerged*(1-ghostmask)+(1-W2)*Lab_HDR*ghostmask
wherein Lab_Fusion represents the third Lab color space, W2 represents the second fusion weight, Labmerged represents the first Lab color space, Lab_HDR represents the second Lab color space, and ghostmask represents the ghost area.
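As an illustration, the following Python sketch applies the formula above per pixel exactly as written, with W2 and ghostmask given as HxW maps broadcast over the L, a and b channels.

```python
import numpy as np

def correct_lab(lab_merged, lab_hdr, w2, ghostmask):
    """Sketch of Lab_Fusion = W2*Labmerged*(1-ghostmask) + (1-W2)*Lab_HDR*ghostmask."""
    w2 = w2[..., np.newaxis]
    gm = ghostmask[..., np.newaxis]
    return w2 * lab_merged * (1.0 - gm) + (1.0 - w2) * lab_hdr * gm
```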
And 260, performing RGB conversion processing on the third Lab color space, and outputting a corrected RGB image.
After obtaining the corrected Lab color space, the corrected Lab color space may be converted back to the RGB domain to output a color corrected HDR RGB image.
The embodiment of the application is not limited to a specific way of converting from Lab color space to RGB domain. Reference may be made in particular to the process of converting the Lab domain into the RGB domain shown in the foregoing 4.
In the embodiment of the application, multi-exposure frames are obtained, which at least include a long exposure frame, a medium exposure frame and a short exposure frame; the short exposure frame and the medium exposure frame are then converted into the Lab color space respectively, and the Lab color space corresponding to the short exposure frame is fused with the Lab color space corresponding to the medium exposure frame to obtain a first Lab color space; next, color correction is performed on a second Lab color space by using the first Lab color space to obtain a third Lab color space, wherein the second Lab color space is obtained by color space conversion of the RGB image to be corrected; finally, the third Lab color space is converted back to the RGB domain and the corrected RGB image is output. The embodiment of the application can well correct the second Lab color space based on the obtained first Lab color space, thereby recovering the colors of the real scene as much as possible, improving the color restoration of the photo or video, helping to restore the original colors of the real scene, improving the visual effect of the image, and improving the user's shooting experience.
An image processing method according to an embodiment of the present application is described below with reference to fig. 5.
Fig. 5 shows an exemplary algorithm structure of an embodiment of the present application. As shown in fig. 5, assuming that a photographing instruction or a video recording instruction is received, the sensor outputs multi-exposure frames, denoted raw L (corresponding to the long exposure frame), raw N (corresponding to the medium exposure frame), and raw S (corresponding to the short exposure frame). The color parameters awb_s, gamma_s, ccm_s and blc_s corresponding to raw S are then obtained, and raw S is converted to Lab S using awb_s, gamma_s, ccm_s and blc_s; the color parameters awb_n, gamma_n, ccm_n and blc_n corresponding to raw N are obtained, and raw N is converted to Lab N using awb_n, gamma_n, ccm_n and blc_n. Lab S and Lab N are fused to obtain Lab merge (corresponding to the first Lab color space).
In addition, multi-frame fusion processing is performed on raw L, raw N and raw S to obtain HDR raw; tone mapping processing is then performed on the HDR raw, and the tone-mapped image is converted to the Lab domain using awb_n, gamma_n, ccm_n and blc_n to obtain Lab_HDR (corresponding to the second Lab color space). Next, Lab_HDR is corrected using the Lab merge obtained above, yielding the corrected Lab_HDR. Finally, Lab_HDR is converted back to the RGB domain, and the color-corrected HDR RGB image is output. The color-recovered HDR RGB image is thus obtained.
Optionally, after obtaining the HDR raw, a ghost area may also be generated. When Lab_HDR is corrected using the Lab merge obtained before, the ghost area can be excluded, i.e., no color recovery is performed on the ghost area.
It should be understood that the example in fig. 5 is merely an algorithm flow for facilitating understanding of the embodiments of the present application by those skilled in the art, and is not limiting of the embodiments of the present application.
It should be noted that, the image processing method in the embodiment of the present application is suitable for shooting scenes or recording video scenes. Such as HDR photograph scenes. Also, for example, HDR records video scenes.
It can be appreciated that, for a scene in which video is recorded, the process of performing color correction on an image frame may also be applied to the image processing method according to the embodiment of the present application. Stated another way, reference may be made to the image processing method of the embodiment of the present application for the process of color correction of the individual image frames that make up the video. For example, a recorded video may be generated based on the plurality of corrected RGB images.
The image processing method provided by the embodiment of the application is described in detail above with reference to fig. 1 to 5. An embodiment of the device of the present application will be described below with reference to fig. 6 and 7. It should be understood that the apparatus according to the embodiment of the present application may perform the image processing method according to the foregoing embodiment of the present application, that is, specific working procedures of various products below may refer to corresponding procedures in the foregoing method embodiment.
The following describes a software system and a hardware architecture applied by an embodiment of the present application with reference to fig. 6 and 7.
Fig. 6 is a schematic diagram of an architecture (including software systems and portions of hardware) in which embodiments of the application are employed. As shown in fig. 6, the application architecture is divided into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the application architecture may be divided into five layers, from top to bottom, an application layer, an application framework layer, a hardware abstraction layer HAL, a driver layer, and a hardware layer, respectively.
As shown in fig. 6, the application layer includes a camera and a gallery. It will be appreciated that some applications are shown in fig. 6, and that in fact, the application layer may include other applications, as the application is not limited in this respect. Such as applications including information, alarm clocks, weather, stopwatches, compasses, timers, flashlights, calendars, payment devices, etc.
As shown in fig. 6, the application framework layer includes a camera access interface. Camera management and camera devices are included in the camera access interface. The hardware abstraction layer includes a camera hardware abstraction layer and a camera algorithm library. Wherein the camera hardware abstraction layer comprises a plurality of camera devices. The camera algorithm library comprises a post-processing algorithm module and a color recovery module. The color recovery module is used for executing the image processing method of the embodiment of the application. The color recovery module is used for fusing the Lab color space corresponding to the short exposure frame and the Lab color space corresponding to the medium exposure frame to obtain a first Lab color space; then, performing color correction on a second Lab color space by using the first Lab color space to obtain a third Lab color space, wherein the second Lab color space is obtained by converting an RGB image to be corrected through the color space; finally, the third Lab color space is converted back to the RGB domain, and the corrected RGB image is output.
The driver layer is used to drive hardware resources. The driver layer may include a plurality of driver modules. As shown in fig. 6, the driver layer includes a camera device driver, a digital signal processor driver, a graphics processor driver, and the like.
The hardware layer includes sensors, an image signal processor, a digital signal processor, and a graphics processor. The sensors include a plurality of sensors, such as a TOF camera and a multispectral sensor. The image signal processor includes an ISP first module, an ISP second module, and an ISP third module.
For example, the user may click on the camera application. When the user clicks the camera to take a photograph, a photographing instruction may be issued to the camera hardware abstraction layer through the camera access interface. The camera hardware abstraction layer invokes the camera device driver and invokes the camera algorithm library. The camera hardware abstraction layer issues photographing parameters (such as scene parameters) to the camera device driver. The camera device driver sends the photographing parameters issued by the camera hardware abstraction layer to the hardware layer, for example, sends the sensor output mode to the sensor and sends the parameter configuration of each ISP module to the image signal processor. The sensor outputs image frames based on the sensor output mode. The image signal processor performs corresponding processing based on the parameter configuration of each ISP module. The camera algorithm library is also used to send digital signals to the digital signal processor driver in the driver layer, so that the digital signal processor driver calls the digital signal processor in the hardware layer to process the digital signals. The digital signal processor may return the processed digital signals to the camera algorithm library through the digital signal processor driver. The camera algorithm library is also used to send digital signals to the graphics processor driver in the driver layer, so that the graphics processor driver calls the graphics processor in the hardware layer to process the digital signals. The graphics processor may return the processed graphics data to the camera algorithm library through the graphics processor driver. The color recovery module in the camera algorithm library may obtain the color parameters corresponding to the image frames (such as the first color parameter and the second color parameter mentioned above) from the graphics processor driver, and perform the related correction processing based on the color parameters. The color recovery module sends the corrected image to the camera hardware abstraction layer. The camera hardware abstraction layer returns the corrected image to the camera application through the application framework layer.
The color recovery module is used to execute the image processing method of the embodiment of the application. The color recovery module is used to perform color space conversion on the medium exposure frame based on a first color parameter to obtain the Lab color space corresponding to the medium exposure frame, wherein the first color parameter is the color parameter corresponding to the medium exposure frame; perform color space conversion on the short exposure frame based on a second color parameter to obtain the Lab color space corresponding to the short exposure frame, wherein the second color parameter is the color parameter corresponding to the short exposure frame; and fuse the Lab color space corresponding to the medium exposure frame and the Lab color space corresponding to the short exposure frame to obtain the first Lab color space.
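As an illustrative aid only (not part of the patent disclosure), the following Python sketch shows one way the conversion and fusion just described could be organized. The RGB-to-Lab conversion uses a generic sRGB/D65 transform from scikit-image as a stand-in for the device-specific first and second color parameters, and the luminance-dependent fusion weight (the first fusion weight) is an assumed ramp on the medium-exposure L component with illustrative thresholds; both choices are assumptions, since the patent does not fix these details here.

```python
import numpy as np
from skimage import color  # generic sRGB/D65 Lab conversion, stand-in for device color parameters


def to_lab(rgb_frame: np.ndarray) -> np.ndarray:
    """Convert an RGB exposure frame (H x W x 3, values in [0, 1]) to Lab."""
    return color.rgb2lab(rgb_frame)


def first_fusion_weight(l_medium: np.ndarray) -> np.ndarray:
    """Assumed first fusion weight: give the short exposure frame more weight
    where the medium exposure luminance approaches saturation (L near 100)."""
    return np.clip((l_medium - 70.0) / 30.0, 0.0, 1.0)  # thresholds are illustrative


def fuse_lab(lab_medium: np.ndarray, lab_short: np.ndarray) -> np.ndarray:
    """Per-pixel blend of the two Lab frames, yielding the first Lab color space."""
    w = first_fusion_weight(lab_medium[..., 0])[..., None]
    return (1.0 - w) * lab_medium + w * lab_short
```

In this sketch the weight depends only on the luminance component of the medium exposure frame, consistent with the first fusion weight described in claim 3 below; the exact mapping from luminance to weight is left open by the patent.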
Illustratively, the color recovery module is configured to perform color space conversion processing on the RGB image to be corrected, so as to obtain a second Lab color space, and to correct the second Lab color space by using the first Lab color space to obtain a third Lab color space.
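A corresponding sketch of the correction step, following the formulas given in claims 5 and 7 below, might look as follows. The spatial distance d is assumed here to be the Euclidean distance over the a and b components of the two Lab color spaces, the constant coefficients K and b are placeholder values, and clipping the second fusion weight to [0, 1] is an added assumption.

```python
import numpy as np


def correct_lab(lab_merged: np.ndarray, lab_hdr: np.ndarray,
                ghost_mask: np.ndarray, K: float = 0.02, b: float = 1.0) -> np.ndarray:
    """Correct the second Lab color space (lab_hdr) with the first (lab_merged).

    lab_merged, lab_hdr: H x W x 3 Lab arrays; ghost_mask: H x W array in [0, 1];
    K and b are illustrative constant coefficients.
    """
    # Assumed spatial distance: Euclidean distance over the a/b components.
    d = np.sqrt((lab_merged[..., 1] - lab_hdr[..., 1]) ** 2
                + (lab_merged[..., 2] - lab_hdr[..., 2]) ** 2)
    # Second fusion weight W2 = -K*d + b (claim 5), clipped to [0, 1] here.
    w2 = np.clip(-K * d + b, 0.0, 1.0)[..., None]
    gm = ghost_mask[..., None]
    # Blend as in claim 7: Lab_Fusion = W2*Labmerged*(1-ghostmask) + (1-W2)*Lab_HDR*ghostmask
    return w2 * lab_merged * (1.0 - gm) + (1.0 - w2) * lab_hdr * gm
```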
In addition, the image output by the image signal processor may be transmitted to the camera device driver. The camera device driver may send the image output by the image signal processor to the camera hardware abstraction layer. The camera hardware abstraction layer can send the image into a post-processing algorithm module for further processing, and can also send the image into a camera access interface. The camera access interface may send the image returned by the camera hardware abstraction layer to the camera.
The software system to which the embodiments of the present application are applied is described in detail above. The hardware system of the electronic device 1000 is described below in conjunction with fig. 7.
Fig. 7 shows a schematic structural diagram of an electronic device 1000 suitable for use in the present application.
The electronic device 1000 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 7 does not constitute a specific limitation on the electronic device 1000. In other embodiments of the application, the electronic device 1000 may include more or fewer components than those shown in fig. 7, or the electronic device 1000 may include a combination of some of the components shown in fig. 7, or the electronic device 1000 may include sub-components of some of the components shown in fig. 7. For example, the proximity light sensor 180G shown in fig. 7 may be optional. The components shown in fig. 7 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
The controller can generate operation control signals according to instruction operation codes and timing signals, so as to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 is configured to obtain a multi-exposure frame, where the multi-exposure frame includes at least a first exposure frame, a second exposure frame, and a third exposure frame, where the first exposure frame has an exposure time longer than an exposure time of the second exposure frame, and the second exposure frame has an exposure time longer than an exposure time of the third exposure frame; determining a first Lab color space based on the third exposure frame and the second exposure frame; generating an RGB image to be corrected based on the multi-exposure frame; performing color space conversion processing on the RGB image to be corrected to obtain a second Lab color space; correcting the second Lab color space by using the first Lab color space to obtain a third Lab color space; and performing RGB conversion processing on the third Lab color space, and outputting a corrected RGB image.
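Purely for orientation, the processing steps just listed could be strung together as in the following sketch, reusing the helper functions from the earlier sketches. The multi-frame fusion and tone mapping stages are represented only by caller-supplied placeholder callables, since their internals are not specified at this point.

```python
import numpy as np
from skimage import color


def process_multi_exposure(long_frame, medium_frame, short_frame,
                           multi_frame_fusion, tone_mapping, ghost_mask):
    """Hypothetical end-to-end flow for the processor steps described above."""
    # Determine the first Lab color space from the medium and short exposure frames.
    lab_first = fuse_lab(to_lab(medium_frame), to_lab(short_frame))

    # Generate the RGB image to be corrected from the multi-exposure frames.
    rgb_to_correct = tone_mapping(multi_frame_fusion(long_frame, medium_frame, short_frame))

    # Convert it to obtain the second Lab color space.
    lab_second = color.rgb2lab(rgb_to_correct)

    # Correct the second Lab color space with the first to obtain the third.
    lab_third = correct_lab(lab_first, lab_second, ghost_mask)

    # Convert the third Lab color space back to RGB and output the corrected image.
    return np.clip(color.lab2rgb(lab_third), 0.0, 1.0)
```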
In some embodiments, the processor 110 may include one or more interfaces. For example, the processor 110 may include at least one of the following interfaces: inter-integrated circuit (inter-integrated circuit, I2C) interfaces, inter-integrated circuit audio (inter-integrated circuit sound, I2S) interfaces, pulse code modulation (pulse code modulation, PCM) interfaces, universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interfaces, mobile industry processor interfaces (mobile industry processor interface, MIPI), general-purpose input/output (GPIO) interfaces, SIM interfaces, and USB interfaces.
The connection relationships between the modules shown in fig. 7 are merely illustrative, and do not limit the connection relationships between the modules of the electronic device 1000. Alternatively, the modules of the electronic device 1000 may also employ a combination of the various connection manners in the foregoing embodiments.
The charge management module 140 is used to receive power from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive the current of the wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive electromagnetic waves (current path shown in dashed lines) through a wireless charging coil of the electronic device 1000. The charging management module 140 may also supply power to the electronic device 1000 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (e.g., leakage, impedance). Alternatively, the power management module 141 may be provided in the processor 110, or the power management module 141 and the charge management module 140 may be provided in the same device.
The wireless communication function of the electronic device 1000 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 1000 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 may be used to display images or video. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini light-emitting diode (Mini LED), a Micro light-emitting diode (Micro LED), a Micro OLED (Micro OLED), or a quantum dot LED (quantum dot light emitting diodes, QLED). In some embodiments, the electronic device 1000 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 1000 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye. The ISP can perform algorithm optimization on the noise, brightness, and color of the image, and can also optimize parameters such as the exposure and color temperature of the photographing scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transfers the electrical signal to the ISP, where it is converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 1000 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 1000 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 1000 may support one or more video codecs. In this way, the electronic device 1000 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a processor modeled on the structure of biological neural networks, for example the transmission mode between neurons in the human brain; it can rapidly process input information and can also continuously learn on its own. The NPU may implement functions such as intelligent cognition of the electronic device 1000, for example image recognition, face recognition, speech recognition, and text understanding.
The electronic device 1000 may implement audio functions such as music playing and recording through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, and application processor, etc.
The distance sensor 180F is used to measure a distance. The electronic device 1000 may measure distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 1000 may range using the distance sensor 180F to achieve quick focus.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 1000 may adaptively adjust the brightness of the display 194 based on perceived ambient light levels. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect if electronic device 1000 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 1000 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The touch sensor 180K is also referred to as a touch panel. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor 180K may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 1000 at a location different from that of the display screen 194.
The keys 190 include a power key and a volume key. The keys 190 may be mechanical keys or touch keys. The electronic device 1000 may receive a key input signal and implement a function associated with the key input signal.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
It can be understood that the image processing method according to the embodiment of the present application may be applied to the electronic device shown in fig. 7, and specific implementation steps may refer to the description of the foregoing method embodiment, which is not repeated herein.
The application also provides a computer program product which, when executed by a processor, implements the method of any of the method embodiments of the application.
The computer program product may be stored in a memory and eventually converted to an executable object file that can be executed by a processor through preprocessing, compiling, assembling, and linking.
The application also provides a computer readable storage medium having stored thereon a computer program which when executed by a computer implements the method according to any of the method embodiments of the application. The computer program may be a high-level language program or an executable object program.
The computer readable storage medium may be volatile memory or nonvolatile memory, or may include both volatile memory and nonvolatile memory. The nonvolatile memory may be a read-only memory (read-only memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein.
In the several embodiments provided by the present application, the disclosed systems, devices, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are merely illustrative, the division of units is merely a logical function division, and there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in the various embodiments of the present application, the size of the sequence numbers of the processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely one association relationship describing the associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The terms "first," "second," …, etc. appearing in embodiments of the present application are for descriptive purposes only and are merely for distinguishing between different objects, such as different "color parameters," etc., and are not to be construed as indicating or implying a relative importance or an implicit indication of the number of technical features indicated. Thus, features defining "first", "second", …, etc., may include one or more features, either explicitly or implicitly. In the description of embodiments of the application, "at least one (an item)" means one or more. The meaning of "plurality" is two or more. "at least one of (an) or the like" below means any combination of these items, including any combination of a single (an) or a plurality (an) of items.
For example, an expression appearing in the embodiments of the application such as "the item includes at least one of the following: A, B, and C" generally means, unless otherwise specified, that the item may be any one of the following: A; B; C; A and B; A and C; B and C; A, B, and C; A and A; A, A, and A; A, A, and B; A, A, and C; A, B, and B; A, C, and C; B and B; B and C; C and C; C, C, and C; and other combinations of A, B, and C. The above takes the 3 elements A, B, and C as an example of the optional entries for the item; when the expression is "the item includes at least one of the following: A, B, ..., and X", that is, when more elements are included in the expression, the entries to which the item is applicable may also be obtained according to the foregoing rules.
In summary, the foregoing description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring a multi-exposure frame, wherein the multi-exposure frame at least comprises a first exposure frame, a second exposure frame and a third exposure frame, wherein the exposure time length of the first exposure frame is longer than the exposure time length of the second exposure frame, and the exposure time length of the second exposure frame is longer than the exposure time length of the third exposure frame;
determining a first Lab color space based on the third exposure frame and the second exposure frame;
generating an RGB image to be corrected based on the multi-exposure frame;
performing color space conversion processing on the RGB image to be corrected to obtain a second Lab color space;
correcting the second Lab color space by using the first Lab color space to obtain a third Lab color space;
and performing RGB conversion processing on the third Lab color space, and outputting a corrected RGB image.
2. The method of claim 1, wherein the determining a first Lab color space based on the third exposure frame and the second exposure frame comprises:
performing color space conversion on the second exposure frame based on a first color parameter to obtain a Lab color space corresponding to the second exposure frame, wherein the first color parameter is a color parameter corresponding to the second exposure frame;
performing color space conversion on the third exposure frame based on a second color parameter to obtain a Lab color space corresponding to the third exposure frame, wherein the second color parameter is a color parameter corresponding to the third exposure frame;
and fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame to obtain the first Lab color space.
3. The method according to claim 1 or 2, wherein the fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame comprises:
based on the first fusion weight, fusing the Lab color space corresponding to the second exposure frame and the Lab color space corresponding to the third exposure frame;
the first fusion weight is determined based on a first brightness component, and the first brightness component is a brightness component of a Lab color space corresponding to the second exposure frame.
4. A method according to any one of claims 1 to 3, wherein said correcting said second Lab color space with said first Lab color space to obtain a third Lab color space comprises:
correcting the second Lab color space by using the first Lab color space based on the second fusion weight to obtain a third Lab color space; wherein the second fusion weight is determined based on a spatial distance of the first Lab color space and the second Lab color space.
5. The method of claim 4, wherein the second fusion weight satisfies the following equation:
W2=-K*d+b;
wherein W2 represents the second fusion weight; K and b are constant coefficients; d represents the spatial distance of the first Lab color space and the second Lab color space,
wherein Labmerged(a) represents the a-component of the first Lab color space, Labmerged(b) represents the b-component of the first Lab color space, Lab_HDR(a) represents the a-component of the second Lab color space, and Lab_HDR(b) represents the b-component of the second Lab color space.
6. The method of any one of claims 1 to 5, wherein said correcting said second Lab color space with said first Lab color space further comprises:
performing elimination processing on the ghost areas when the first Lab color space is utilized to correct the second Lab color space;
the ghost area is determined based on multi-frame fusion images, and the multi-frame fusion images are obtained by carrying out multi-frame fusion processing on the multi-exposure frames.
7. The method of claim 6, wherein the third Lab color space satisfies the following equation:
Lab_Fusion=W2*Labmerged*(1-ghostmask)+(1-W2)*Lab_HDR*ghostmask
wherein Lab_Fusion represents the third Lab color space, W2 represents the second fusion weight, Labmerged represents the first Lab color space, Lab_HDR represents the second Lab color space, and ghostmask represents the ghost region.
8. The method according to any one of claims 1 to 7, wherein the performing color space conversion processing on the RGB image to be corrected to obtain a second Lab color space includes:
and converting the RGB image to be corrected from a RAW domain to a Lab domain based on a first color parameter to obtain the second Lab color space, wherein the first color parameter is a color parameter corresponding to the second exposure frame.
9. The method according to any one of claims 1 to 8, wherein the generating an RGB image to be corrected based on the multi-exposure frame comprises:
performing multi-frame fusion processing on the first exposure frame, the second exposure frame and the third exposure frame to obtain multi-frame fusion images;
and performing tone mapping processing on the multi-frame fusion image to obtain the RGB image to be corrected.
10. The method according to any one of claims 1 to 9, wherein prior to acquiring the multi-exposure frame, the method further comprises:
receiving a first operation of a user, wherein the first operation is a photographing operation; alternatively, the first operation is a video recording operation.
11. An electronic device comprising a processor and a memory, the processor and the memory being coupled, the memory being for storing a computer program that, when executed by the processor, causes the electronic device to perform the method of any one of claims 1 to 10.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by a processor causes the processor to perform the method of any of claims 1 to 10.
CN202310222752.1A 2023-02-27 2023-02-27 Image processing method and electronic equipment Pending CN117135471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310222752.1A CN117135471A (en) 2023-02-27 2023-02-27 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310222752.1A CN117135471A (en) 2023-02-27 2023-02-27 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117135471A true CN117135471A (en) 2023-11-28

Family

ID=88849704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310222752.1A Pending CN117135471A (en) 2023-02-27 2023-02-27 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117135471A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140347521A1 (en) * 2013-05-24 2014-11-27 Google Inc. Simulating High Dynamic Range Imaging with Virtual Long-Exposure Images
CN106920221A (en) * 2017-03-10 2017-07-04 重庆邮电大学 Take into account the exposure fusion method that Luminance Distribution and details are presented
US20200106942A1 (en) * 2017-06-08 2020-04-02 Zhejiang Dahua Technology Co., Ltd. Methods and devices for processing images of a traffic light
CN112219391A (en) * 2018-06-07 2021-01-12 杜比实验室特许公司 Generating HDR images from single shot HDR color image sensors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAN XIANG: "From 0 to 1: Shooting the Starry Sky" (《从0到1拍星空》), vol. 1, 30 June 2020, China Machine Press, pages 284-288 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117440253A (en) * 2023-12-22 2024-01-23 荣耀终端有限公司 Image processing method and related device

Similar Documents

Publication Publication Date Title
CN112150399B (en) Image enhancement method based on wide dynamic range and electronic equipment
CN110198417A (en) Image processing method, device, storage medium and electronic equipment
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN114092364A (en) Image processing method and related device
CN116744120B (en) Image processing method and electronic device
CN115601274A (en) Image processing method and device and electronic equipment
CN116416122A (en) Image processing method and related device
CN116055890A (en) Method and electronic device for generating high dynamic range video
CN117135471A (en) Image processing method and electronic equipment
WO2022083325A1 (en) Photographic preview method, electronic device, and storage medium
CN113727085B (en) White balance processing method, electronic equipment, chip system and storage medium
CN116668862B (en) Image processing method and electronic equipment
EP4175275A1 (en) White balance processing method and electronic device
EP4195679A1 (en) Image processing method and electronic device
CN116437198B (en) Image processing method and electronic equipment
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN115631250B (en) Image processing method and electronic equipment
CN115767290A (en) Image processing method and electronic device
EP4344241A1 (en) Image processing method and electronic device
CN116723417B (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
CN115955611B (en) Image processing method and electronic equipment
WO2023160220A1 (en) Image processing method and electronic device
CN116048323B (en) Image processing method and electronic equipment
CN115767287B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination