CN117395495B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN117395495B
Authority
CN
China
Prior art keywords
image
images
user
mobile phone
electronic equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311679551.0A
Other languages
Chinese (zh)
Other versions
CN117395495A (en)
Inventor
高崇军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311679551.0A
Publication of CN117395495A
Application granted
Publication of CN117395495B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application relate to the technical field of image processing and provide an image processing method and an electronic device. In response to a user's editing operation on a first image, the electronic device acquires the HDR file corresponding to the first image and displays a parameter adjustment control for the HDR file. Upon receiving the user's adjustment operation on the parameter adjustment control, the electronic device determines the adjusted parameter corresponding to that operation. The electronic device may then display a second image, where the second image is generated from the HDR file based on the adjusted parameter. The application improves the degree to which shooting details are restored, and thus improves image quality.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
With the development of electronic devices such as mobile phones, their shooting capabilities have advanced rapidly, and more and more users prefer to capture images (such as photos or videos) with their phones.
However, an image shot by an electronic device cannot fully reproduce the real scene; in particular, it is difficult to reproduce the detail of regions whose brightness differs greatly. For example, a brighter area of the scene may be rendered as white because of overexposure, so the detail in that area cannot be presented. The degree to which scene detail is restored is therefore low, which ultimately reduces image quality.
Disclosure of Invention
Embodiments of the present application provide an image processing method and an electronic device that improve the degree to which shooting details are restored, and thus improve image quality.
To this end, the embodiments of the present application adopt the following technical solutions:
In a first aspect, an image processing method is provided. In response to a user's editing operation on a first image, an electronic device acquires the HDR file corresponding to the first image and displays a parameter adjustment control for the HDR file. Upon receiving the user's adjustment operation on the parameter adjustment control, the electronic device determines the adjusted parameter corresponding to that operation. The electronic device may then display a second image, where the second image is generated from the HDR file based on the adjusted parameter.
In this application, the second image displayed by the electronic device is generated from the HDR file corresponding to the first image using the adjusted parameter, rather than from a JPG image the device has already produced. This avoids losing a large amount of original detail information, restores scene detail to the greatest extent, and yields a more realistic visual effect. In addition, because the electronic device displays a parameter adjustment control for the HDR file, the user can adjust the first image as desired, so that the adjusted image matches the user's preference and can selectively reveal scene detail in bright or dark areas, improving the user's visual experience.
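The flow can be sketched as follows. This is a minimal illustration that assumes the adjustable "parameter" is a single scalar gain and the tone-mapping curve is a simple gain-plus-clip; `tone_map` and `render_images` are hypothetical names, not APIs from the patent.

```python
# Minimal sketch of the claimed flow, assuming the adjustable
# "parameter" is a scalar gain and the tone-mapping curve is a simple
# gain-plus-clip. All names here are illustrative, not from the patent.

def tone_map(hdr_pixels, gain):
    """Map linear HDR values to 8-bit display values."""
    return [min(255, max(0, round(v * gain))) for v in hdr_pixels]

def render_images(hdr_pixels, preset_gain, adjusted_gain):
    # First image: rendered from the HDR file with preset parameters.
    first = tone_map(hdr_pixels, preset_gain)
    # Second image: re-rendered from the SAME HDR file with the
    # user-adjusted parameter, not from the 8-bit first image, so
    # highlight detail clipped in `first` can still be recovered.
    second = tone_map(hdr_pixels, adjusted_gain)
    return first, second

first, second = render_images([10.0, 100.0, 300.0],
                              preset_gain=1.0, adjusted_gain=0.5)
# The 300.0 highlight clips to 255 in the first image but survives
# (as 150) in the second image rendered with the lower gain.
```

This is why re-rendering from the HDR file matters: once the first image is written out as 8-bit JPG, the clipped highlight value is gone for good.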
In a possible implementation of the first aspect, the first image is either an image shot after the electronic device turns on HDR, or an HDR-mode image downloaded by the electronic device.
In a possible implementation manner of the first aspect, the method further includes: the second image includes detail features not present in the first image and/or the first image includes detail features not present in the second image.
In this application, although both the first and second images are generated from the HDR file, the first image is rendered with preset parameters while the second image is rendered with the adjusted parameters, so the second image better matches the user's preference. The first image thus suits the visual expectations of most users, while the second image can be tailored to the target user; that is, the two images can selectively display different scene details, improving the restoration of scene detail from different angles and thereby the user's visual experience.
In a possible implementation manner of the first aspect, the file format of the first image and the file format of the second image are the same, and the file format of the first image is different from the file format of the HDR file.
In this application, the second image has the same file format as the first image: both are JPG images, and this format differs from that of the HDR file. That is, the second image is generated from the HDR file according to the parameters of the tone-mapping curve rather than from the JPG image produced by the phone, which avoids losing a large amount of original detail information, restores scene detail to the greatest extent, and yields a more realistic visual effect.
In a possible implementation of the first aspect, the process by which the electronic device generates the first image may specifically include: with the HDR function turned on, the electronic device acquires a plurality of images with different exposures in response to the user's shooting operation, and then generates the first image from these images and preset parameters.
In this application, the first image is obtained by shooting after the electronic device turns on HDR, which improves picture quality, restores real-scene detail that overexposure or underexposure would otherwise hide, and improves the user's visual experience.
In one possible implementation of the first aspect, the plurality of images with different exposures includes images at EV-8, EV-6, EV-4, EV-2, EV0, EV2, EV4, EV6, and EV8.
In this application, the more images the electronic device acquires, the more detail the HDR file can present; that is, the restoration of scene detail is greatly improved and the visual effect is more realistic.
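As a numeric aside (not stated in the patent text): each EV stop doubles the relative exposure, so the nine-frame EV-8 to EV8 bracket above covers an extremely wide range of scene radiance.

```python
# Each EV stop doubles relative exposure; the nine-frame bracket in
# the embodiment therefore spans a 2**16 : 1 exposure ratio between
# its brightest and darkest frames.
bracket = [-8, -6, -4, -2, 0, 2, 4, 6, 8]
relative_exposure = {ev: 2.0 ** ev for ev in bracket}

span = relative_exposure[8] / relative_exposure[-8]
# span == 65536.0, far beyond what a single 8-bit frame can record.
```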
In a possible implementation of the first aspect, generating the first image from the plurality of differently exposed images and the preset parameters includes: the electronic device fuses the differently exposed images with an image fusion model to obtain an HDR file, and then generates the first image from the HDR file according to the preset parameters.
In this application, the electronic device can feed the differently exposed images into the image fusion model to obtain the HDR file. This enlarges the range over which the first image's detail can be adjusted, laying the groundwork for the later parameter adjustment control, and allows the HDR file to present more detail, minimizing detail loss, improving the restoration of shooting detail, and making the subsequent adjustment of the first image more convenient.
In addition, while fusing the differently exposed images, the image fusion model can perform registration error correction on them, supplementing bright-area information for short-exposure frames and dark-area information for long-exposure frames. That is, registration error correction resolves the difficulty of registering short-exposure frames against long-exposure frames, laying the groundwork for later improving the restoration of scene detail.
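The fusion step can be illustrated with a classical weighted merge. The patent's fusion model is a learned model; the hand-written merge below is only a stand-in to show what "fusing differently exposed frames into an HDR file" means, and all names are assumptions.

```python
# A classical stand-in for the learned image-fusion model described
# above. Pixels are 8-bit values; `exposures` are relative exposure
# factors (EV0 == 1.0).

def fuse_exposures(images, exposures):
    """Merge bracketed frames into linear HDR radiance estimates,
    weighting well-exposed pixels most heavily."""
    fused = []
    for pixel_stack in zip(*images):
        num = den = 0.0
        for value, exposure in zip(pixel_stack, exposures):
            # Triangle weight: near-black and near-white pixels count less.
            weight = max(1.0 - abs(value - 127.5) / 127.5, 1e-3)
            num += weight * (value / exposure)  # back-project to radiance
            den += weight
        fused.append(num / den)
    return fused

# A short frame (EV-2, factor 0.25) and a normal frame of the same scene:
hdr = fuse_exposures([[16, 128], [128, 255]], [0.25, 1.0])
# hdr[1] exceeds 255: the fused file keeps highlight information that
# no single 8-bit frame could store.
```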
In a possible implementation of the first aspect, the process by which the electronic device obtains the HDR file may specifically include: the electronic device preprocesses each of the differently exposed images to obtain processed images, where preprocessing means performing brightness alignment and/or preliminary registration on an image, and then feeds the processed images into the image fusion model to obtain the HDR file.
In this application, brightness alignment brings the brightness values of the differently exposed images into agreement, so that corresponding pixels in the aligned images share the same brightness, while preliminary registration corrects frames whose framing shifted because of hand shake or the like, so that each registered frame matches the framing of the EV0 image. This provides a basis for the subsequent image fusion, improves the fusion accuracy of the electronic device, and thus improves the quality of the fused HDR file.
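A hedged sketch of the two pre-processing steps named above. Real registration operates on 2-D images; this 1-D version only illustrates the idea, and the function names are assumptions, not the patent's terminology.

```python
# Sketch of the two pre-processing steps: brightness alignment and
# preliminary registration. 1-D pixel rows stand in for full frames.

def align_brightness(frame, ev):
    """Scale a frame shot at exposure value `ev` back to EV0 brightness
    so all bracketed frames describe the same scene radiance."""
    return [px / (2.0 ** ev) for px in frame]

def preliminary_register(frame, shift):
    """Undo a known framing shift (e.g. from hand shake) so the frame
    lines up with the EV0 reference; modelled as a circular shift."""
    shift %= len(frame)
    return frame[shift:] + frame[:shift]

aligned = align_brightness([2.0, 4.0, 8.0], ev=2)     # EV+2 frame is 4x brighter
registered = preliminary_register([1, 2, 3, 4], shift=1)
```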
In a possible implementation manner of the first aspect, after the electronic device obtains the HDR file, the method further includes: the electronic device saves the HDR file.
In this application, the electronic device saves the HDR file once it is obtained so that the file can later be retrieved directly. This makes subsequent direct use of the HDR file convenient, improves the efficiency of image adjustment, saves adjustment time, and improves the user experience.
In a possible implementation of the first aspect, generating the first image from the plurality of differently exposed images and the preset parameters includes: the electronic device obtains the ambient brightness of the shooting environment, extracts from the differently exposed images at least two images matching that ambient brightness, fuses the at least two images with a first image fusion model to obtain an HDR file, and then generates the first image from the HDR file according to the preset parameters.
In this application, the electronic device can feed at least two of the images, rather than all of them, into the image fusion model to obtain the HDR file. Fusing only some of the images speeds up fusion, which in turn improves the efficiency of generating the second image and reduces its generation time.
In a possible implementation manner of the first aspect, the first image fusion model is a model applicable to all ambient brightness.
In this application, the first image fusion model may be a model applicable to all ambient brightnesses; that is, it can fuse images with higher or lower exposures, giving it strong generalization and broad applicability.
In a possible implementation of the first aspect, the first image fusion model includes a plurality of first sub-image fusion models, each applicable to a different ambient brightness.
In this application, the first image fusion model may comprise several first sub-image fusion models, each suited to a different ambient brightness; that is, different ambient brightnesses correspond to different first sub-image fusion models. The HDR file obtained by fusing images with the matching sub-model therefore retains more scene detail, improving the restoration of shooting detail and thus the robustness of the first image fusion model.
In a possible implementation of the first aspect, the process by which the electronic device extracts the at least two images may specifically include: the electronic device determines whether the brightness value of the ambient brightness exceeds a preset brightness value. If it does, the electronic device extracts images corresponding to at least two first exposures from the plurality of images; if it is less than or equal to the preset brightness value, the electronic device extracts images corresponding to at least two second exposures, where the maximum of the at least two second exposures is greater than the maximum of the at least two first exposures.
In this application, if the ambient brightness exceeds the preset brightness value, the current shooting environment is bright, so the electronic device extracts the images corresponding to the at least two first exposures, i.e., the exposures suited to a bright environment, neutralizing the excess brightness so that the first image generated later is normally exposed and the user's visual experience is improved. If the ambient brightness is less than or equal to the preset brightness value, the shooting environment is dark, so the electronic device extracts the images corresponding to the at least two second exposures, i.e., the exposures suited to a dark environment, neutralizing the darkness so that the first image generated later is likewise normally exposed and the user's visual experience is improved.
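The brightness-threshold selection can be sketched as below. The lux threshold and the concrete EV sets are illustrative placeholders; the patent does not specify values, only that the maximum second exposure exceeds the maximum first exposure.

```python
# Sketch of the brightness-based frame selection. Threshold and EV
# sets are assumed, not taken from the patent.

PRESET_BRIGHTNESS = 500.0          # assumed threshold, in lux
FIRST_EXPOSURES = [-4, -2, 0]      # darker frames, for bright scenes
SECOND_EXPOSURES = [0, 2, 4]       # brighter frames, for dark scenes

def select_exposures(ambient_lux):
    # Bright scene: shorter exposures neutralise it; dark scene: longer.
    if ambient_lux > PRESET_BRIGHTNESS:
        return FIRST_EXPOSURES
    return SECOND_EXPOSURES

# The claim's constraint holds for these placeholder sets:
# max(SECOND_EXPOSURES) > max(FIRST_EXPOSURES).
```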
In a possible implementation of the first aspect, the process by which the electronic device generates the HDR file may specifically include: in response to the user's editing operation on the first image, the electronic device displays a dynamic range adjustment control; in response to the user's selection on that control, it extracts from the differently exposed images at least two target images matching the adjusted dynamic range gear; and it then fuses the at least two target images with a second image fusion model to obtain the HDR file.
In this application, when generating the HDR file the electronic device fuses only at least two of the differently exposed images. Although this speeds up fusion, it limits the range over which the first image's detail can be adjusted, so the generated second image loses some detail. To reduce that loss, after receiving the user's editing operation on the first image the electronic device can display the dynamic range adjustment control and re-fuse the images according to the gear the user selects, reducing the probability of detail loss and improving the restoration of shooting detail.
In a possible implementation manner of the first aspect, the second image fusion model is a model adapted to all dynamic range adjustment gears.
In this application, the second image fusion model may be a model suited to all dynamic-range adjustment gears; that is, it can fuse the at least two target images at a higher gear as well as at a lower gear, giving it strong generalization and broad applicability.
In a possible implementation of the first aspect, the second image fusion model may include a plurality of second sub-image fusion models, and the process by which the electronic device obtains the HDR file with the second image fusion model may specifically include: the electronic device selects, according to the adjusted dynamic range gear, a target sub-image fusion model from the second image fusion model, the target sub-image fusion model being one of the plurality of second sub-image fusion models; it then fuses the at least two target images with the target sub-image fusion model to obtain the HDR file.
In this application, the second image fusion model may comprise several second sub-image fusion models, each suited to a different dynamic range gear; that is, different gears correspond to different second sub-image fusion models. The HDR file obtained by fusing images with the matching target sub-model therefore retains more scene detail, improving the restoration of shooting detail and thus the robustness of the second image fusion model.
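The gear-to-sub-model dispatch can be sketched as a simple lookup. The "models" below are trivial placeholder functions standing in for the patent's trained sub-networks; the gear names are assumptions.

```python
# Illustrative dispatch from the user's dynamic-range gear to a
# per-gear sub-model, mirroring the "several second sub-image fusion
# models" design described above.

def make_sub_model(gear):
    def fuse(images):
        # Placeholder fusion: average the frames and tag the gear used.
        return {"gear": gear,
                "pixels": [sum(stack) / len(images) for stack in zip(*images)]}
    return fuse

SECOND_SUB_MODELS = {gear: make_sub_model(gear)
                     for gear in ("low", "medium", "high")}

def fuse_with_gear(target_images, gear):
    # Call the target sub-image fusion model matching the adjusted gear.
    return SECOND_SUB_MODELS[gear](target_images)

result = fuse_with_gear([[10, 20], [30, 40]], "high")
```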
In a possible implementation manner of the first aspect, the method further includes: the electronic device displays the first image.
In this application, after obtaining the first image the electronic device may display only the first image, which keeps the interface uncluttered, improves the user's visual experience, and avoids unnecessary resource waste.
In a possible implementation manner of the first aspect, the method further includes: the electronic device displays parameter adjustment controls for the HDR file described above.
In this application, after obtaining the first image the electronic device may display only the parameter adjustment control for the HDR file, so that the user can operate it. This keeps the interface uncluttered and improves the user's visual experience, while also letting the user further adjust the first image so that the generated image matches the user's preference, improving the user experience.
In a possible implementation manner of the first aspect, the method further includes: the electronic device simultaneously displays the first image and parameter adjustment controls for the HDR file.
In this application, the electronic device can display the first image at the same time as the parameter adjustment control. The user can therefore browse the adjusted first image directly, with no further operations, simplifying the workflow, improving adjustment efficiency, and further improving the user experience.
In addition, while the user operates the parameter adjustment control, the electronic device can display the adjusted first image synchronously, so the user can browse the result in real time and easily judge whether the currently displayed image is the desired one. If it is not, the user can keep adjusting the tone-mapping curve until it is; if it is, the user can simply tap the save control. The user can thus adjust the first image to personal taste, so the target image generated after adjustment matches the user's style, improving its suitability and the user experience. Moreover, while adjusting the tone-mapping curve the user can browse more bright-area and dark-area detail and edit the image purposefully, improving adjustment efficiency.
In a possible implementation of the first aspect, the process by which the electronic device displays the second image may specifically include: the electronic device detects in real time whether the user operates the parameter adjustment control. If such an operation is detected, the electronic device generates the adjusted parameter from it and displays the second image; otherwise, it waits for the user to operate the control.
In this application, after displaying the parameter adjustment control the electronic device can detect in real time whether the user adjusts it. When an adjustment is detected, the device generates the adjusted parameter and displays the corresponding image, which is convenient for browsing and improves adjustment efficiency.
In a possible implementation of the first aspect, the process by which the electronic device displays the second image may specifically include: the electronic device checks at a preset interval whether the user has operated the parameter adjustment control. If such an operation is detected, the electronic device generates the adjusted parameter from it and displays the second image; otherwise, it waits for the user to operate the control.
In this application, after displaying the parameter adjustment control the electronic device checks at a preset interval whether the user has adjusted it, which reduces unnecessary resource waste and power consumption and improves the utilization of detection resources.
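The two detection strategies can be contrasted in a small sketch. The tick-based event model and all names below are assumptions: `interval=1` approximates real-time detection, while a larger interval models the preset-interval variant.

```python
# Sketch contrasting real-time detection with interval polling.
# `events` is a per-tick stream where None means "no user adjustment
# this tick"; only the latest pending adjustment matters, since each
# render uses the most recent parameter.

def detect_adjustments(events, interval):
    checks = 0
    handled = []
    pending = None
    for tick, event in enumerate(events):
        if event is not None:
            pending = event          # latest adjustment wins
        if tick % interval == 0:
            checks += 1              # one detection pass
            if pending is not None:
                handled.append(pending)
                pending = None
    return checks, handled

stream = [None, "gain=0.8", None, "gain=0.5", None, None]
realtime = detect_adjustments(stream, interval=1)  # every tick checked
polled = detect_adjustments(stream, interval=3)    # fewer checks, latest value only
```

With `interval=3`, the device performs a third of the checks and still applies the latest adjustment, which is the power-versus-latency trade-off the implementation describes.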
In a second aspect, the present application provides an electronic device comprising a display screen, a memory, and one or more processors; the display screen, the memory and the processor are coupled; the display screen is used for displaying images generated by the processor, and the memory is used for storing computer program codes, and the computer program codes comprise computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method as described above.
In a third aspect, the application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method as described above.
In a fourth aspect, the application provides a computer program product for, when run on an electronic device, causing the electronic device to perform the method as described above.
In a fifth aspect, a chip is provided, comprising an input interface, an output interface, a processor, and a memory connected through an internal path. The processor executes code in the memory, and when the code is executed the processor performs the method described above.
For the advantages achieved by the electronic device of the second aspect, the computer-readable storage medium of the third aspect, the computer program product of the fourth aspect, and the chip of the fifth aspect, reference may be made to the advantages of the first aspect and any of its possible designs; they are not repeated here.
Drawings
Fig. 1 is an interface schematic diagram of an image shot by a mobile phone in normal mode according to an embodiment of the present application;
Fig. 2 is an interface schematic diagram of a mobile phone turning on the HDR function according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the front camera and rear camera of a mobile phone according to an embodiment of the present application;
Fig. 5 is a flowchart of an image processing method according to an embodiment of the present application;
Fig. 6 is an interface schematic diagram of a mobile phone turning on the HDR function to capture an image according to an embodiment of the present application;
Fig. 7 is a schematic diagram of tone-mapping curves according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an interface for adjusting the dynamic range of a first image according to an embodiment of the present application;
Fig. 9 is an interface diagram of adjusting a tone-mapping curve downward according to an embodiment of the present application;
Fig. 10 is an interface diagram of adjusting a tone-mapping curve upward according to an embodiment of the present application;
Fig. 11 is an interface schematic diagram of another downward adjustment of a tone-mapping curve according to an embodiment of the present application;
Fig. 12 is an interface schematic diagram of another upward adjustment of a tone-mapping curve according to an embodiment of the present application;
Fig. 13 is a schematic diagram of an interface for generating a second image according to an embodiment of the present application;
Fig. 14 is a schematic diagram of the photo-shooting part of an image processing method in high-precision mode according to an embodiment of the present application;
Fig. 15 is a schematic diagram of the photo-editing part of an image processing method in high-precision mode according to an embodiment of the present application;
Fig. 16 is a flowchart of another image processing method according to an embodiment of the present application;
Fig. 17 is a schematic diagram of an interface for selecting a dynamic range gear according to an embodiment of the present application;
Fig. 18 is a schematic diagram of another interface for selecting a dynamic range gear according to an embodiment of the present application;
Fig. 19 is a schematic diagram of an interface for generating candidate images according to an embodiment of the present application;
Fig. 20 is a schematic diagram of the photo-shooting part of an image processing method in high-experience mode according to an embodiment of the present application;
Fig. 21 is a schematic diagram of the photo-editing part of an image processing method in high-experience mode according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the present application, unless otherwise indicated, "and/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. Likewise, unless otherwise indicated, "a plurality of" means two or more. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural. In addition, to describe the technical solutions clearly, the words "first", "second", and so on are used to distinguish identical or similar items with substantially the same function and effect. Those skilled in the art will appreciate that these words do not limit quantity or execution order, and do not imply that the items necessarily differ. Meanwhile, words such as "exemplary" or "for example" indicate an example, illustration, or explanation; any embodiment or design described as "exemplary" or "for example" should not be construed as preferred over or more advantageous than other embodiments or designs. Rather, such words are used to present the related concepts in a concrete, readily understood fashion.
In the process of shooting the current scene with an electronic device, the acquired image frame cannot completely present the real scene; that is, some scene details cannot be displayed in the image frame, so the degree to which the image frame restores detail is low, affecting the user's visual experience. It can be understood that an image frame includes a plurality of pixels, and each pixel has 256 brightness levels; that is, the brightness range of the image frame is [0, 255]. Accordingly, the electronic device uses one byte, i.e., 8 bits, of space to store the pixel value corresponding to a pixel point in the image frame, where a byte is a unit of measure of storage capacity, used in some computer programming languages to represent data types and characters.
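The 256-level limit can be made concrete with a small sketch (the brightness values are hypothetical, not from this application): once a scene brightness exceeds the 8-bit range, it is clipped, so two very different sky brightnesses collapse into the same white pixel and their distinguishing detail is lost.

```python
import numpy as np

# Hypothetical linear scene brightness values: dark foliage, a mid-tone
# wall, and two very different bright sky patches.
scene = np.array([10.0, 200.0, 3000.0, 9000.0])

# An 8-bit pixel stores an integer in [0, 255]; anything brighter clips.
pixel = np.clip(np.round(scene), 0, 255).astype(np.uint8)

print(pixel.tolist())  # the two sky patches become the same white pixel
```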
In some embodiments, for some shooting scenes (such as scenes with sufficient light), the brightness across the picture may differ greatly, so the captured image tends to be too bright or too dark; details in the bright or dark areas are then lost, affecting image quality.
In an example, for a brighter region in a photographed scene, the image frame obtained by the electronic device may be displayed as white due to overexposure; that is, the image frame cannot show the details of the real scene. For example, referring to fig. 1, area A in the image frame is the sky photographed by the mobile phone. The sky in the real scene is blue, yet the sky captured by the mobile phone is white; that is, the mobile phone cannot restore the details of the sky, so the image quality is reduced.
In another example, for a darker area in a photographed scene, the image frame obtained by the electronic device may be displayed as black due to lack of exposure (or underexposure); that is, the image frame cannot show the details of the real scene. For example, referring again to fig. 1, area B in the image frame is a tree photographed by the mobile phone. The tree in the real scene is green, yet the tree captured by the mobile phone is black; that is, the mobile phone cannot restore the details of the tree, so the image quality is reduced.
Thus, to improve image quality, the electronic device may capture an image using a high dynamic range imaging (HDR) function to obtain an HDR image. HDR is a processing technology for improving the brightness and contrast of an image; using the high-speed operation of a computer, it can generate an image with a better dynamic range. That is, the technology provides more dynamic range and image detail and better reflects the visual effect of the real environment. The dynamic range refers to the ratio between the maximum and minimum of image brightness and defines how wide a span of tonal detail the camera (image sensor) can capture. It will be appreciated that the greater the dynamic range, the more scene detail the image can display and the more realistic the visual effect.
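As a quick illustrative calculation (the luminance limits are assumed, not stated in this application), the dynamic range defined above is simply the ratio between the brightest and darkest recordable luminance, and is often quoted in photographic stops, i.e., powers of two:

```python
import math

# Hypothetical sensor limits in arbitrary linear units: the darkest
# distinguishable luminance and the brightest unclipped luminance.
l_min, l_max = 0.5, 2048.0

ratio = l_max / l_min     # dynamic range as a plain ratio
stops = math.log2(ratio)  # the same range in photographic stops
```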
For example, as shown in fig. 2, the user stands at the foot of a residential building with the camera of the mobile phone facing the sky; that is, the mobile phone may display a preview image including the sky, a building, and a tree. Upon detecting the user's touch operation on the "more" control, the mobile phone responds by displaying function controls, which may include a slow-motion control, a time-lapse control, an "HDR" control, and the like. Then, upon detecting a touch operation on the "HDR" control, the mobile phone may start the HDR function in response and, with the HDR function started, display the preview image. Then, upon detecting a touch operation on the shooting control 110, the mobile phone may shoot the scene with the HDR function in response to generate image C. It can be seen that image C presents more scene detail than the image shown in fig. 1, making the visual effect more realistic.
It should be noted that the electronic device uses a multi-byte floating-point number to store each pixel value in the HDR image. Illustratively, a single-precision floating-point number occupies 4 bytes, and a double-precision floating-point number occupies 8 bytes.
It can be understood that the luminance values of an image displayed by the electronic device lie between 0 and 255; that is, each pixel point in the displayed image has at most 256 luminance levels. The HDR file generated after the HDR function is turned on, however, may have far more than 256 levels; for example, its luminance values may lie between 0 and 65535, i.e., 65536 luminance levels. Since the image the mobile phone obtains from the HDR file still has only 256 luminance levels, details whose luminance values do not fall between 0 and 255 are not displayed; that is, some content in the HDR file still cannot be shown. An image obtained by linearly decoding the file captured with the HDR function turned on may therefore have some areas that are too bright and others that are too dark. To reduce such over-bright or over-dark areas, the electronic device may process the HDR image through dynamic range compression (DRC) or tone mapping to obtain a low dynamic range (LDR) image.
Dynamic range compression can reduce or increase the brightness of the image so that the dynamic range of image brightness is reduced or compressed, allowing the detail information of the original image to be largely preserved when the HDR image is displayed, i.e., better reflecting the brightness of the real scene. Tone mapping maps colors from an original tone to a target tone so as to restore the brightness of the real scene as far as possible, where the original tone corresponds to the HDR image and the target tone corresponds to the LDR image.
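A minimal global tone-mapping sketch of this HDR-to-LDR step (a normalized variant of the Reinhard curve L/(1+L) is used here as a stand-in; this application does not specify the operator): bright values are compressed more strongly than dark ones, so both ends of the range keep some detail in the 8-bit output.

```python
import numpy as np

def tone_map(hdr, white=65535.0):
    """Map linear 16-bit HDR pixel values into the displayable 8-bit
    range with a normalized Reinhard-style curve 2L/(1+L)."""
    l = hdr.astype(np.float64) / white   # normalize to [0, 1]
    mapped = 2.0 * l / (1.0 + l)         # compresses highlights more than shadows
    return np.clip(np.round(mapped * 255.0), 0, 255).astype(np.uint8)

# A quarter-range HDR value lands at 102 rather than the 64 a linear
# map would give, preserving relatively more shadow detail.
ldr = tone_map(np.array([0.0, 16384.0, 65535.0]))
```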
However, in the process of processing the HDR image, the electronic device may still lose some scene details (such as those in over-bright areas), affecting how faithfully the real scene is restored and lowering the user's visual experience. For example, referring to fig. 2 again, the C area in image C is still white; that is, the scene details in the C area can be neither displayed nor restored, which reduces the visual effect of the shooting scene and affects the user's visual experience.
Therefore, in order to improve the user's visual experience, an embodiment of the present application provides an image processing method. In the method, in response to a user's editing operation on a first image, the electronic device acquires the HDR file corresponding to the first image and displays a parameter adjustment control for the HDR file. Then, upon receiving the user's adjustment operation on the parameter adjustment control, the electronic device determines the adjusted parameter corresponding to the adjustment operation. The electronic device may then display a second image, where the second image is an image generated from the HDR file based on the adjusted parameter.
In this embodiment of the application, the second image is generated according to the adjusted parameters from the HDR file corresponding to the first image, not from a JPG image generated by the electronic device, so the loss of a great amount of original detail information can be avoided, the restoration of scene details can be improved to the greatest extent, and the visual effect is more vivid. In addition, the electronic device can display the parameter adjustment control for the HDR file, so the user can adjust the first image as needed; the adjusted image can thus match the user's preference, and scene details of bright or dark areas can be displayed in a targeted manner, improving the user's visual experience.
In some examples, the electronic device in the embodiments of the present application may be a device including a camera, such as a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device; the specific form of the electronic device is not particularly limited.
By way of example, fig. 3 shows a schematic structural diagram of an electronic device 200. As shown in fig. 3, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (USB) interface 230, a charge management module 211, a power management module 212, a battery 213, an antenna 1, an antenna 2, a mobile communication module 240, a wireless communication module 250, an audio module 270, a sensor module 280, keys 290, a motor 291, an indicator 292, cameras 293 (1 to N), a display screen 294, subscriber identity module (SIM) card interfaces 295 (1 to N), and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 200. In other embodiments of the application, electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be separated, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units. For example, processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 200, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative, and does not limit the structure of the electronic device 200. In other embodiments of the present application, the electronic device 200 may also employ different interfacing manners, or a combination of interfacing manners, as in the above embodiments.
The charge management module 211 is configured to receive a charge input from a charger. The charging management module 211 may also supply power to the electronic device through the power management module 212 while charging the battery 213.
The wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 240, the wireless communication module 250, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 240 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200. The modem processor may include a modulator and a demodulator.
The wireless communication module 250 may provide solutions for wireless communication applied to the electronic device 200, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
The electronic device 200 implements display functions through a GPU, a display screen 294, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
A display screen (or screen) 294 is used to display images, videos, and the like. The display screen 294 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 200 may include 1 or N display screens 294, N being a positive integer greater than 1.
The electronic device 200 may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like.
The ISP is used to process the data fed back by the camera 293. For example, when an electronic device photographs, the shutter is opened, light is transmitted to a camera photosensitive element (or referred to as an image sensor) through the lens, an optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to an ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. ISP can also perform algorithm optimization on noise and brightness of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 293. In some embodiments, camera 293 includes a shutter. The shutter is a device in the camera for controlling the time at which light irradiates the photosensitive element.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or similar format. In some embodiments, the electronic device 200 may include 1 or N cameras 293, N being a positive integer greater than 1.
In some embodiments, camera 293 may include a lens, which is an optical component for generating images.
Illustratively, the N cameras 293 may include: one or more front cameras and one or more rear cameras. For example, referring to fig. 4, the electronic device 200 is a mobile phone. The interface (a) in fig. 4 shows a front camera, such as front camera 20. Three rear cameras, such as rear cameras 21, 22, and 23, are shown on the interface (b) in fig. 4. Of course, the number of cameras in the mobile phone includes, but is not limited to, the number described in the above embodiment.
Wherein, the N cameras 293 may include one or more of the following cameras: the camera comprises a main camera, a long-focus camera, a wide-angle camera, a super-wide-angle camera, a micro-distance camera, a fish-eye camera, an infrared camera, a depth camera and a black-and-white camera.
In this embodiment, the rear camera is a camera having an HDR function. Specifically, under the condition that the electronic device opens the HDR function, if the photographing control is clicked by a user, the rear camera of the electronic device can photograph images with different exposure. And then, the electronic equipment can perform HDR fusion on the images with different exposure values to obtain an HDR file.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 200 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 200 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 200 may implement audio functions through an audio module 270, an application processor, and the like. Such as music playing, recording, etc. The audio module 270 may include, among other things, a speaker, a receiver, a microphone, and a headset interface.
Keys 290 include a power on key, a volume key, etc. The indicator 292 may be an indicator light.
The sensor module 280 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The image processing method provided by the embodiments of the present application can be used in a scene in which an electronic device shoots an image, for example, through its front camera or rear camera. Taking a mobile phone as an example of the electronic device, the image processing method is described in two embodiments. Embodiment 1 describes the generation and editing of the first image in a high-precision mode, which can restore scene details to the greatest extent. Embodiment 2 describes the generation and editing of the first image in a high-experience mode, which can best match the user's tonal style so that the adjusted image suits the user's preference.
Example 1
The embodiment of the application provides an image processing method. In this embodiment, in response to a shooting operation of a user, a mobile phone acquires a plurality of images with different exposure amounts, and fuses the plurality of images with different exposure amounts to obtain an HDR file. And then, the mobile phone can generate an image which accords with the preference of the user, namely a second image according to the HDR file and the adjusted parameters corresponding to the parameter adjustment control. Specifically, as shown in fig. 5, the image processing method may include S501 to S505.
S501, responding to shooting operation of a user, and acquiring a plurality of images with different exposure amounts by the mobile phone.
In one case, the shooting operation may be the user clicking the shooting control in a photographing mode with the HDR function enabled. For example, referring to fig. 6, if the user clicks the shooting control 110 in fig. 6, indicating that the user wants to shoot the scene, i.e., the user triggers the shooting operation, the mobile phone may use the HDR function to shoot the scene and obtain multiple images with different exposure amounts. In another case, the shooting operation may be the user clicking the recording control in a video recording mode with the HDR function enabled.
The exposure value (EV) represents the exposure level of an image. If the exposure value is greater than 0, the overall brightness of the image is high; that is, the greater the exposure value, the brighter the image. If the exposure value is less than 0, the overall brightness is low; that is, the smaller the exposure value, the darker the image. If the exposure value is 0, the overall brightness is normal; that is, the exposure value corresponding to a normally exposed image acquired by the mobile phone in photographing mode is 0.
In some embodiments, the plurality of images with different exposure amounts may include at least one image whose EV value is 0. It can be understood that an EV0 image is the image collected by the mobile phone under normal conditions, i.e., normally exposed, and the mobile phone is subject to noise interference during acquisition; therefore, the more EV0 images there are, the better the denoising effect and the clearer the image obtained by the mobile phone. Here, EV0 is a relative value; it does not mean that the exposure amount is 0. For example, exposure amount = exposure time × sensitivity (ISO). Assuming the normally exposed image is shot with ISO 200 and an exposure time of 50 ms, EV0 actually corresponds to an exposure amount equal to the product of 200 and 50 ms.
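Following the relation above (exposure amount = exposure time × ISO), the relative EV steps used in this embodiment can be sketched as powers of two around the EV0 reference. This is a simplified model that ignores aperture; the ISO 200 / 50 ms figures are taken from the example.

```python
def exposure_amount(ev, iso=200, exposure_time_ms=50):
    """Exposure amount at a relative EV: each EV step doubles or halves
    the EV0 reference of iso * exposure_time_ms (aperture ignored)."""
    return iso * exposure_time_ms * (2.0 ** ev)

ev0 = exposure_amount(0)    # the EV0 reference: 200 * 50
ev2 = exposure_amount(2)    # four times brighter than EV0
evm4 = exposure_amount(-4)  # sixteen times darker than EV0
```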
In the present embodiment, the images with different exposure amounts may include an image of EV-8, an image of EV-6, an image of EV-4, an image of EV-2, an image of EV0, an image of EV2, an image of EV4, an image of EV6, and an image of EV 8. In other embodiments, the images of different exposure may also include images of other exposure, such as an EV-5 image, an EV5 image, and the like.
In some embodiments, a large exposure amount (such as EV8) may make the obtained image too bright, a small exposure amount (such as EV-8) may make it too dark, and over-bright and over-dark images lose more scene detail. Therefore, to avoid the images occupying too much storage space on the mobile phone, the plurality of images with different exposure amounts may include only the EV-4, EV-2, EV0, EV2, and EV4 images; this avoids wasting storage resources to the greatest extent and improves the utilization of storage resources.
The images with different exposure amounts are original image files (raw), that is, files containing data acquired by the image sensor that have not been processed or edited by the mobile phone. It can be understood that the storage space occupied by raw files from different image sensors differs; for example, a raw file may use 10, 12, or 14 bits per pixel, without particular limitation.
For example, the plurality of images with different exposure amounts may be acquired by a front camera or a rear camera of the mobile phone. The number of cameras used by the cell phone to capture images may be one or more.
It will be appreciated that a user may shoot different scenes using a camera of the mobile phone. Taking an outdoor shooting scene as an example, it may be a scene containing sky and woods, or a scene containing sky and buildings (such as office buildings), and so on. Taking an indoor shooting scene as an example, it may be a scene including a lighting device that is turned on (such as a desk lamp), that is, any scene in which a light source exists; this is not specifically limited.
S502, the mobile phone fuses the images with different exposure amounts according to the image fusion model to obtain a high dynamic range imaging file.
In some embodiments, after acquiring the multiple images with different exposure amounts, the mobile phone may input them into an image fusion model to obtain a fused original image file (or HDR file). The image fusion model is an HDR network (HDRNet), an image enhancement algorithm based on a deep neural network for improving the dynamic range of an image. That is, the model is obtained by training a deep neural network to learn the mapping from an LDR file to an HDR file, thereby enhancing the LDR file. The image fusion model may adopt a U-Net-based network structure to improve the enhancement effect and fusion efficiency.
Specifically, after obtaining the plurality of images with different exposure amounts, the mobile phone can preprocess each image to obtain processed images, where preprocessing refers to brightness alignment and/or preliminary registration. Brightness alignment aligns the brightness values of the images with different exposure amounts so that corresponding pixel points in the aligned images have the same brightness. Preliminary registration corrects images whose picture has shifted due to the user's hand shake or the like, so that the registered images coincide with the picture of the EV0 image.
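The brightness-alignment step can be sketched as follows (assuming a linear sensor response; this application does not give the actual algorithm): dividing each bracketed frame by its relative exposure factor brings all frames onto the EV0 brightness scale before fusion.

```python
import numpy as np

def align_brightness(frame, ev):
    """Scale a linear raw frame captured at a relative EV back onto the
    EV0 brightness scale (assumes a linear sensor response)."""
    return frame.astype(np.float64) / (2.0 ** ev)

# A scene point recorded as 100 in the EV0 frame reads about 400 in the
# EV2 frame; alignment maps it back to 100 so the frames are comparable.
aligned = align_brightness(np.array([400.0]), 2)
```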
Then, the mobile phone can input the processed images into the image fusion model to obtain the HDR file. The HDR file is a high-bit-depth file; for example, it may be a 16-bit or a 32-bit file, without particular limitation. It can be appreciated that if the HDR file is 16 bits, its maximum pixel value is 65535.
In this embodiment, the mobile phone fuses the multiple images with different exposure amounts through the image fusion model, so that the obtained first image can keep scene details to the greatest extent, excessive detail loss is avoided, and the visual effect of the image is improved.
It should be noted that, in the process of fusing the multiple images with different exposure amounts, the image fusion model can perform registration error correction on them, supplementing bright-area information from the short-frame images and dark-area information from the long-frame images. That is, registration error correction alleviates the difficulty of registering short-frame and long-frame images and lays a foundation for subsequently improving the restoration of scene details. A short-frame image is captured with the mobile phone camera's exposure below 0 EV, such as -2 EV or -4 EV; a long-frame image is captured with the exposure above 0 EV, such as 2 EV or 4 EV.
In some embodiments, to facilitate directly invoking the HDR file later, the mobile phone may save the HDR file after obtaining it; that is, the first image carries the HDR file. Then, upon receiving the user's editing operation on the first image, which is generated based on the HDR file, the mobile phone can directly call the HDR file. This improves image adjustment efficiency, saves adjustment time, and improves the user experience.
In one implementation, the mobile phone may fuse the multiple images with different exposure amounts without using the image fusion model; that is, other methods may be used for image fusion. For example, the mobile phone may fuse the images using a pixel weighted average method to obtain the HDR file, where the method performs a weighted average of the gray values of corresponding pixels across the images to realize the fusion.
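A toy version of such pixel-weighted fusion (the weighting function is an illustrative choice, not specified in this application): each pixel is weighted by how well-exposed it is in its source frame, so dark-area detail comes mostly from long frames and bright-area detail from short frames.

```python
import numpy as np

def fuse_weighted(frames):
    """Fuse brightness-aligned 8-bit-range frames, weighting each pixel
    by a 'well-exposedness' score that peaks at mid-gray (128)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    weights = np.exp(-((stack - 128.0) ** 2) / (2 * 64.0 ** 2)) + 1e-6
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# A clipped highlight (255 in the long frame) is pulled toward the short
# frame's better-exposed value of 140.
fused = fuse_weighted([np.array([255.0]), np.array([140.0])])
```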
S503, the mobile phone generates a first image, wherein the first image is an image generated by the high dynamic range imaging file based on preset parameters.
Specifically, after obtaining the HDR file, the mobile phone may generate a visualized first image according to the preset parameters and the HDR file. The first image is an image obtained by shooting by adopting a front camera or a rear camera after the HDR function of the mobile phone is started. The first image may be an image taken by the mobile phone in a photographing mode, or may be an image taken by the mobile phone in a video mode. It can be understood that the image shot in the video mode may be an image in a video file obtained after the video shooting is completed, or may be an image acquired by a mobile phone camera in the video shooting process, which is not particularly limited.
The preset parameters are default parameters preconfigured on the mobile phone. In some embodiments, the preset parameters may be obtained from the mapping information of a tone mapping curve (tone mapping). For example, the mobile phone may configure the default parameters by configuring the tone mapping curve, or it may extract mapping information from tone mapping curves previously adjusted by the user and learn the parameters from them.
For example, in the relationship diagram shown in fig. 7, curve F is the tone mapping curve before adjustment according to the preset parameters, and curve E is the tone mapping curve after adjustment according to the preset parameters. The tone mapping curve is used to characterize the mapping relationship between an input pixel value and an output pixel value (also called a mapped pixel value). The input pixel value is the pixel value currently corresponding to the HDR file, that is, before the mobile phone adjusts the HDR file; the output pixel value is the pixel value corresponding to the first image, that is, after the mobile phone adjusts the HDR file. Note that the HDR file corresponding to the tone mapping curve is a 16-bit HDR file, so its pixel value range is [0, 65535], while the first image is an 8-bit image file with a pixel value range of [0, 255].
It should be noted that, for the same input pixel value, a larger output pixel value makes the adjusted image brighter, and a smaller output pixel value makes it darker. For the same output pixel value, small input pixel values correspond to adjustments of the dark areas of the HDR file (such as a forest), while large input pixel values correspond to adjustments of the bright areas (such as the sky). It can be understood that the mobile phone can brighten the dark areas (such as underexposed areas) of the HDR file to increase dark-area detail, that is, improve the degree of restoration of dark-area details, and can dim the bright areas (such as overexposed areas) to increase bright-area detail, that is, improve the degree of restoration of bright-area details.
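The mapping from the 16-bit HDR range to the 8-bit output range can be sketched as a piecewise-linear tone curve. The control points below are hypothetical, chosen only to illustrate lifting dark areas while compressing bright areas; the application's actual curves E and F are not specified numerically:

```python
import numpy as np

def apply_tone_curve(hdr, curve_x, curve_y):
    """Map a 16-bit HDR array ([0, 65535]) to an 8-bit image ([0, 255])
    by linearly interpolating a tone mapping curve given as control
    points (curve_x in input units, curve_y in output units)."""
    mapped = np.interp(hdr.astype(np.float64), curve_x, curve_y)
    return np.clip(np.round(mapped), 0, 255).astype(np.uint8)

# hypothetical curve: dark inputs get a generous share of the output
# range (lifting dark-area detail), bright inputs are compressed
curve_x = [0, 4096, 16384, 65535]
curve_y = [0,   64,   160,   255]
hdr = np.array([[0, 4096], [16384, 65535]], dtype=np.uint16)
img8 = apply_tone_curve(hdr, curve_x, curve_y)
```

Raising a control point's output value brightens the pixels around that input level; lowering it dims them, matching the behavior described above.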
In some embodiments, after obtaining the first image, the mobile phone may save the first image to its gallery application (APP) so that the user can conveniently view it.
S504, in response to an editing operation by the user on the first image, the mobile phone displays a parameter adjustment control for the high dynamic range imaging file.
Specifically, after the first image is obtained, the mobile phone may display the identifier corresponding to the first image. Then, when a click operation by the user on the identifier is detected, the mobile phone displays the first image according to the click operation so that the user can browse it. Then, when a click operation by the user on the editing control for the first image is detected, the mobile phone displays the editing interface for the first image, that is, displays the parameter adjustment control corresponding to the HDR file according to the click operation, for the user to adjust. The parameter adjustment control may be, for example, the tone mapping curve described above.
In some embodiments, the mobile phone may display the first image at the same time as the parameter adjustment control. In this way, the user can intuitively browse the adjusted first image without further operations, which simplifies the user's operation flow, improves the adjustment efficiency, and further improves the user experience.
For example, as shown in fig. 8, interface (a) is the mobile phone interface when the first image is displayed in the gallery APP. When a click operation by the user on the editing control in interface (a) is detected, the mobile phone can enter the editing interface of the first image, that is, display interface (b) in fig. 8. Then, when click operations by the user on the adjustment control and the dynamic range control in interface (b) are detected, the mobile phone can enter the dynamic range adjustment interface of the first image, that is, display interface (c) in fig. 8, where interface (c) includes the first image and the parameter adjustment control. Then, when an adjustment operation by the user on the parameter adjustment control in interface (c) is detected, the mobile phone can display the candidate image after parameter adjustment according to the adjusted parameters corresponding to the adjustment operation.
It can be understood that, if the editing operation is the first one after the mobile phone generates the first image, the parameter adjustment control displayed in the mobile phone interface corresponds to the preset parameters; if it is the second or a later editing operation, the control displayed corresponds to the previous adjustment.
S505, when receiving an adjustment operation by the user on the parameter adjustment control, the mobile phone generates a second image from the HDR file according to the adjusted parameters corresponding to the adjustment operation.
Specifically, after the parameter adjustment control corresponding to the preset parameters is displayed, the user may adjust it. The mobile phone can then generate adjusted parameters according to the user's adjustment operation, and generate a second image from the HDR file according to the adjusted parameters. The second image is an image generated from the high dynamic range imaging file based on the adjusted parameters.
In some embodiments, after displaying the parameter adjustment control, the mobile phone can detect in real time whether the user adjusts the tone mapping curve. If such an adjustment is detected, the mobile phone can generate adjusted parameters according to the user's adjustment operation and display the corresponding image for the user to browse. If no adjustment is detected, the mobile phone can wait for the user to perform one.
In other embodiments, after the mobile phone displays the parameter adjustment control, receiving an adjustment operation on the control indicates that the user has triggered it, so the mobile phone can generate the adjusted parameters according to the user's adjustment operation. That is, the mobile phone does not need to detect in real time whether the user adjusts the tone mapping curve, which reduces unnecessary power consumption and improves the utilization of detection resources.
In one implementation, the mobile phone may generate the corresponding adjusted parameters according to the user's adjustment operation on the basis of the parameter adjustment control corresponding to the preset parameters, that is, on the basis of the tone mapping curve E shown in fig. 7.
In an example, as shown in fig. 9, if the user drags the position L1 on the tone mapping curve down to L2, this indicates that the user wants to dim the first image, so the mobile phone may dim the first image to obtain the dimmed image, that is, the second image. It will be appreciated that, compared with the first image, the dimmed image gains bright-area detail features but loses dark-area detail features; that is, the first image includes detail features not present in the second image (e.g. dark-area detail features) and/or the second image includes detail features not present in the first image (e.g. bright-area detail features).
In another example, as shown in fig. 10, if the user drags the position P2 on the tone mapping curve up to P1, this indicates that the user wants to brighten the first image, so the mobile phone may brighten the first image to obtain the brightened image, that is, the second image. It will be appreciated that, compared with the first image, the brightened image gains dark-area detail features but loses bright-area detail features; that is, the first image includes detail features not present in the second image (e.g. bright-area detail features) and/or the second image includes detail features not present in the first image (e.g. dark-area detail features).
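Dragging a point on the curve and re-rendering the image can be sketched as follows. This is a toy example under assumed control points; the point index and values are illustrative, not the actual L1/L2 or P1/P2 positions of figs. 9 and 10:

```python
import numpy as np

def adjust_control_point(curve_x, curve_y, index, new_y):
    """Return a new tone curve where the control point at `index`
    has been dragged to output value `new_y` (e.g. pulling a point
    down to dim the image). Output values are clamped to [0, 255]."""
    adjusted = list(curve_y)
    adjusted[index] = int(np.clip(new_y, 0, 255))
    return curve_x, adjusted

def render(hdr, curve_x, curve_y):
    """Regenerate the 8-bit image from the HDR data with the curve."""
    mapped = np.interp(hdr.astype(np.float64), curve_x, curve_y)
    return np.clip(np.round(mapped), 0, 255).astype(np.uint8)

curve_x = [0, 16384, 65535]
curve_y = [0,   160,   255]              # stands in for curve E
hdr = np.array([[16384]], dtype=np.uint16)

before = render(hdr, curve_x, curve_y)   # mid-tone rendered at 160
cx, cy = adjust_control_point(curve_x, curve_y, 1, 96)  # drag midpoint down
after = render(hdr, cx, cy)              # the same pixel is now dimmer
```

Because the curve only changes how the saved HDR data is mapped, every drag can be re-rendered from the same HDR file without re-shooting, which is the basis of the real-time preview described below.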
In another implementation, the mobile phone may also generate the corresponding adjusted parameters according to the user's adjustment operation on the basis of the initial parameter adjustment control, that is, on the basis of the tone mapping curve F in fig. 7. The initial parameter adjustment control is the tone mapping curve that has not been adjusted according to the preset parameters.
In an example, as shown in fig. 11, if the user drags the position Q1 on the tone mapping curve down to Q2, this indicates that the user wants to dim the initial image corresponding to the HDR file, so the mobile phone may dim the initial image to obtain the dimmed image, that is, the second image. The initial image is an image generated directly from the HDR file, that is, without adjustment according to the preset parameters. It will be appreciated that, compared with the initial image, the dimmed image gains bright-area detail features but loses dark-area detail features; that is, the initial image includes detail features not present in the second image (e.g. dark-area detail features) and/or the second image includes detail features not present in the initial image (e.g. bright-area detail features).
In another example, as shown in fig. 12, if the user drags the position X2 on the tone mapping curve up to X1, this indicates that the user wants to brighten the initial image corresponding to the HDR file, so the mobile phone may brighten the initial image to obtain the brightened image, that is, the second image. The initial image is an image generated directly from the HDR file, that is, without adjustment according to the preset parameters. It will be appreciated that, compared with the initial image, the brightened image gains dark-area detail features but loses bright-area detail features; that is, the initial image includes detail features not present in the second image (e.g. bright-area detail features) and/or the second image includes detail features not present in the initial image (e.g. dark-area detail features).
It can be understood that, while the user adjusts the tone mapping curve, the mobile phone can synchronously display the adjusted first image according to the user's adjustment operation; that is, the user can browse the adjusted first image in real time. This makes it easy for the user to judge whether the currently displayed image is the desired one: if not, the user can continue adjusting the tone mapping curve until it is; if so, the user can directly click the save control. Then, in response to the user's click on the save control, the mobile phone can determine the currently displayed image as the second image and display it. In this way, the user can adjust the first image according to personal preference, that is, the adjusted target image conforms to the user's style, improving the suitability of the target image and the user experience. In addition, while adjusting the tone mapping curve, the user can browse more details of the bright and dark areas and edit the image in a targeted way according to personal needs, improving the adjustment efficiency.
For example, as shown in fig. 13, if the user clicks the save control K, the image currently displayed by the mobile phone is the desired image, that is, it conforms to the user's style. The mobile phone can therefore save the image for display in the gallery APP, facilitating subsequent browsing and improving the user experience.
It can be understood that processes S501 to S503 are the processes by which the mobile phone generates the corresponding image after the user clicks the shooting control, that is, the photo-capturing part of the image processing method. Processes S504 and S505 are the processes by which the user adjusts the dynamic range, that is, the photo-editing part of the image processing method.
In some embodiments, for the specific process of the photo-capturing part of the above image processing method, refer to fig. 14. Specifically, after acquiring multiple images with different exposure amounts, the mobile phone may preprocess them to obtain multiple processed images. The mobile phone can then input the processed images into the image fusion model to obtain a high dynamic range imaging file, and then generate a first image from the high dynamic range imaging file according to the default parameters (also called preset parameters).
In other embodiments, for the specific process of the photo-editing part of the above image processing method, refer to fig. 15. Specifically, the mobile phone displays the first image. Then, when an editing operation by the user on the first image is received, the mobile phone acquires the high dynamic range imaging file corresponding to the first image. Then, in response to an adjustment operation by the user on the parameter adjustment control, the mobile phone displays the parameter-adjusted image in real time based on the adjustment operation.
Example two
The embodiment of the application also provides an image processing method. In this embodiment, in response to a shooting operation by the user, the mobile phone acquires multiple images with different exposure amounts. The mobile phone can then select at least two images from them according to the ambient brightness of the shooting environment and fuse the selected images to obtain a first HDR file. The mobile phone can then, according to the dynamic range adjustment gear, select at least two corresponding images from the multiple images with different exposure amounts and fuse them to obtain a second HDR file. The mobile phone can then generate an image that conforms to the user's preference, that is, a second image, from the second HDR file and the adjusted parameters corresponding to the parameter adjustment control. Specifically, as shown in fig. 16, the image processing method may include S1201 to S1210.
S1201, in response to a photographing operation by a user, the mobile phone acquires a plurality of images of different exposure amounts, and the ambient brightness of the photographing environment.
Specifically, when the mobile phone receives the user's shooting operation, it can acquire images with different exposure amounts during shooting and detect the current shooting environment to obtain the ambient brightness. The ambient brightness refers to the intensity of light in the shooting environment.
In some embodiments, the above-mentioned ambient brightness may be detected by a light sensor in the mobile phone, and in particular, the mobile phone may determine the ambient brightness of the shooting environment according to the light data detected by the light sensor.
S1202, the mobile phone extracts, from the multiple images with different exposure amounts, at least two images matching the ambient brightness.
Specifically, after obtaining the ambient brightness of the shooting environment, the mobile phone can select at least two corresponding images from the multiple images with different exposure amounts according to the ambient brightness, which improves the speed of image fusion and provides convenience for subsequently generating the first image.
In some embodiments, the number of the at least two images matching the ambient brightness is fixed, that is, preset. In the present embodiment, the number of the at least two images is 3; in other embodiments it may be 2, 4, or another value, which is not particularly limited.
In other embodiments, the number of the at least two images may be determined according to the ambient brightness. For example, if the ambient brightness is in the normal range, that is, within the preset interval, the light in the current shooting environment is sufficient, so the mobile phone may select a smaller number of images from the multiple images with different exposure amounts, for example 2. Conversely, if the ambient brightness is in an abnormal range, that is, outside the preset interval, the current shooting environment is darker or brighter than normal, so the mobile phone may select a larger number of images, for example 4.
The exposure amounts of the at least two images are preset by the mobile phone according to the ambient brightness; that is, the first images obtained by the mobile phone under different ambient brightness levels are fused from different sets of exposure images. It will be appreciated that if the ambient brightness is higher, the shooting environment is brighter, so the mobile phone can select images with lower exposure amounts as the exposure images. For example, the at least two images may include one EV-2 image and two EV0 images; for another example, one EV-4 image, one EV-2 image, and one EV0 image. If the ambient brightness is lower, the shooting environment is darker, so the mobile phone can select images with higher exposure amounts as the exposure images. For example, the at least two images may include an EV-2 image, an EV0 image, and an EV2 image; for another example, an EV0 image, an EV2 image, and an EV4 image.
In one implementation, after obtaining the ambient brightness of the shooting environment, the mobile phone can determine whether it satisfies a preset condition. If it does, the current shooting environment is brighter, so the mobile phone can extract images corresponding to at least two first exposure amounts from the multiple images with different exposure amounts, that is, select the exposure images corresponding to higher ambient brightness; for example, the EV-2 image and two EV0 images. If the ambient brightness does not satisfy the preset condition, the current shooting environment is darker, so the mobile phone can extract images corresponding to at least two second exposure amounts, that is, select the exposure images corresponding to lower ambient brightness, where the maximum of the at least two second exposure amounts is greater than the maximum of the at least two first exposure amounts; for example, the EV-2 image, the EV0 image, and the EV2 image. The preset condition may be, for example, that the brightness value of the ambient brightness is greater than a preset brightness value.
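The brightness-based selection above can be sketched as a simple threshold rule. The threshold value, brightness units, and EV sets below are illustrative assumptions (the application only requires that the darker-scene set have a larger maximum exposure), and `extract_matching` is a hypothetical helper for pulling the matching frames out of the capture:

```python
def select_exposures(brightness, threshold=500.0):
    """Pick the EV set to fuse based on ambient brightness.
    Bright scenes favor lower exposures; dark scenes add a higher one."""
    if brightness > threshold:   # preset condition met: bright scene
        return [-2, 0, 0]        # e.g. one EV-2 frame and two EV0 frames
    return [-2, 0, 2]            # darker scene: max EV is larger (2 > 0)

def extract_matching(images_by_ev, evs):
    """Extract frames whose exposure matches the selected EV list,
    where images_by_ev maps EV -> list of captured frames."""
    picked, used = [], {}
    for ev in evs:
        idx = used.get(ev, 0)    # take the next unused frame at this EV
        picked.append(images_by_ev[ev][idx])
        used[ev] = idx + 1
    return picked

captured = {-2: ["f_ev-2"], 0: ["f_ev0_a", "f_ev0_b"], 2: ["f_ev2"]}
bright_set = extract_matching(captured, select_exposures(800.0))
dark_set = extract_matching(captured, select_exposures(100.0))
```

Only the selected subset is fed to the fusion model, which is what makes this path faster than fusing every captured frame.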
S1203, the mobile phone fuses the at least two images using the first image fusion model to obtain a first high dynamic range imaging file.
Specifically, after determining the at least two images matching the above ambient brightness, the mobile phone may input them into the first image fusion model to obtain a first HDR file, that is, a fused original image file.
In some embodiments, after determining the at least two images, the mobile phone may preprocess each image to obtain processed images, and then input the processed images into the first image fusion model to obtain the first HDR file. In this embodiment, the first HDR file is a 16-bit file, that is, its maximum pixel value is 65535.
In an example, the first image fusion model may be a single model adapted to all ambient brightness levels; that is, it can fuse images with higher exposure amounts as well as images with lower exposure amounts, giving it strong generalization capability and wide applicability.
In another example, the first image fusion model may include multiple first sub-image fusion models, each applicable to a different ambient brightness level, which improves the robustness of the first image fusion model. For example, for a scene with darker ambient brightness, the mobile phone may call, from the first image fusion model, the first sub-image fusion model suited to darker scenes; for a scene with brighter ambient brightness, it may call the first sub-image fusion model suited to brighter scenes.
In some embodiments, in order to facilitate subsequent direct invocation of the first HDR file, the mobile phone may first save the first HDR file after obtaining it. Then, when an editing operation by the user on the first image is received, the mobile phone can directly call the first HDR file, where the first image is generated based on the first HDR file. In this way, the image adjustment efficiency can be improved, the adjustment time can be saved, and the user experience can be improved.
S1204, the mobile phone generates a first image from the first high dynamic range imaging file according to preset parameters.
Specifically, after obtaining the first HDR file, the mobile phone may generate a visualized image, that is, the first image, from the first HDR file according to the preset parameters. It can be appreciated that the first image is an image generated from the first HDR file based on the preset parameters.
In some embodiments, after obtaining the first image, the mobile phone may save it in the gallery APP so that the user can conveniently view it.
It will be appreciated that, after the first image is obtained, the mobile phone may skip the following processes S1205 to S1210 and instead execute the above processes S504 and S505; this is not particularly limited.
S1205, in response to an editing operation by the user on the first image, the mobile phone displays a dynamic range adjustment control.
Specifically, after the first image is obtained, the mobile phone may display the identifier corresponding to the first image. Then, when a click operation by the user on the identifier is detected, the mobile phone displays the first image according to the click operation so that the user can browse it. Then, when a click operation by the user on the editing control for the first image is detected, the mobile phone displays the editing interface for the first image, that is, the dynamic range adjustment control, according to the click operation, so that the user can select the corresponding dynamic range according to personal preference, improving the user experience.
The dynamic range adjustment control includes multiple dynamic range gears, which may be preset on the mobile phone. In this embodiment, the dynamic range adjustment control may include 5 dynamic range gears. Specifically, the first gear is EV0, that is, it corresponds to all EV0 images acquired by the mobile phone. The second gear is EV-2 to EV2, that is, it corresponds to the EV-2 image, at least one EV0 image, and the EV2 image acquired by the mobile phone. The third gear is EV-4 to EV4, that is, it corresponds to the EV-4 image, the EV-2 image, at least one EV0 image, the EV2 image, and the EV4 image. The fourth gear is EV-6 to EV6, that is, it corresponds to the EV-6 image, the EV-4 image, the EV-2 image, at least one EV0 image, the EV2 image, the EV4 image, and the EV6 image. The fifth gear is EV-8 to EV8, that is, it corresponds to the EV-8 image, the EV-6 image, the EV-4 image, the EV-2 image, at least one EV0 image, the EV2 image, the EV4 image, the EV6 image, and the EV8 image.
It can be understood that the higher the dynamic range gear, the more images the mobile phone sets; for example, the number of images corresponding to the first gear is 2, and the number corresponding to the fifth gear is 10.
In other embodiments, the dynamic range adjustment control may include only 3 dynamic range gears: for example, EV0, EV-2 to EV2, and EV-4 to EV4; or EV-2 to EV2, EV-4 to EV4, and EV-6 to EV6. The control may also include 4 gears, 6 gears, and so on, which is not particularly limited.
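The five-gear layout above can be written down as a mapping from gear number to the EV values fused. EV0 appears twice per gear here because the text counts two EV0 frames in the first gear and ten frames total in the fifth; the frame counts for the middle gears are therefore an assumption consistent with those two endpoints:

```python
# Illustrative mapping from dynamic range gear to the EV values fused.
GEAR_EVS = {
    1: [0, 0],
    2: [-2, 0, 0, 2],
    3: [-4, -2, 0, 0, 2, 4],
    4: [-6, -4, -2, 0, 0, 2, 4, 6],
    5: [-8, -6, -4, -2, 0, 0, 2, 4, 6, 8],
}

def evs_for_gear(gear):
    """Return the exposure values the mobile phone would fuse for
    the selected dynamic range gear."""
    return GEAR_EVS[gear]
```

Each successive gear widens the bracket by one EV step on each side, so higher gears trade fusion speed for a wider recoverable dynamic range.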
In an example, as shown in fig. 17, interface (a) is the mobile phone interface when the first image is displayed in the gallery APP. When a click operation by the user on the editing control in interface (a) is detected, the mobile phone can enter the editing interface of the first image, that is, display interface (b) in fig. 17. Then, when click operations by the user on the adjustment control and the dynamic range control in interface (b) are detected, the mobile phone can enter the dynamic range adjustment interface of the first image, that is, display interface (c) in fig. 17, where interface (c) includes the first image and the dynamic range adjustment control, and the dynamic range adjustment control includes a first gear, a second gear, a third gear, a fourth gear, and a fifth gear.
In another example, as shown in fig. 18, interface (a) is the mobile phone interface when the first image is displayed in the gallery APP. When a click operation by the user on the editing control in interface (a) is detected, the mobile phone can enter the editing interface of the first image, that is, display interface (b) in fig. 18. Then, when click operations by the user on the adjustment control and the dynamic range control in interface (b) are detected, the mobile phone can display the dynamic range adjustment control on the basis of interface (b), that is, generate interface (c), where the dynamic range adjustment control may be displayed as a progress bar used to adjust the dynamic range gear, that is, the dynamic range of the first image. The user then drags the button T on the progress bar according to personal preference, so that the adjusted gear is the one that conforms to the user's preference.
It can be understood that, when generating the first HDR file, the mobile phone fuses only at least two of the multiple images with different exposure amounts. Although this improves the fusion speed, it limits the detail adjustment range of the first image, that is, the generated second image suffers some detail loss. To reduce this loss, after receiving the user's editing operation on the first image, the mobile phone can display the dynamic range adjustment control, so that it can perform image fusion again according to the dynamic range gear selected by the user, thereby reducing the probability of detail loss and improving the degree of restoration of the captured details.
S1206, when receiving a selection operation by the user on the dynamic range adjustment control, the mobile phone extracts, from the multiple images with different exposure amounts, at least two target images matching the adjusted dynamic range.
In some embodiments, after the dynamic range adjustment control is displayed, the user may select a dynamic range gear in it. The mobile phone can then determine the corresponding target gear according to the user's selection operation, and extract, from the multiple images with different exposure amounts, at least two target images matching the target gear.
In one example, referring again to fig. 17, interface (c) is the mobile phone interface when adjusting the dynamic range gear of the first image. If the user clicks any dynamic range gear in the dynamic range adjustment control (such as the third gear), this indicates that the user wants to fuse the at least two images whose exposure amounts correspond to the third gear. Therefore, when a click operation by the user on the "generate image" control is detected, the mobile phone can extract, from the multiple images with different exposure amounts, at least two target images matching the adjusted dynamic range gear according to the click operation; that is, the at least two target images may include the EV-4 image, the EV-2 image, all EV0 images among the multiple images, the EV2 image, and the EV4 image.
In another example, referring again to fig. 18, interface (c) is the mobile phone interface when adjusting the dynamic range gear of the first image. Specifically, in response to a drag operation by the user on the button T on the progress bar, the mobile phone moves the button T so that the adjusted dynamic range gear conforms to the user's preference. Then, when a click operation by the user on the "next" control is detected, the mobile phone can extract, from the multiple images with different exposure amounts, at least two target images matching the adjusted dynamic range gear according to the click operation; that is, the at least two target images may be the EV-2 image, all EV0 images among the multiple images, and the EV2 image. It will be appreciated that the further right the button T, the higher the dynamic range gear; the further left, the lower the gear.
S1207, the mobile phone fuses the at least two target images according to the second image fusion model to obtain a second high dynamic range imaging file.
Specifically, after determining the at least two target images matching the adjusted dynamic range gear, the mobile phone may input them into a second image fusion model to obtain a second HDR file, that is, a fused original image file.
In some embodiments, after determining the at least two target images, the mobile phone may preprocess each target image to obtain processed target images, and then input the processed target images into the second image fusion model to obtain the second HDR file. In this embodiment, the second HDR file is a 16-bit file, that is, the maximum pixel value corresponding to the second HDR file is 65535.
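The 16-bit fusion output can be illustrated with a toy stand-in for the second image fusion model. The real model is unspecified (and likely learned); this sketch uses simple well-exposedness weighting purely to show the 16-bit value range, and every name and weight here is an assumption.

```python
# Toy stand-in for the second image fusion model: exposure-weighted
# averaging of aligned single-channel frames into a 16-bit result
# (maximum pixel value 65535, as stated above).

MAX_16BIT = 65535

def well_exposed_weight(p: float) -> float:
    """Weight pixels near mid-gray (0.5 in normalized units) most heavily."""
    return max(1e-6, 1.0 - abs(p - 0.5) * 2.0)

def fuse_to_16bit(frames: list) -> list:
    """Fuse aligned frames (pixel values in [0, 1]) into 16-bit values."""
    n = len(frames[0])
    out = []
    for i in range(n):
        num = den = 0.0
        for frame in frames:
            w = well_exposed_weight(frame[i])
            num += w * frame[i]
            den += w
        out.append(round(num / den * MAX_16BIT))
    return out
```

A production implementation would operate on full-resolution multi-channel data and a trained network, but the output range (0 to 65535) is the point being illustrated.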
In one example, the second image fusion model may be a single model adapted to all dynamic range adjustment gears; that is, it can fuse at least two target images at a higher dynamic range gear or at a lower one. In other words, the second image fusion model has stronger generalization capability and wider applicability.
In another example, the second image fusion model may include a plurality of second sub-image fusion models, each applicable to a different dynamic range gear, which may improve the robustness of the second image fusion model.
In one implementation, the mobile phone may call a target sub-image fusion model from the second image fusion model according to the adjusted dynamic range gear, where the target sub-image fusion model is one of the plurality of second sub-image fusion models. The mobile phone may then fuse the at least two target images according to the target sub-image fusion model to obtain the second HDR file. For example, for the first dynamic range gear, the mobile phone may call the second sub-image fusion model corresponding to the first dynamic range gear; likewise, for the fifth dynamic range gear, it may call the second sub-image fusion model corresponding to the fifth dynamic range gear.
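The per-gear dispatch described above can be sketched as a lookup from gear index to sub-model. The class and method names are illustrative assumptions, not the patent's API; the sub-models are represented by placeholder callables.

```python
# Hypothetical sketch: the second image fusion model holds one sub-model
# per dynamic range gear and dispatches on the adjusted gear.

class SecondImageFusionModel:
    def __init__(self, sub_models: dict):
        # Maps gear index -> sub-model (any callable taking a frame list).
        self._sub_models = sub_models

    def fuse(self, frames: list, gear: int):
        """Call the target sub-image fusion model for the adjusted gear."""
        sub_model = self._sub_models.get(gear)
        if sub_model is None:
            raise KeyError(f"no sub-model registered for gear {gear}")
        return sub_model(frames)

# Placeholder sub-models standing in for gear-specific fusion networks.
model = SecondImageFusionModel({
    1: lambda frames: ("hdr-gear1", len(frames)),
    5: lambda frames: ("hdr-gear5", len(frames)),
})
```

Keeping one specialized sub-model per gear trades model size for robustness, consistent with the trade-off the description draws against a single generalized model.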
S1208, the mobile phone generates candidate images according to preset parameters on the basis of the second high dynamic range imaging file.
Specifically, after obtaining the second HDR file, the mobile phone may generate a visible candidate image from it according to a preset parameter. It is understood that the candidate image is an image generated from the second HDR file based on the preset parameter.
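Rendering a candidate image from the 16-bit HDR file with a preset parameter can be sketched with a simple gamma tone curve. The patent does not specify the tone mapping curve or its preset value, so both the curve shape and the default gamma below are assumptions for illustration.

```python
# Hypothetical sketch: map 16-bit HDR pixel values to an 8-bit candidate
# image using a preset gamma parameter as the tone curve.

MAX_16BIT = 65535
PRESET_GAMMA = 1.0 / 2.2  # assumed preset parameter (display gamma)

def generate_candidate(hdr_pixels: list, gamma: float = PRESET_GAMMA) -> list:
    """Quantize tone-mapped HDR values to the 8-bit display range."""
    out = []
    for v in hdr_pixels:
        normalized = v / MAX_16BIT
        out.append(round((normalized ** gamma) * 255))
    return out
```

Calling `generate_candidate` with the default parameter corresponds to the preset rendering; S1210 below re-renders with a user-adjusted parameter instead.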
In some embodiments, after obtaining the candidate image, the mobile phone may save the candidate image in the gallery APP of the mobile phone, so as to facilitate the user to view the candidate image.
S1209, the mobile phone displays the candidate image and/or the parameter adjustment control for the second high dynamic range imaging file.
Specifically, after the candidate image is obtained, the mobile phone may display the candidate image and/or a parameter adjustment control for the second HDR file.
In some embodiments, after the candidate image is obtained, the mobile phone may display only the candidate image, which not only improves the simplicity of the interface and the user's visual experience, but also reduces unnecessary resource consumption. Thereafter, if the user wants to further adjust the candidate image, the user may click the "edit" control again.
In other embodiments, after obtaining the candidate image, the mobile phone may display only the parameter adjustment control for the second HDR file, so that the user can operate it. This not only improves the simplicity of the interface and the user's visual experience, but also lets the user further adjust the candidate image so that the generated image matches the user's preference, improving the use experience.
In still other embodiments, after the candidate image is obtained, the mobile phone may display both the candidate image and the parameter adjustment control for the second HDR file, so that the adjusted candidate image is displayed in real time while the user operates the control. The user can thus intuitively browse the adjusted candidate image without additional operations, which simplifies the operation process and improves the use experience.
Illustratively, as shown in fig. 19, interface (a) is the mobile phone interface for adjusting the dynamic range gear of the first image. Interface (a) includes the first image M and a plurality of dynamic range gears: a first dynamic range gear (EV0), a second dynamic range gear (EV-2 to EV2), a third dynamic range gear (EV-4 to EV4), a fourth dynamic range gear (EV-6 to EV6), and a fifth dynamic range gear (EV-8 to EV8). Specifically, if a click on any dynamic range gear in interface (a) is detected and the user then clicks the "generate image" control, the mobile phone may display interface (b), which includes the candidate image N and the parameter adjustment control for the second HDR file, so that the user can operate the control.
S1210, under the condition that the adjustment operation of the user for the parameter adjustment control is received, the mobile phone generates a second image according to the adjusted parameter corresponding to the adjustment operation on the basis of the second high dynamic range imaging file.
Specifically, after the mobile phone displays the parameter adjustment control, the user can operate it. The mobile phone then derives an adjusted parameter from the user's adjustment operation and, on the basis of the second HDR file, generates an adjusted candidate image (also referred to as a second image) according to the adjusted parameter.
This step is similar to step S505 in fig. 5, and will not be described in detail here.
It can be understood that S1201-S1204 describe how the mobile phone generates the corresponding image after the user clicks the shooting control, that is, the photo-shooting part of the image processing method, while S1205-S1210 describe how the user adjusts the dynamic range, that is, the photo-editing part of the image processing method.
In some embodiments, for the specific process of the photo-shooting part of the above image processing method, refer to fig. 20. Specifically, after the mobile phone obtains multiple images with different exposure amounts, it may extract at least two corresponding images from them according to whether the ambient brightness satisfies a preset condition. The mobile phone then preprocesses the at least two images, inputs the processed images into the first image fusion model to obtain a first high dynamic range imaging file, and generates the first image according to default parameters on the basis of that file.
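The photo-shooting pipeline of fig. 20 can be sketched end to end. The brightness threshold, the EV sets chosen per branch, and the stage functions are all illustrative assumptions; the branch direction follows claim 5, where the darker scene uses exposures with a greater maximum value.

```python
# Hypothetical sketch of the shooting-photo pipeline: select frames by
# ambient brightness, preprocess, fuse with the first image fusion model,
# then render the first image with default parameters.

BRIGHTNESS_THRESHOLD = 100  # assumed preset brightness value

def select_by_brightness(frames: dict, brightness: float) -> list:
    """Pick frames keyed by EV offset according to ambient brightness."""
    # Bright scene: lower exposures protect highlights; dark scene:
    # higher exposures recover shadows (greater maximum EV).
    evs = (-4, -2, 0) if brightness > BRIGHTNESS_THRESHOLD else (0, 2, 4)
    return [frames[ev] for ev in evs if ev in frames]

def shoot_pipeline(frames, brightness, preprocess, fuse, render):
    """Run selection -> preprocessing -> fusion -> default rendering."""
    selected = select_by_brightness(frames, brightness)
    processed = [preprocess(f) for f in selected]
    first_hdr_file = fuse(processed)   # first image fusion model
    return render(first_hdr_file)      # default parameters
```

The stages are passed in as callables so the sketch stays agnostic to the actual preprocessing, fusion model, and tone mapping used on the device.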
In other embodiments, for the specific process of the photo-editing part of the above image processing method, refer to fig. 21. Specifically, the mobile phone displays the first image; upon receiving the user's editing operation on the first image, it displays the dynamic range adjustment control. In response to the user selecting any dynamic range gear in the control, the mobile phone extracts at least two images matching the selected dynamic range gear from the multiple images with different exposure amounts, preprocesses them, and inputs the processed images into the second image fusion model to obtain a second high dynamic range imaging file. Then, in response to the user's adjustment operation on the parameter adjustment control, the mobile phone displays the parameter-adjusted image in real time.
Note that the file format of the second image is the same as that of the first image; both are JPG images, and this format differs from that of the HDR files. In other words, the second image is generated from the HDR file according to the parameters of the tone mapping curve, not from the JPG image previously generated by the mobile phone. This avoids losing a great amount of original detail information, maximizes the restoration of scene details, and yields a more vivid visual effect.
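The benefit of re-rendering from the 16-bit HDR file rather than re-editing the 8-bit JPG can be shown numerically: shadow values that 8-bit quantization merges remain distinct in the 16-bit source. The tone curve (a simple linear gain with clipping) and its parameter are illustrative assumptions.

```python
# Hypothetical sketch: two nearby 16-bit shadow values survive a strong
# exposure boost when re-rendered from HDR data, but both quantize to the
# same 8-bit value in the default rendering, so boosting the JPG afterwards
# could not separate them again.

MAX_16BIT, MAX_8BIT = 65535, 255

def render(hdr_pixels: list, exposure_gain: float) -> list:
    """Apply an adjusted tone parameter to HDR data, then quantize to 8 bits."""
    out = []
    for v in hdr_pixels:
        scaled = min(1.0, v / MAX_16BIT * exposure_gain)
        out.append(round(scaled * MAX_8BIT))
    return out

deep_shadows = [50, 100]                       # two distinct 16-bit values
default_jpg = render(deep_shadows, 1.0)        # both collapse to 0 in 8 bits
boosted_from_hdr = render(deep_shadows, 128.0) # stay distinct: 25 vs 50
```

This is the sense in which generating the second image from the HDR file preserves original detail that the first-pass JPG has already discarded.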
In some embodiments, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image processing method described above.
In some embodiments, the application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the image processing method described above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional modules is illustrated. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An image processing method is characterized by being applied to electronic equipment; the method comprises the following steps:
Responding to the editing operation of a user on the first image, and extracting at least two target images corresponding to the dynamic range gear adjusted by the user from a plurality of images by the electronic equipment; the first image is an image generated according to preset parameters on the basis of a first HDR file, the first HDR file is obtained by fusing a plurality of images, and the plurality of images are images with different exposure amounts acquired by the electronic equipment;
the electronic equipment fuses the at least two target images to obtain a second HDR file, and displays a parameter adjustment control for the second HDR file; wherein the file format of the second HDR file is different from the file format of the first image;
Responding to the adjustment operation of the user for the parameter adjustment control, and determining an adjusted parameter corresponding to the adjustment operation by the electronic equipment;
The electronic device displays a second image; the second image is an image generated according to the adjusted parameters on the basis of the second HDR file, and the file format of the second image is the same as that of the first image.
2. The method of claim 1, wherein the second image includes detail features not available from the first image; and/or the first image includes detail features not found in the second image.
3. The method according to claim 1, wherein the method further comprises:
Under the condition that the electronic equipment starts an HDR function, responding to shooting operation of the user, and acquiring the plurality of images by the electronic equipment;
And the electronic equipment fuses at least two images in the plurality of images to obtain the first HDR file.
4. The method of claim 3, wherein the electronic device fusing at least two of the plurality of images to obtain the first HDR file comprises:
the electronic equipment acquires the ambient brightness of a shooting environment;
the electronic equipment extracts at least two images matched with the ambient brightness from the plurality of images with different exposure amounts;
And the electronic equipment fuses the at least two images according to a first image fusion model to obtain the first HDR file.
5. The method of claim 4, wherein the electronic device extracting at least two images matching the ambient brightness from the plurality of images of different exposure amounts comprises:
Under the condition that the brightness value of the ambient brightness is larger than a preset brightness value, the electronic equipment extracts images corresponding to at least two first exposure values from the plurality of images; or alternatively
Under the condition that the brightness value of the ambient brightness is smaller than or equal to the preset brightness value, the electronic equipment extracts images corresponding to at least two second exposure amounts from the plurality of images; wherein a maximum value of the at least two second exposure amounts is greater than a maximum value of the at least two first exposure amounts.
6. The method of claim 3, wherein the electronic device extracting at least two target images corresponding to the user-adjusted dynamic range gear from the plurality of images in response to the editing operation of the user on the first image, comprises:
responsive to an editing operation of the user for the first image, the electronic device displays a dynamic range adjustment control;
responding to the selection operation of the user for the dynamic range adjustment control, and extracting at least two target images matched with the adjusted dynamic range gear from the multiple images with different exposure amounts by the electronic equipment;
The electronic device fuses the at least two target images to obtain a second HDR file, including:
and the electronic equipment fuses the at least two target images according to a second image fusion model to obtain the second HDR file.
7. The method of claim 6, wherein the second image fusion model comprises a plurality of second sub-image fusion models; the electronic device fuses the at least two target images according to a second image fusion model to obtain the second HDR file, which comprises:
The electronic equipment invokes a target sub-image fusion model from the second image fusion model according to the adjusted dynamic range gear; wherein the target sub-image fusion model is one of the plurality of second sub-image fusion models;
And the electronic equipment fuses the at least two target images according to the target sub-image fusion model to obtain the second HDR file.
8. An electronic device comprising a display screen, a camera, a memory, and one or more processors; the display screen, the camera, the memory and the processor are coupled; the camera is used for acquiring images, the display screen is used for displaying the images generated by the processor and the images acquired by the camera, the memory is used for storing computer program codes, and the computer program codes comprise computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 7.
9. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 7.
CN202311679551.0A 2023-12-08 2023-12-08 Image processing method and electronic equipment Active CN117395495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311679551.0A CN117395495B (en) 2023-12-08 2023-12-08 Image processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN117395495A CN117395495A (en) 2024-01-12
CN117395495B true CN117395495B (en) 2024-05-17

Family

ID=89439560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311679551.0A Active CN117395495B (en) 2023-12-08 2023-12-08 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117395495B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013162190A (en) * 2012-02-02 2013-08-19 Canon Inc High dynamic range moving image photographing auxiliary device
CN105227858A (en) * 2015-10-30 2016-01-06 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN114463191A (en) * 2021-08-26 2022-05-10 荣耀终端有限公司 Image processing method and electronic equipment
CN116416122A (en) * 2021-12-31 2023-07-11 荣耀终端有限公司 Image processing method and related device
CN116681598A (en) * 2022-02-23 2023-09-01 深圳市汇顶科技股份有限公司 Image processing method, chip, electronic device, and computer-readable storage medium
CN116866495A (en) * 2022-03-22 2023-10-10 北京小米移动软件有限公司 Image acquisition method, device, terminal equipment and storage medium



Similar Documents

Publication Publication Date Title
WO2020168956A1 (en) Method for photographing the moon and electronic device
CN113810598B (en) Photographing method, electronic device and storage medium
CN113132620A (en) Image shooting method and related device
CN112150399A (en) Image enhancement method based on wide dynamic range and electronic equipment
WO2020102978A1 (en) Image processing method and electronic device
CN113905182B (en) Shooting method and equipment
CN113810603B (en) Point light source image detection method and electronic equipment
CN114466134A (en) Method and electronic device for generating HDR image
CN116437198B (en) Image processing method and electronic equipment
CN115631250B (en) Image processing method and electronic equipment
CN117395495B (en) Image processing method and electronic equipment
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN115686182B (en) Processing method of augmented reality video and electronic equipment
CN115150542A (en) Video anti-shake method and related equipment
CN117135468B (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
CN115460343B (en) Image processing method, device and storage medium
CN115705663B (en) Image processing method and electronic equipment
CN117201930B (en) Photographing method and electronic equipment
CN117278864B (en) Image capturing method, electronic device, and storage medium
CN116452437B (en) High dynamic range image processing method and electronic equipment
CN116723417B (en) Image processing method and electronic equipment
CN116723410B (en) Method and device for adjusting frame interval
CN116048323B (en) Image processing method and electronic equipment
CN117745620A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant