CN115967846A - Image processing method and electronic equipment - Google Patents


Info

Publication number: CN115967846A
Authority: CN (China)
Prior art keywords: image frame, image, electronic device, long, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210112240.5A
Other languages: Chinese (zh)
Inventors: 崔瀚涛, 王梓蓉, 丁志兵, 王宏杰, 李逸伦
Current Assignee: Honor Device Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to PCT/CN2022/113363 (published as WO2023056785A1)
Priority to EP22877809.8A (published as EP4287607A1)
Publication of CN115967846A


Abstract

The present application provides an image processing method and an electronic device, relating to the field of image application technologies. In scenes where the electronic device shoots with multiple cameras enabled, the method enriches the detail of the images or videos shot by the cameras, which helps improve the shooting effect of the electronic device. In the method, the electronic device responds to a zoom operation input by a user by determining a target zoom magnification and starting a second camera; when the target zoom magnification is greater than or equal to a first preset value and less than or equal to a second preset value, the first camera and the second camera both capture image frames in an overlapping exposure mode, and the image frames captured by the two cameras are then fused to generate a preview image.

Description

Image processing method and electronic equipment
The present application claims priority to Chinese patent application No. 202111176340.6, entitled "Image processing method and electronic device", filed with the China National Intellectual Property Administration on October 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image application technologies, and in particular, to an image processing method and an electronic device.
Background
Currently, an increasing number of people use electronic devices (e.g., mobile phones) to take photos and videos to record the moments of their lives. To improve the shooting performance of an electronic device (for example, to provide richer focal lengths and shooting functions and better imaging quality), the electronic device may use multiple cameras to take photos and videos. The multiple cameras have different functions; for example, they may include a wide-angle camera, a telephoto camera, a blurring (bokeh) camera, and so on, and can therefore be combined to shoot photos and videos, improving the shooting performance of the electronic device.
However, when multiple cameras are combined to shoot photos and videos, the parameters of the different cameras (such as focal length and field of view) are inconsistent. As a result, when an electronic device shoots photos or videos with multiple cameras, the dynamic ranges of the finally generated photos or videos are inconsistent, and the resulting photos or videos are of poor quality.
Disclosure of Invention
The present application provides an image processing method and an electronic device that enrich the detail of images or videos shot in scenes where the electronic device shoots with multiple cameras enabled, which helps improve the shooting effect of the electronic device.
To achieve this technical purpose, the present application adopts the following technical solutions:
In a first aspect, an image processing method is provided, applied to an electronic device including a first camera and a second camera. The method includes: the electronic device responds to a user operation of starting the first camera by displaying a first preview image, where the first preview image is generated from a first long-exposure image frame, and the first long-exposure image frame is generated by the electronic device by fusing a first image frame and a second image frame; the electronic device responds to a zoom operation input by the user by determining a target zoom magnification, starting the second camera, and displaying a second preview image corresponding to the target zoom magnification; when the target zoom magnification is greater than or equal to a first preset value and less than or equal to a second preset value, the second preview image is generated by fusing a second long-exposure image frame and a third long-exposure image frame; the second long-exposure image frame is generated by the electronic device by fusing a third image frame and a fourth image frame; the third long-exposure image frame is generated by the electronic device by fusing a fifth image frame and a sixth image frame; the exposure duration of the first image frame, the third image frame, and the fifth image frame is a first exposure duration; the exposure duration of the second image frame, the fourth image frame, and the sixth image frame is a second exposure duration; the first exposure duration is different from the second exposure duration; the first image frame, the second image frame, the third image frame, and the fourth image frame are captured by the first camera; and the fifth image frame and the sixth image frame are captured by the second camera.
Based on the first aspect, the electronic device first responds to the user operation of starting the first camera by displaying a first preview image. Then, when the user inputs a zoom operation, the electronic device determines a target zoom magnification and starts the second camera to display a second preview image corresponding to the target zoom magnification. The second preview image is generated by fusing the second long-exposure image frame and the third long-exposure image frame; the second long-exposure image frame is generated by the electronic device by fusing the third image frame and the fourth image frame; the third long-exposure image frame is generated by the electronic device by fusing the fifth image frame and the sixth image frame; the exposure duration of the third and fifth image frames is the first exposure duration, the exposure duration of the fourth and sixth image frames is the second exposure duration, and the two exposure durations differ; the third and fourth image frames are captured by the first camera, while the fifth and sixth image frames are captured by the second camera. That is to say, because the first camera and the second camera capture the original image frames (namely, the third, fourth, fifth, and sixth image frames) in the same exposure mode, the electronic device can fuse the original image frames captured by the two cameras into a second long-exposure image frame and a third long-exposure image frame with the same dynamic range. On this basis, the electronic device generates the second preview image by fusing the second long-exposure image frame and the third long-exposure image frame, so that the second preview image has richer detail and a better effect.
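To make the data flow of the first aspect concrete, the following sketch traces how the four original image frames captured after zooming become the second preview image. It is illustrative only: the placeholder fusion functions simply average aligned frames, and the names `fuse_exposures` and `fuse_cameras` are assumptions; the patent does not disclose the actual fusion algorithms.

```python
import numpy as np

def fuse_exposures(long_frame: np.ndarray, short_frame: np.ndarray) -> np.ndarray:
    # Placeholder HDR fusion of one long-exposure and one short-exposure frame;
    # the real synthesis algorithm is not disclosed in the patent.
    return ((long_frame.astype(np.float32) + short_frame.astype(np.float32)) / 2).astype(np.uint8)

def fuse_cameras(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Placeholder cross-camera fusion; assumes both frames are already aligned
    # and equal in size (the real pipeline must handle FOV differences).
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)

def generate_second_preview(frame3, frame4, frame5, frame6):
    """frames 3/4: first camera (first/second exposure durations);
    frames 5/6: second camera (first/second exposure durations)."""
    second_long = fuse_exposures(frame3, frame4)  # second long-exposure image frame
    third_long = fuse_exposures(frame5, frame6)   # third long-exposure image frame
    return fuse_cameras(second_long, third_long)  # second preview image
```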
In a possible implementation manner of the first aspect, the method further includes: when the target zoom magnification is greater than or equal to a third preset value and less than or equal to a fourth preset value, the second preview image is generated from the second long-exposure image frame, where the fourth preset value is smaller than the first preset value.
It should be noted that, in this implementation, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the second preview image is generated from the second long-exposure image frame, and the second long-exposure image frame is generated by fusing the third image frame and the fourth image frame captured by the first camera. In other words, if the target zoom magnification is small, the second preview image is generated by fusing original image frames captured by the first camera alone; on this basis, the second preview image may also be generated from the first long-exposure image frame. Illustratively, the third preset value may be 1x and the fourth preset value may be 4.4x; when the target zoom magnification is 1x, the electronic device may generate the second preview image from the first long-exposure image frame.
In this design, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the second preview image is generated from the second long-exposure image frame. Because the fourth preset value is smaller than the first preset value, that is, when the target zoom magnification is small, the electronic device fuses only the original image frames captured by the first camera (namely, the third and fourth image frames) to generate the second preview image, which reduces device power consumption.
In a possible implementation manner of the first aspect, the method further includes: when the target zoom magnification is greater than or equal to a fifth preset value, the second preview image is generated from the third long-exposure image frame, where the fifth preset value is larger than the second preset value.
In this design, when the target zoom magnification is greater than or equal to the fifth preset value, the second preview image is generated from the third long-exposure image frame. Because the fifth preset value is larger than the second preset value, that is, when the target zoom magnification is large, the electronic device fuses only the original image frames captured by the second camera (namely, the fifth and sixth image frames) to generate the second preview image, which helps reduce device power consumption. The three zoom ranges are summarized in the sketch after this paragraph.
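The range-dependent behavior above amounts to a threshold lookup. In the following sketch, the 1x and 4.4x values come from the example given earlier; the first, second, and fifth preset values are assumptions chosen for illustration, since the patent does not fix them.

```python
# Threshold lookup for the three preview paths. THIRD/FOURTH values come from
# the example above; FIRST/SECOND/FIFTH values are assumed.
THIRD_PRESET = 1.0    # e.g. 1x: lower bound, first-camera-only path
FOURTH_PRESET = 4.4   # e.g. 4.4x: upper bound, first-camera-only path
FIRST_PRESET = 4.5    # assumed lower bound, dual-camera fusion path
SECOND_PRESET = 9.9   # assumed upper bound, dual-camera fusion path
FIFTH_PRESET = 10.0   # assumed lower bound, second-camera-only path

def select_preview_source(target_zoom: float) -> str:
    """Return which long-exposure frame(s) the second preview image uses."""
    if THIRD_PRESET <= target_zoom <= FOURTH_PRESET:
        return "second long-exposure frame (first camera only)"
    if FIRST_PRESET <= target_zoom <= SECOND_PRESET:
        return "fusion of second and third long-exposure frames (both cameras)"
    if target_zoom >= FIFTH_PRESET:
        return "third long-exposure frame (second camera only)"
    return "first long-exposure frame (default preview)"
```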
In a possible implementation manner of the first aspect, starting the second camera by the electronic device includes: when the target zoom magnification is greater than a first preset zoom magnification, the electronic device starts the second camera.
In this design, the electronic device starts the second camera only when the target zoom magnification exceeds the first preset zoom magnification, which further reduces device power consumption.
In a possible implementation manner of the first aspect, the second preview image is a video preview image, and the method further includes: the electronic device displays a first interface, where the first interface is a preview interface during shooting and includes a recording control; the electronic device responds to a user operation on the recording control by generating a video file, where the video file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
It should be noted that, in this implementation, the video file generated by the electronic device also depends on the target zoom magnification. For example, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the video file is generated by the electronic device from the second long-exposure image frame; when the target zoom magnification is greater than or equal to the fifth preset value, the video file is generated by the electronic device from the third long-exposure image frame.
In this design, because the first camera and the second camera of the electronic device capture original image frames in the same exposure mode, the electronic device fuses the original image frames captured by the two cameras to generate a second preview image with richer image detail. Furthermore, when the second preview image is a video preview image, the electronic device starts recording a video in response to the user operation on the recording control, so that the recorded video also has richer detail and a better effect.
In a possible implementation manner of the first aspect, the second preview image is a preview image displayed by the electronic device during video recording.
In this design, because the first camera and the second camera of the electronic device capture original image frames in the same exposure mode, the electronic device fuses the original image frames captured by the two cameras to generate a second preview image with richer image detail. Furthermore, when the second preview image is the preview image displayed during video recording, that preview image also has richer detail and a better effect.
In a possible implementation manner of the first aspect, the second preview image is a photographing preview image, and the method further includes: the electronic device displays a second interface, where the second interface is a preview interface during shooting and includes a photographing control; the electronic device responds to a user operation on the photographing control by generating a photograph file, where the photograph file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
It should be noted that, in this implementation, the photograph file generated by the electronic device also depends on the target zoom magnification. For example, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the photograph file is generated by the electronic device from the second long-exposure image frame; when the target zoom magnification is greater than or equal to the fifth preset value, the photograph file is generated by the electronic device from the third long-exposure image frame.
In this design, because the first camera and the second camera of the electronic device capture original image frames in the same exposure mode, the electronic device fuses the original image frames captured by the two cameras to generate a second preview image with richer image detail. Furthermore, when the second preview image is the photographing preview image, the electronic device generates a photograph file in response to the user operation on the photographing control, so that the generated photograph file also has richer image detail and a better effect.
In a possible implementation manner of the first aspect, before displaying the second preview image corresponding to the target zoom magnification, the method further includes: the electronic device performs image conversion processing on the second long-exposure image frame and the third long-exposure image frame. The image conversion processing includes: the electronic device converts the second long-exposure image frame into a second long-exposure image frame in a target format, and converts the third long-exposure image frame into a third long-exposure image frame in the target format; the transmission bandwidth of the second long-exposure image frame is higher than that of the second long-exposure image frame in the target format, and the transmission bandwidth of the third long-exposure image frame is higher than that of the third long-exposure image frame in the target format.
In this design, before displaying the second preview image corresponding to the target zoom magnification, the electronic device performs image conversion processing on the second and third long-exposure image frames, that is, converts each into its target-format counterpart. Because the original frames consume more transmission bandwidth than their target-format counterparts, the conversion reduces the bandwidth needed to transmit the second and third long-exposure image frames, which helps reduce device power consumption.
In a possible implementation manner of the first aspect, the electronic device holds N consecutive second long-exposure image frames and M consecutive third long-exposure image frames, where N ≥ 1 and M ≥ 1. The image conversion processing further includes: among the N consecutive second long-exposure image frames, if the local information in the second long-exposure image frame at time n does not meet a preset condition, the electronic device repairs it according to the local information in the second long-exposure image frames at times n-1 and n+1, where n ≥ 2; and/or, among the M consecutive third long-exposure image frames, if the local information in the third long-exposure image frame at time m does not meet the preset condition, the electronic device repairs it according to the local information in the third long-exposure image frames at times m-1 and m+1, where m ≥ 2. The local information includes at least one of color, texture, or shape.
In this design, when the local information of the second long-exposure image frame at the current time is insufficient, the electronic device can complete it from the second long-exposure image frames at the previous and next times; likewise, when the local information of the third long-exposure image frame at the current time is insufficient, the electronic device can complete it from the third long-exposure image frames at the previous and next times, further improving the image display effect, as sketched below.
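A minimal sketch of this repair step follows. Both the "preset condition" (variance below a threshold) and the repair rule (averaging the neighboring frames) are assumptions; the patent only states that color, texture, or shape is examined.

```python
import numpy as np

def repair_frame(frames: list, n: int, threshold: float = 1e-3) -> np.ndarray:
    """Repair the long-exposure frame at time n (1-indexed, 2 <= n <= len(frames) - 1)
    from its temporal neighbors when its local information fails a preset
    condition. The condition and the repair rule here are assumptions."""
    cur = frames[n - 1]                     # time n -> list index n-1
    if cur.var() >= threshold:              # local information deemed sufficient
        return cur
    prev_f, next_f = frames[n - 2], frames[n]
    repaired = (prev_f.astype(np.float32) + next_f.astype(np.float32)) / 2
    return repaired.astype(cur.dtype)
```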
In a possible implementation manner of the first aspect, the method further includes: the electronic device performs multi-shot smoothing algorithm processing on the second long-exposure image frame in the target format and the third long-exposure image frame in the target format; the multi-shot smoothing algorithm is used to reduce noise or distortion in these target-format frames.
In this design, the electronic device applies the multi-shot smoothing algorithm to the second and third long-exposure image frames in the target format, which helps reduce their noise or distortion.
In a possible implementation manner of the first aspect, the method further includes: the electronic device performs first preset algorithm processing on the second long-exposure image frame in the target format and the third long-exposure image frame in the target format; the first preset algorithm processing includes at least one of image simulation transformation processing, multi-frame high-dynamic-range image processing, or gamma processing.
In this design, the electronic device performs the first preset algorithm processing on the second and third long-exposure image frames in the target format; because this processing includes at least one of image simulation transformation processing, multi-frame high-dynamic-range image processing, or gamma processing, it can further enlarge the dynamic range of the image.
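Of the three operations named, gamma processing is the most standard; a minimal sketch follows. The gamma value 2.2 is a conventional assumption, not a value given in the patent.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply gamma encoding to an 8-bit linear-light frame. gamma = 2.2 is a
    conventional choice, not a value from the patent."""
    normalized = frame.astype(np.float32) / 255.0
    encoded = np.power(normalized, 1.0 / gamma)  # compress: lifts shadow detail
    return (encoded * 255.0 + 0.5).astype(np.uint8)
```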
In a possible implementation manner of the first aspect, the method further includes: after the electronic device displays the second preview image, the electronic device caches the image frames captured by the first camera in a first photographing queue, and caches the image frames captured by the second camera in a second photographing queue.
In a possible implementation manner of the first aspect, generating the photograph file by the electronic device in response to the user operation on the photographing control includes: the electronic device responds to the user operation on the photographing control by selecting a first image from the first photographing queue and a second image from the second photographing queue; the first image is the latest frame among all images in the first photographing queue, and the second image is the latest frame among all images in the second photographing queue; the electronic device performs second preset algorithm processing on the first image and the second image to generate a photograph file in a target image format; the second preset algorithm processing is used to preserve detail in the first image and the second image.
In this design, when photographing, the electronic device selects the first image from the first photographing queue and the second image from the second photographing queue; because each is the latest frame in its queue, performing the second preset algorithm processing on the first and second images to generate the photograph file in the target image format helps reduce the latency of producing the photograph file.
In a possible implementation manner of the first aspect, the method further includes: the electronic device performs third preset algorithm processing on the first image and the second image; the third preset algorithm is used to fuse the fields of view of the first image and the second image.
In this design, the electronic device performs the third preset algorithm processing on the first image and the second image; because the third preset algorithm fuses the fields of view of the two images, the image effect of the generated photograph file can be further improved.
In a possible implementation manner of the first aspect, performing the second preset algorithm processing on the first image and the second image by the electronic device includes: the electronic device performs the second preset algorithm processing on a first target image in the first photographing queue and a second target image in the second photographing queue; the timestamp of the first target image is the same as the timestamp of the second target image, or the difference between the two timestamps is smaller than a preset value.
In this design, when the electronic device performs the second preset algorithm processing on the first target image in the first photographing queue and the second target image in the second photographing queue, the timestamps of the two target images are the same or differ by less than the preset value, which can further reduce device power consumption. A sketch of this timestamp-matched selection follows.
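The queue selection and timestamp matching of the last two designs can be sketched as follows; the queue depth and tolerance value are assumptions chosen for illustration.

```python
from collections import deque

# Each entry is (timestamp_ms, frame); newest frames are appended as captured.
# Queue depth and tolerance are assumed values for illustration.
first_queue: deque = deque(maxlen=8)   # frames cached from the first camera
second_queue: deque = deque(maxlen=8)  # frames cached from the second camera

def pick_capture_pair(tolerance_ms: float = 10.0):
    """Select the latest frame from each photographing queue, then require the
    two timestamps to match exactly or differ by less than a preset value."""
    ts1, first_image = first_queue[-1]    # latest frame, first queue
    ts2, second_image = second_queue[-1]  # latest frame, second queue
    if ts1 != ts2 and abs(ts1 - ts2) >= tolerance_ms:
        raise ValueError("no sufficiently synchronized frame pair available")
    return first_image, second_image

# Usage: append frames as they arrive, then pick a pair when the user taps
# the photographing control.
first_queue.append((1000.0, "frame_A"))
second_queue.append((1004.0, "frame_B"))
print(pick_capture_pair())  # -> ('frame_A', 'frame_B')
```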
In a second aspect, an electronic device is provided, which has the functions of implementing the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, an electronic device is provided that includes a display screen, a memory, and one or more processors; the display screen, the memory, and the processor are coupled; the memory is used to store computer program code, and the computer program code includes computer instructions; when executed by the processor, the computer instructions cause the electronic device to perform the following steps: responding to a user operation of starting a first camera by displaying a first preview image, where the first preview image is generated from a first long-exposure image frame, and the first long-exposure image frame is generated by the electronic device by fusing a first image frame and a second image frame; responding to a zoom operation input by the user by determining a target zoom magnification and starting a second camera to display a second preview image corresponding to the target zoom magnification; when the target zoom magnification is greater than or equal to a first preset value and less than or equal to a second preset value, the second preview image is generated by fusing a second long-exposure image frame and a third long-exposure image frame; the second long-exposure image frame is generated by the electronic device by fusing a third image frame and a fourth image frame; the third long-exposure image frame is generated by the electronic device by fusing a fifth image frame and a sixth image frame; the exposure duration of the first, third, and fifth image frames is a first exposure duration; the exposure duration of the second, fourth, and sixth image frames is a second exposure duration; the first exposure duration is different from the second exposure duration; the first, second, third, and fourth image frames are captured by the first camera; and the fifth and sixth image frames are captured by the second camera.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following step: when the target zoom magnification is greater than or equal to a third preset value and less than or equal to a fourth preset value, the second preview image is generated from the second long-exposure image frame, where the fourth preset value is smaller than the first preset value.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following step: when the target zoom magnification is greater than or equal to a fifth preset value, the second preview image is generated from the third long-exposure image frame, where the fifth preset value is larger than the second preset value.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to perform the following step: when the target zoom magnification is greater than a first preset zoom magnification, the electronic device starts the second camera.
In one possible design of the third aspect, the second preview image is a video preview image, and the computer instructions, when executed by the processor, cause the electronic device to further perform the following steps: displaying a first interface, where the first interface is a preview interface during shooting and includes a recording control; and responding to a user operation on the recording control by generating a video file, where the video file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
In one possible design of the third aspect, the second preview image is a preview image displayed by the electronic device during video recording.
In one possible design of the third aspect, the second preview image is a photographing preview image, and the computer instructions, when executed by the processor, cause the electronic device to further perform the following steps: displaying a second interface, where the second interface is a preview interface during shooting and includes a photographing control; and responding to a user operation on the photographing control by generating a photograph file, where the photograph file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following steps: performing image conversion processing on the second long-exposure image frame and the third long-exposure image frame, where the image conversion processing includes converting the second long-exposure image frame into a second long-exposure image frame in a target format and converting the third long-exposure image frame into a third long-exposure image frame in the target format; the transmission bandwidth of the second long-exposure image frame is higher than that of the second long-exposure image frame in the target format, and the transmission bandwidth of the third long-exposure image frame is higher than that of the third long-exposure image frame in the target format.
In one possible design of the third aspect, the electronic device holds N consecutive second long-exposure image frames and M consecutive third long-exposure image frames, where N ≥ 1 and M ≥ 1; the computer instructions, when executed by the processor, cause the electronic device to further perform the following steps: among the N consecutive second long-exposure image frames, if the local information in the second long-exposure image frame at time n does not meet a preset condition, repairing it according to the local information in the second long-exposure image frames at times n-1 and n+1, where n ≥ 2; and/or, among the M consecutive third long-exposure image frames, if the local information in the third long-exposure image frame at time m does not meet the preset condition, repairing it according to the local information in the third long-exposure image frames at times m-1 and m+1, where m ≥ 2; the local information includes at least one of color, texture, or shape.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following step: performing multi-shot smoothing algorithm processing on the second long-exposure image frame in the target format and the third long-exposure image frame in the target format, where the multi-shot smoothing algorithm is used to reduce noise or distortion in these target-format frames.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following step: performing first preset algorithm processing on the second long-exposure image frame in the target format and the third long-exposure image frame in the target format, where the first preset algorithm processing includes at least one of image simulation transformation processing, multi-frame high-dynamic-range image processing, or gamma processing.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to perform the following steps: responding to a user operation on the photographing control by selecting a first image from the first photographing queue and a second image from the second photographing queue, where the first image is the latest frame among all images in the first photographing queue and the second image is the latest frame among all images in the second photographing queue; and performing second preset algorithm processing on the first image and the second image to generate a photograph file in a target image format, where the second preset algorithm processing is used to preserve detail in the first image and the second image.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the following step: performing third preset algorithm processing on the first image and the second image, where the third preset algorithm is used to fuse the fields of view of the first image and the second image.
In one possible design of the third aspect, the computer instructions, when executed by the processor, cause the electronic device to specifically perform the following step: performing the second preset algorithm processing on a first target image in the first photographing queue and a second target image in the second photographing queue, where the timestamp of the first target image is the same as the timestamp of the second target image, or the difference between the two timestamps is smaller than a preset value.
In a fourth aspect, a computer-readable storage medium is provided, in which computer instructions are stored; when run on a computer, the computer instructions cause the computer to perform the image processing method of any one of the designs of the first aspect.
In a fifth aspect, a computer program product is provided, including instructions that, when run on a computer, cause the computer to perform the image processing method of any one of the designs of the first aspect.
For the technical effects brought by any design of the second to fifth aspects, reference may be made to the technical effects of the corresponding designs of the first aspect; details are not repeated here.
Drawings
FIG. 1 is a schematic diagram of a staggered (overlapping) exposure mode according to an embodiment of the present application;
fig. 2a is a schematic composition diagram of a pixel unit according to an embodiment of the present application;
fig. 2b is a schematic diagram of a red sub-pixel according to an embodiment of the present application;
fig. 3 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of the software architecture of an electronic device according to an embodiment of the present application;
fig. 5a is a schematic diagram of a mobile phone entering video recording mode according to an embodiment of the present application;
fig. 5b is a first schematic flowchart of image processing according to an embodiment of the present application;
fig. 6 is a schematic diagram of an exposure mode of a main camera according to an embodiment of the present application;
fig. 7a is a schematic diagram of an interface for recording a video on a mobile phone according to an embodiment of the present application;
fig. 7b is a second schematic flowchart of image processing according to an embodiment of the present application;
fig. 8a is a schematic diagram of zoom ratio fusion between the main camera and the secondary camera according to an embodiment of the present application;
fig. 8b is a third schematic flowchart of image processing according to an embodiment of the present application;
fig. 9 is a fourth schematic flowchart of image processing according to an embodiment of the present application;
fig. 10a is a schematic diagram of a mobile phone photographing interface according to an embodiment of the present application;
fig. 10b is a fifth schematic flowchart of image processing according to an embodiment of the present application;
fig. 11 is a sixth schematic flowchart of image processing according to an embodiment of the present application;
fig. 12 is a seventh schematic flowchart of image processing according to an embodiment of the present application;
fig. 13 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, "a plurality of" means two or more unless otherwise specified.
In the related art, an electronic device may use one or more cameras to capture images. Taking one camera as an example: the electronic device starts the main camera (or "main shot") to capture original image frames, and then encodes the original image frames captured by the main camera to generate a preview image. Taking two cameras as an example: the electronic device starts the main camera and a secondary camera (or "sub-shot") at the same time to capture original image frames, and then fuses the original image frames captured by the main shot and the sub-shot to generate a preview image.
However, in some embodiments, because the parameters of the main shot and the sub-shot differ, the main shot can capture multiple original image frames (a multi-frame image) within one exposure period while the sub-shot can capture only one original image frame (a single-frame image) within one exposure period. The dynamic range of the single-frame image is limited, so the dynamic ranges of the original image frames captured by the main shot and the sub-shot are inconsistent; as a result, the preview image obtained by fusing them lacks detail and its effect is poor. Furthermore, the preview image may be a photographing preview image or a video preview image; after the user taps the photographing control (or recording control), the electronic device encodes the photographing preview image to generate a photograph file (or, correspondingly, encodes the video preview image to generate a video file). The generated photograph file (or video file) therefore also lacks detail and its effect is poor.
The parameters of the main shot and the sub-shot differ in, for example, focal length (e.g., the main shot is a wide-angle camera and the sub-shot is a telephoto camera), field of view (FOV), and so on.
Based on this, an embodiment of the present application provides an image processing method, which can be applied to an electronic device having multiple cameras. The method keeps the dynamic range of the original image frames captured by each of the cameras consistent; the electronic device can then apply the same dynamic-image algorithm synthesis processing to the original image frames captured by each camera, so that the processed original image frames have richer image detail. The preview image generated by fusing the original image frames captured by the cameras therefore also has richer detail, which helps improve the shooting effect of the electronic device.
In some embodiments, the electronic device may process the original image frames captured by each camera with the same high-dynamic-range (HDR) algorithm. HDR provides a larger dynamic range and more image detail. The dynamic range is the ratio of the maximum to the minimum of a variable signal; in an image, it is the ratio of the luminance of the brightest object to that of the darkest object. The larger this ratio, the more luminance levels the image can represent, and the closer the displayed luminance is to the luminance of the real shooting environment. A larger dynamic range therefore lets the image show more brightness states, making the image detail richer.
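As a concrete reading of this definition, the in-image dynamic range can be computed directly from pixel luminance. The log2 conversion to photographic stops below is a common convention, not part of the patent text.

```python
import numpy as np

def dynamic_range(luma: np.ndarray) -> float:
    """Ratio of the brightest to the darkest nonzero luminance in an image
    (assumes at least one nonzero pixel)."""
    lit = luma[luma > 0].astype(np.float64)
    return float(lit.max() / lit.min())

def dynamic_range_stops(luma: np.ndarray) -> float:
    # Expressing the same ratio in photographic stops (log2) is a common
    # convention, not something stated in the patent.
    return float(np.log2(dynamic_range(luma)))
```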
Illustratively, the electronic device may include two cameras, for example a main camera (main shot) and a secondary camera (sub-shot). To make the dynamic ranges of the original image frames captured by the main shot and the sub-shot consistent, in some embodiments the main shot and the sub-shot capture original images in the same exposure mode. The electronic device then performs HDR algorithm synthesis processing on the original image frames captured by the main shot and the sub-shot, so that their detail becomes richer.
Note that the original image frames captured by the main shot and the sub-shot may be, for example, low-dynamic-range (LDR) images.
In some embodiments, the exposure mode in which the main shot and the sub-shot capture original image frames is a staggered (overlapping) exposure mode. For example, the main shot and the sub-shot may capture original image frames by 2-times (or 2-exp) overlapping exposure. Specifically, 2-exp overlapping exposure works as follows: within one exposure period, the electronic device controls the camera to expose for two separate exposure durations and captures an image frame within each. In some embodiments, the two exposure durations are unequal, for example a long exposure duration and a short exposure duration; the electronic device captures image frames with the long and short exposure durations staggered. Based on this overlapping exposure mode, the electronic device obtains two streams of image frames: for example, long-exposure image frames captured within the long exposure duration and short-exposure image frames captured within the short exposure duration.
As shown in fig. 1, a schematic diagram of the staggered exposure mode provided in the embodiments of the present application, the long exposure duration may be labeled L and the short exposure duration S. For example, for an exposure period of about 33 ms (milliseconds), if a camera of the electronic device captures image frames at 30 fps (frames per second), the long exposure duration may be at most 30 ms and the short exposure duration may be as short as 5 ms. In this way, the camera captures one stream of image frames within the 30 ms windows and another within the 5 ms windows, that is, two streams of image frames per exposure period, as checked in the sketch below.
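The timing figures above can be sanity-checked in a few lines. Note that in staggered exposure the long and short windows overlap inside one period rather than run back to back, which is how a 30 ms and a 5 ms exposure share one ~33 ms period; the variable names below are assumptions.

```python
# Sanity check of the fig. 1 timing. Each window only has to fit the period
# individually because the L and S windows overlap within the period.
FPS = 30
period_ms = 1000 / FPS        # ~33.3 ms exposure period at 30 fps
long_ms = 30                  # longest long-exposure window (L)
short_ms = 5                  # shortest short-exposure window (S)

assert long_ms <= period_ms and short_ms <= period_ms
print(f"period {period_ms:.1f} ms -> one L stream and one S stream per period")
```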
For example, with two cameras (a main camera and a secondary camera), the main shot captures a first image frame within the long exposure duration (e.g., 30 ms) and a second image frame within the short exposure duration (e.g., 5 ms); the sub-shot captures a third image frame within the long exposure duration (e.g., 30 ms) and a fourth image frame within the short exposure duration (e.g., 5 ms). In some embodiments, the electronic device performs HDR algorithm synthesis processing on the first and second image frames captured by the main camera to generate a first long-exposure image frame; meanwhile, it performs HDR algorithm synthesis processing on the third and fourth image frames captured by the sub-shot to generate a second long-exposure image frame. Both long-exposure image frames have an HDR effect, that is, a larger dynamic range and richer image detail. The electronic device then transmits the first and second long-exposure image frames to the display screen, which displays a preview image generated from them. It should be understood that this preview image is an image with an HDR effect.
It can be understood that, because the main shot and the sub-shot both capture image frames in the overlapping exposure mode, the dynamic ranges of the first and second image frames captured by the main shot and of the third and fourth image frames captured by the sub-shot are consistent. The electronic device can therefore apply the same HDR algorithm synthesis processing to the first and second image frames and to the third and fourth image frames, so that the synthesized first and second long-exposure image frames have consistent, larger dynamic ranges and richer image detail. Furthermore, after the electronic device transmits the first and second long-exposure image frames to the display screen, the preview image generated from them has richer detail, improving the shooting effect of the electronic device.
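A minimal sketch of one way such an HDR synthesis could work follows, using a saturation-aware substitution with the 30 ms / 5 ms exposure ratio from the example above. The actual HDR algorithm is not disclosed in the patent, so the fusion rule and tone mapping here are assumptions.

```python
import numpy as np

def fuse_long_short(long_f: np.ndarray, short_f: np.ndarray,
                    exposure_ratio: float = 30 / 5,
                    sat_level: int = 250) -> np.ndarray:
    """Fuse a long-exposure frame and a short-exposure frame into one
    HDR-like long-exposure frame: where the long exposure clips, substitute
    the short exposure scaled up by the exposure ratio (here 30 ms / 5 ms)."""
    long_lin = long_f.astype(np.float32)
    short_lin = short_f.astype(np.float32) * exposure_ratio  # match brightness
    fused = np.where(long_f >= sat_level, short_lin, long_lin)
    # Crude global tone mapping back to 8 bits; a real ISP would use a proper
    # tone-mapping operator here.
    return np.clip(fused * (255.0 / max(float(fused.max()), 1.0)), 0, 255).astype(np.uint8)
```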
In addition, when the electronic device captures original image frames in the overlapping exposure mode, the camera captures two streams of image frames within one exposure period, and that period contains two exposure durations of different lengths. Therefore, when the electronic device fuses the two captured image frames into one image frame based on the different exposure durations, flare spots and ghosting in the image frame can be effectively reduced. Flare spots refer to the whitish or halo appearance of captured image frames caused by varying illumination during shooting.
In other embodiments, the exposure mode in which the main shot and the sub-shot capture original image frames is a color filter array (CFA) exposure mode; the color filter array here is a four-pixel Bayer array (quad CFA, QCFA). Illustratively, as shown in fig. 2a, in a four-pixel Bayer array one pixel unit includes one red sub-pixel (R), two green sub-pixels (G), and one blue sub-pixel (B); such a unit may also be called an RGGB pixel unit. Each sub-pixel forms a 2 x 2 matrix. For example, referring to fig. 2a, R1, R2, R3, and R4 of the red sub-pixel R form a 2 x 2 matrix; G1, G2, G3, and G4 of the green sub-pixel G form one 2 x 2 matrix, and G5, G6, G7, and G8 form another; B1, B2, B3, and B4 of the blue sub-pixel B form a 2 x 2 matrix.
Taking the red sub-pixel as an example, in conjunction with fig. 2a and as shown in fig. 2b, R1 and R4 output image frames with the same exposure duration, while R2 and R3 output image frames with different exposure durations: R1 and R4 use a medium exposure duration (M), R2 uses a long exposure duration (L), and R3 uses a short exposure duration (S). Correspondingly, for the green sub-pixel, G1 and G4 use the medium exposure duration (M), G2 the long exposure duration (L), and G3 the short exposure duration (S); for the blue sub-pixel, B1 and B4 use the medium exposure duration (M), B2 the long exposure duration (L), and B3 the short exposure duration (S).
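The per-sub-pixel exposure assignment of fig. 2b can be written out as a map tiled over the sensor. The layout below follows the example just given (positions 1 and 4 medium, position 2 long, position 3 short); applying the same layout to every 2 x 2 sub-pixel matrix is an assumption for illustration.

```python
import numpy as np

# Exposure assignment inside each 2x2 sub-pixel matrix, following the example
# above: positions 1 and 4 use the medium exposure (M), position 2 the long
# exposure (L), position 3 the short exposure (S).
SUBPIXEL_EXPOSURE = np.array([["M", "L"],
                              ["S", "M"]])

def qcfa_exposure_map(height: int, width: int) -> np.ndarray:
    """Tile the 2x2 exposure assignment over a sensor of the given (even) size."""
    return np.tile(SUBPIXEL_EXPOSURE, (height // 2, width // 2))

print(qcfa_exposure_map(4, 4))
# [['M' 'L' 'M' 'L']
#  ['S' 'M' 'S' 'M']
#  ['M' 'L' 'M' 'L']
#  ['S' 'M' 'S' 'M']]
```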
On this basis, the electronic device can perform HDR algorithm synthesis processing on the image frames output with the long, medium, and short exposure durations to synthesize one image frame with an HDR effect, so that the synthesized frame has a larger dynamic range and richer image detail. The display screen of the electronic device can then display a preview image based on the synthesized frame, so that the preview image has richer detail, improving the shooting effect of the electronic device.
It should be noted that the overlapping exposure mode and the four-pixel Bayer array exposure mode above are merely examples of the embodiments of the present application and do not limit the present application. Any exposure mode that keeps the dynamic ranges of the original image frames captured by the main shot and the sub-shot consistent falls within the scope of the embodiments of the present application, and is not described further here.
The following describes in detail an image processing method provided in an embodiment of the present application with reference to the drawings of the specification.
For example, the electronic device in the embodiments of the present application may be an electronic device having a shooting function, such as a mobile phone, an action camera (e.g., a GoPro), a digital camera, a tablet computer, a desktop computer, a laptop, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device; the embodiments of the present application do not particularly limit the specific form of the electronic device.
Fig. 3 is a schematic structural diagram of the electronic device 100. The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can fetch them directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only an exemplary illustration, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
In the embodiment of the application, the electronic device may include two or more cameras. Taking an electronic device that includes two cameras (such as a main camera and a secondary camera) as an example: the main camera and the secondary camera capture original image frames using the same exposure mode, and the electronic device then performs HDR algorithm synthesis processing on the original image frames captured by the main camera and the secondary camera, so that the resulting image frames have a larger dynamic range and richer image details.
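For ease of understanding, a minimal sketch of a two-frame HDR merge of the kind referred to above is given below, written in Python with numpy (all code sketches in this document use the same language). The function name, the normalization convention, and the saturation threshold are illustrative assumptions of this sketch and do not reproduce the disclosed algorithm.

import numpy as np

def hdr_merge(long_frame: np.ndarray, short_frame: np.ndarray,
              exposure_ratio: float, sat_thresh: float = 0.9) -> np.ndarray:
    # Illustrative 2-frame HDR merge: keep well-exposed pixels from the
    # long-exposure frame and recover blown highlights from the
    # short-exposure frame. Frames are linear RAW data normalized to
    # [0, 1]; exposure_ratio = long exposure time / short exposure time.
    short_scaled = short_frame * exposure_ratio   # match brightness scales
    # Blend weight: the closer a long-frame pixel is to saturation, the
    # more the merged result relies on the short frame instead.
    w = np.clip((long_frame - sat_thresh) / (1.0 - sat_thresh), 0.0, 1.0)
    return (1.0 - w) * long_frame + w * short_scaled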
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device selects a frequency point, the digital signal processor is used for performing fourier transform and the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By referring to the structure of biological neural networks, for example the transfer mode between neurons of a human brain, the NPU processes input information quickly and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as audio, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121. For example, in the embodiment of the present application, the processor 110 may execute instructions stored in the internal memory 121, and the internal memory 121 may include a program storage area and a data storage area.
The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, phone book and the like) created in the using process of the electronic device. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the electronic device by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic equipment can support 1 or N SIM card interfaces, and N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc.
In some embodiments, the software system of the electronic device 100 may employ a hierarchical architecture, an event-driven architecture, a micro-core architecture, or a cloud architecture. In the embodiment of the present application, a layered architecture Android system is taken as an example to exemplarily explain a software structure of the electronic device 100.
Fig. 4 is a software structure diagram of an electronic device according to an embodiment of the present application.
It will be appreciated that the hierarchical architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer (APP), a framework layer (FWK), a Hardware Abstraction Layer (HAL), and a kernel layer (kernel). As shown in fig. 4, the Android system may further include an Android runtime (Android runtime) and a system library. For convenience of understanding, in the embodiment of the present application, the software structure diagram shown in fig. 4 further includes some hardware structures of the electronic device in fig. 3. Such as a camera, a display screen, etc.
The application layer may include a series of application packages. As shown in fig. 4, the application packages may include APP1, APP2, APP3, and the like. In some embodiments, the application packages may include applications having a camera function (e.g., a camera application). When the electronic device runs the camera application, the electronic device starts the camera and captures original image frames through the camera. In the embodiment of the application, the electronic device includes a plurality of cameras, for example, two cameras: a main camera and a secondary camera.
The application framework layer provides an application programming interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions and provides programming services for the application layer to call through APIs. As shown in fig. 4, the application framework layer includes a camera service framework.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: a surface manager, media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, among others. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, composition, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer is an interface layer located between the kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 4, the hardware abstraction layer includes a camera interface.
The kernel layer provides underlying drivers for various hardware of the electronic device. Illustratively, as shown in fig. 4, the kernel layer includes a camera driver module.
The image processing method provided in the embodiment of the present application is described with reference to the software structure diagram shown in fig. 4. In some embodiments, when a user starts the camera application, the electronic device needs to invoke a camera to capture original image frames. Based on this, when the user starts the camera application, the camera application triggers an instruction to start the camera. The camera application then calls an API of the framework layer to send the instruction for starting the camera to the camera service framework; the camera service framework calls the camera interface in the hardware abstraction layer, which sends the instruction to the camera driver module. The camera driver module can then drive the camera to capture original image frames.
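As an aid to understanding, the start-camera call chain described above can be sketched as a chain of delegating layers. The class and method names below are hypothetical stand-ins for the application layer, framework layer, hardware abstraction layer, and kernel layer; they do not reproduce any actual Android API.

class CameraDriver:                      # kernel layer
    def start_camera(self):
        print("driver: start sensor, stream original image frames")

class CameraHalInterface:                # hardware abstraction layer
    def __init__(self, driver):
        self.driver = driver
    def open_camera(self):
        self.driver.start_camera()       # forward the instruction downward

class CameraServiceFramework:            # framework layer
    def __init__(self, hal):
        self.hal = hal
    def handle_start_request(self):
        self.hal.open_camera()

class CameraApp:                         # application layer
    def __init__(self, service):
        self.service = service
    def on_user_launch(self):
        self.service.handle_start_request()

CameraApp(CameraServiceFramework(CameraHalInterface(CameraDriver()))).on_user_launch()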
On this basis, the camera transmits the captured original image frames to the camera interface in the hardware abstraction layer. The camera interface performs HDR algorithm synthesis processing on the original image frames, and then transmits the processed image frames to the display screen, so that the display screen displays the preview image. It will be appreciated that the processed original image frames can provide a larger dynamic range as well as richer image detail.
Illustratively, an ISP may be provided in the camera interface, and the camera interface performs HDR algorithm synthesis processing on the original image frames through the ISP. In some embodiments, the ISP includes a preset RAW domain processing module, an ISP front-end module, and an ISP back-end module. For example, the camera interface may perform HDR algorithm synthesis processing on the original image frames through the preset RAW domain processing module.
The preset RAW domain processing module refers to a module that implements a preset RAW image processing algorithm. The preset RAW image processing algorithm is a deep learning network for enhancing the image quality of the RAW domain. In some embodiments, the preset RAW domain processing algorithm may be a software image processing algorithm, for example a software algorithm in a hardware abstraction layer algorithm library of the electronic device. In other embodiments, the preset RAW domain processing algorithm may be a hardware image processing algorithm, for example a hardware image processing algorithm implemented by calling the image processing algorithm capability of the ISP.
It should be noted that the preset RAW domain processing algorithm may also be referred to as a preset image processing algorithm. It is called a preset RAW domain processing algorithm in the embodiment of the present application because its input is a RAW domain image. The preset RAW domain processing algorithm may output an image in the RAW domain or an image in the RGB domain, which is not limited in the embodiment of the present application.
Illustratively, taking an electronic device that is a mobile phone as an example, in some embodiments, as shown in (1) of fig. 5a, in response to a user operating an icon 201 of the "camera" application in the mobile phone home screen interface, the mobile phone displays an interface 202 as shown in (2) of fig. 5 a. The interface 202 is a preview interface for mobile phone photographing, and is used for displaying a preview image (i.e., a photographing preview image) during mobile phone photographing. For example, when the mobile phone responds to the user's operation on the icon 201 of the "camera" application, the mobile phone starts the cameras (e.g., starts the main camera and the secondary camera), captures original image frames through the main camera and the secondary camera, and displays them in the preview interface (i.e., the interface 202). As also shown in (2) of fig. 5a, the interface 202 further includes a "portrait" mode, a "record" mode, a "movie" mode, and a "professional" mode. The "record" mode and the "movie" mode are both used for recording video files; the "professional" mode is used for taking photos. Taking the "movie" mode as an example, in other embodiments, as shown in (2) of fig. 5a, in response to the user selecting the "movie" mode 203, the mobile phone displays an interface 204 as shown in (3) of fig. 5 a. The interface 204 is a preview interface before mobile phone video recording, and is used for displaying a preview image (i.e., a video preview image) during mobile phone video recording. For example, when the mobile phone responds to the user's operation of selecting the "movie" mode 203, the mobile phone captures original image frames through the cameras (e.g., starts the main camera and the secondary camera) and displays them in the preview interface (i.e., the interface 204).
It should be noted that, in the embodiment of the present application, regardless of whether the electronic device is in a photographing state or a video recording state, after the electronic device captures original image frames through the main camera and the secondary camera, the same HDR algorithm synthesis processing is performed on the captured original image frames, and the result is finally displayed in the preview interface. After the HDR algorithm synthesis processing, the preview images displayed in the preview interface (such as the photographing preview image and the video preview image) have a larger dynamic range and more image details.
Taking an electronic device that includes two cameras, namely a main camera and a secondary camera, which both capture original image frames in a 2-exp overlapping exposure mode, as an example, a specific flow of the image processing method provided by the embodiment of the present application is exemplarily described. As shown in fig. 5b, the main camera is connected to the first preset RAW domain processing module, and the first preset RAW domain processing module is connected to the first ISP front-end module. The secondary camera is connected to the second preset RAW domain processing module, and the second preset RAW domain processing module is connected to the second ISP front-end module. As also shown in fig. 5b, the first ISP front-end module and the second ISP front-end module are each connected to a multi-camera smoothing algorithm (SAT) module; the SAT module is connected to the first ISP back-end module and the second ISP back-end module, respectively.
As also shown in fig. 5b, the main camera captures, by overlapping exposure, a first image frame L0 in a first exposure duration and a second image frame S0 in a second exposure duration; the main camera inputs the first image frame L0 and the second image frame S0 to the first preset RAW domain processing module; the first preset RAW domain processing module is used for performing HDR algorithm synthesis processing on the first image frame L0 and the second image frame S0 to generate a first long-exposure image frame L0'. Then, the first preset RAW domain processing module inputs the first long-exposure image frame L0' to the first ISP front-end module, which is configured to perform YUV domain processing on the first long-exposure image frame L0' and convert it into a first long-exposure image frame L0' in YUV format. Then, the first ISP front-end module transmits the first long-exposure image frame L0' in YUV format to the SAT module, which is configured to smooth the first long-exposure image frame L0', improving its image quality and reducing interference (e.g., reducing image noise or distortion). Then, the SAT module transmits the processed first long-exposure image frame L0' to the first ISP back-end module, which is configured to perform image enhancement on the first long-exposure image frame L0'.
Accordingly, as also shown in fig. 5b, the secondary camera captures, in the overlapping exposure mode, a third image frame L1 in the first exposure duration and a fourth image frame S1 in the second exposure duration; the secondary camera inputs the third image frame L1 and the fourth image frame S1 to the second preset RAW domain processing module; the second preset RAW domain processing module is used for performing HDR algorithm synthesis processing on the third image frame L1 and the fourth image frame S1 to generate a second long-exposure image frame L1'. Then, the second preset RAW domain processing module inputs the second long-exposure image frame L1' to the second ISP front-end module, which is configured to perform YUV domain processing on the second long-exposure image frame L1' and convert it into a second long-exposure image frame L1' in YUV format. Then, the second ISP front-end module transmits the second long-exposure image frame L1' in YUV format to the SAT module, which is used for smoothing the second long-exposure image frame L1', improving its image quality and reducing interference. Then, the SAT module transmits the processed second long-exposure image frame L1' to the second ISP back-end module, which is configured to perform image enhancement on the second long-exposure image frame L1'.
It should be noted that the YUV format is an image color coding, where Y represents luminance (Luminance) and U and V represent chrominance (Chrominance). It should be appreciated that the first long-exposure image frame L0' and the second long-exposure image frame L1' have a larger dynamic range and richer image details.
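For reference, a minimal sketch of the kind of RGB-to-YUV conversion that a YUV domain processing step performs is given below, using the well-known BT.601 analog coefficients; the exact conversion used by the ISP front-end modules is not specified in this document.

import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    m = np.array([[ 0.299,  0.587,  0.114],    # Y: luminance
                  [-0.147, -0.289,  0.436],    # U: blue-difference chroma
                  [ 0.615, -0.515, -0.100]])   # V: red-difference chroma
    return rgb @ m.T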
With the above embodiment, as also shown in fig. 5b, the first ISP back-end module transmits the first long-exposure image frame L0' to the display screen, and the second ISP back-end module transmits the second long-exposure image frame L1' to the display screen, so that the display screen displays the preview image according to the first long-exposure image frame L0' and the second long-exposure image frame L1'. On this basis, after the user clicks a shooting control (such as a photographing control or a video control; the video control is used for recording a video file), the electronic device encodes the first long-exposure image frame L0' and the second long-exposure image frame L1' to generate a photographed file or a video file.
As can be seen from the above embodiments, when a camera of the electronic device captures original image frames using overlapping exposure, one complete exposure period includes one long exposure duration (for example, marked as Expo L) and one short exposure duration (for example, marked as Expo S). Of course, a camera of the electronic device may capture original image frames in multiple consecutive exposure periods, and HDR algorithm synthesis processing is performed on the original image frames captured in each exposure period. Taking the main camera of the electronic device capturing original image frames in two consecutive exposure periods as an example, as shown in fig. 6, the main camera captures a long-exposure image frame VC0 (n) in the long exposure duration Expo L of the first exposure period, and captures a short-exposure image frame VC1 (n) in the short exposure duration Expo S of the first exposure period. Accordingly, the main camera captures a long-exposure image frame VC0 (n + 1) in the long exposure duration Expo L of the second exposure period, and captures a short-exposure image frame VC1 (n + 1) in the short exposure duration Expo S of the second exposure period. Then, the main camera transmits the long-exposure image frame VC0 (n) and the short-exposure image frame VC1 (n) captured in the first exposure period, and the long-exposure image frame VC0 (n + 1) and the short-exposure image frame VC1 (n + 1) captured in the second exposure period, to the first preset RAW domain processing module. The first preset RAW domain processing module performs HDR algorithm synthesis processing on the long-exposure image frame VC0 (n) and the short-exposure image frame VC1 (n) to generate a first long-exposure image frame, and performs HDR algorithm synthesis processing on the long-exposure image frame VC0 (n + 1) and the short-exposure image frame VC1 (n + 1) to generate a second long-exposure image frame. In this way, the first preset RAW domain processing module obtains a multi-frame image composed of the first long-exposure image frame and the second long-exposure image frame.
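The per-period pairing just described can be sketched as follows, reusing the hdr_merge() function from the earlier sketch; the list-based interface is an illustrative assumption.

def merge_exposure_periods(vc0_frames, vc1_frames, exposure_ratio):
    # vc0_frames[n] / vc1_frames[n]: the long / short exposure frame of
    # exposure period n. Each pair is merged into one HDR frame,
    # yielding the multi-frame image described above.
    return [hdr_merge(long_f, short_f, exposure_ratio)
            for long_f, short_f in zip(vc0_frames, vc1_frames)]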
Further, after the first long-exposure image frame and the second long-exposure image frame are processed by the first preset RAW domain processing module, the first long-exposure image frame and the second long-exposure image frame may also be processed by the first ISP front-end module, the SAT module, and the first ISP back-end module, and the specific processing method may refer to fig. 5b and the corresponding embodiment, which are not described in detail herein.
It should be noted that a camera of the electronic device may also capture original image frames in three consecutive exposure periods, or in more (four or more) exposure periods; for the specific capture manner, reference may be made to the above embodiments, which are not enumerated here.
In an embodiment of the present application, the electronic device includes a plurality of cameras; for example, the electronic device may include a main camera and a secondary camera. For example, the main camera may be a middle-focus camera (or standard camera), and the secondary camera may be a long-focus (telephoto) camera. In some embodiments, when a user shoots with the electronic device, the display screen of the electronic device displays the image captured by the camera, that is, the display screen of the electronic device displays a preview image. On this basis, the user can enlarge the preview image according to specific needs; for example, the user may input a zoom operation to resize the preview image displayed on the display screen. The zoom operation is used to make the display screen of the electronic device display a preview image corresponding to a target zoom magnification. In some embodiments, the reference zoom magnification of the main camera (i.e., the middle-focus camera) is 1×, and the reference zoom magnification of the secondary camera (i.e., the telephoto camera) is 5×. For example, when the electronic device is shooting, the electronic device may start the main camera to capture original image frames. Before the electronic device receives a zoom operation of the user, the zoom magnification displayed by the electronic device is the reference zoom magnification of the main camera (such as 1×). The main camera of the electronic device then captures original image frames, and a preview image corresponding to the reference zoom magnification is displayed.
The zoom magnification may be an optical zoom magnification or a digital zoom magnification. For example, the zoom magnification may be 1×, 3×, 4×, 4.5×, 4.9×, 5×, or the like. Here, "1×" indicates a zoom magnification of 1 time; "3×" indicates a zoom magnification of 3 times; "4×" indicates a zoom magnification of 4 times. In addition, the magnification in the embodiment of the present application may also be referred to as a multiple; that is, the zoom magnification may also be referred to as a zoom multiple.
Taking an electronic device that is a mobile phone as an example: in the process of recording a video, the mobile phone displays an interface 301 as shown in (1) in fig. 7 a. The interface 301 includes a recording control 302. In response to the user's operation of the recording control 302, the mobile phone displays an interface 303 as shown in (2) in fig. 7a, where the interface 303 is a viewfinder interface of the mobile phone during video recording. The interface 303 includes a pause button 304 and an end recording button 305. Illustratively, the mobile phone pauses recording of the video in response to the user operating the pause button 304; alternatively, the mobile phone finishes recording the video in response to the user operating the end recording button 305, and saves the recorded video in the mobile phone (e.g., in the album application).
As also shown in (2) of fig. 7a, a zoom control 306 for adjusting the zoom magnification is also included in the interface 303. Illustratively, the zoom magnification displayed in the interface 303 is 4.5×. When the mobile phone responds to the user's operation of "+" in the zoom control 306, the mobile phone increases the current zoom magnification, for example, to 5.0×. When the mobile phone responds to the user's operation of "-" in the zoom control 306, the mobile phone decreases the current zoom magnification, for example, to 4.0×.
In some embodiments, when the electronic device receives a zoom operation of the user, the electronic device starts the corresponding camera(s) to capture original image frames according to the zoom operation. Illustratively, in order to display a preview image corresponding to the target zoom magnification, the electronic device may need to start the secondary camera, capture original image frames through the joint operation of the main camera and the secondary camera, and then display the preview image corresponding to the target zoom magnification according to those original image frames. Illustratively, when the target zoom magnification is in the range [1.0×, 4.4×], the electronic device starts the main camera to capture original image frames, and displays a preview image corresponding to the target zoom magnification according to the original image frames captured by the main camera. When the target zoom magnification is in the range [4.5×, 4.9×], the electronic device starts the main camera and the secondary camera simultaneously to capture original image frames, and displays a preview image corresponding to the target zoom magnification according to the original image frames captured by the main camera and the secondary camera. When the target zoom magnification is greater than or equal to 5.0×, the electronic device starts the secondary camera to capture original image frames, and displays a preview image corresponding to the target zoom magnification according to the original image frames captured by the secondary camera.
Note that, when the target zoom magnification is greater than or equal to 5.0×, the main camera of the electronic device may also capture original image frames, but the preview image finally displayed by the electronic device is generated from the original image frames captured by the secondary camera.
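The camera selection logic just described can be summarized in a short sketch; the function name and the exact interval boundaries are assumptions drawn from the ranges above.

def select_cameras(target_zoom: float):
    # Map the target zoom magnification to the camera(s) that supply
    # the preview image, following the ranges described above.
    if target_zoom < 4.5:
        return ("main",)           # main camera only, [1.0x, 4.4x]
    elif target_zoom < 5.0:
        return ("main", "sub")     # fused main + secondary, [4.5x, 4.9x]
    else:
        return ("sub",)            # secondary camera preview, >= 5.0x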
Illustratively, when the user starts any camera of the electronic device (such as the main camera) to shoot, the electronic device displays a preview image. Before the electronic device receives a zoom operation input by the user, the electronic device displays a preview image corresponding to the reference zoom magnification (e.g., 1×) of the main camera. After the electronic device receives a zoom operation input by the user, if the zoom operation instructs the electronic device to display a preview image with a target zoom magnification of 4.5×, the electronic device starts the secondary camera. Then, the main camera and the secondary camera of the electronic device capture original image frames simultaneously, and a preview image with a target zoom magnification of 4.5× is displayed according to the original image frames captured by the main camera and the secondary camera.
In some embodiments, in a case that the electronic device does not receive a zoom operation input by the user, or the electronic device receives a zoom operation and the target zoom magnification indicated by the zoom operation is in the range [1.0×, 4.4×], the electronic device displays a preview image corresponding to the target zoom magnification using the original image frames captured by the main camera. Illustratively, as shown in fig. 7b, the main camera captures original image frames by means of 2-exp overlapping exposure. For example, the main camera captures a first image frame L0 in a first exposure duration (e.g., a long exposure duration) and a second image frame S0 in a second exposure duration (e.g., a short exposure duration); then, the main camera inputs the first image frame L0 and the second image frame S0 into the first preset RAW domain processing module, and the first preset RAW domain processing module performs HDR algorithm synthesis processing on the first image frame L0 and the second image frame S0 to generate a first long-exposure image frame L0'. The first preset RAW domain processing module transmits the first long-exposure image frame L0' to the first ISP front-end module, which performs YUV domain processing on the first long-exposure image frame L0' and converts it into a first long-exposure image frame L0' in YUV format. Then, the first ISP front-end module transmits the first long-exposure image frame L0' in YUV format to the first ISP back-end module, which performs image enhancement on it. Finally, the first ISP back-end module outputs the first long-exposure image frame L0' to the display screen, so that the display screen displays a preview image according to the first long-exposure image frame L0'.
The first ISP front-end module and the first ISP back-end module may be configured with preset image algorithms to process the first long-exposure image frame L0'. For example, as shown in fig. 7b, an image transformation matching (GTM) algorithm module is preset in the first ISP front-end module; the GTM algorithm module is used for performing YUV domain processing on the first long-exposure image frame L0'. The GTM algorithm module is further configured so that, across consecutive first long-exposure image frames L0', if the local information in the first long-exposure image frame L0' at a certain moment is insufficient, it can be complemented from the images at the previous and next moments. An image simulation transformation (WRAP) algorithm module, a multi-frame HDR algorithm module, a GAMMA algorithm module, and the like are preset in the first ISP back-end module. The WRAP algorithm module is used for performing image enhancement on the first long-exposure image frame L0'. The multi-frame HDR algorithm module is used for correcting the dynamic range of the first long-exposure image frame L0'; for example, after the first long-exposure image frame L0' is processed by the multi-frame HDR algorithm module, a continuous multi-frame HDR image is generated. The GAMMA algorithm module may perform electronic image stabilization (EIS) processing on the first long-exposure image frame L0' and dynamically compress the first long-exposure image frame L0'.
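As an illustration of the dynamic compression that a GAMMA algorithm module applies, a minimal tone-compression sketch is given below; the exponent value is a common default, not one specified by this document.

import numpy as np

def gamma_compress(luma: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    # Apply a power-law (gamma) curve to a linear luminance channel in
    # [0, 1]; a gamma below 1 brightens shadows and compresses
    # highlights, reducing the displayed dynamic range.
    return np.clip(luma, 0.0, 1.0) ** gamma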
In other embodiments, the electronic device receives a zoom operation input by the user, the zoom operation being used to instruct the electronic device to display a preview image with a target zoom magnification of 4.5×. On this basis, the electronic device responds to the zoom operation and starts the secondary camera to capture original image frames; that is, the main camera and the secondary camera included in the electronic device capture original image frames simultaneously. Illustratively, as shown in fig. 8a, the main camera of the electronic device captures original image frames at the target zoom magnification of 4.5×, i.e., the zoom magnification of the original image frames output by the main camera is 4.5×. Meanwhile, the secondary camera of the electronic device captures original image frames at its reference zoom magnification of 5.0×, i.e., the zoom magnification of the original image frames output by the secondary camera is 5.0×. On this basis, the electronic device performs fusion, equalization, and other algorithm processing on the original image frames with a zoom magnification of 4.5× captured by the main camera and the original image frames with a zoom magnification of 5.0× captured by the secondary camera, to generate a preview image with a zoom magnification of 4.5×.
Illustratively, as shown in fig. 8b, the main camera captures original image frames using 2-exp overlapping exposure. For example, the main camera captures a first image frame L0 in a first exposure duration (e.g., a long exposure duration) and a second image frame S0 in a second exposure duration (e.g., a short exposure duration); the main camera inputs the first image frame L0 and the second image frame S0 to the first preset RAW domain processing module, which is used for performing HDR algorithm synthesis processing on the first image frame L0 and the second image frame S0 to generate a first long-exposure image frame L0'. Then, the first preset RAW domain processing module inputs the first long-exposure image frame L0' to the first ISP front-end module, which is configured to perform YUV domain processing on the first long-exposure image frame L0' and convert it into a first long-exposure image frame L0' in YUV format. Then, the first ISP front-end module transmits the first long-exposure image frame L0' in YUV format to the SAT module. Meanwhile, the secondary camera captures original image frames using 2-exp overlapping exposure. For example, the secondary camera captures a third image frame L1 in the first exposure duration (e.g., a long exposure duration) and a fourth image frame S1 in the second exposure duration (e.g., a short exposure duration); the secondary camera inputs the third image frame L1 and the fourth image frame S1 to the second preset RAW domain processing module, which is used for performing HDR algorithm synthesis processing on the third image frame L1 and the fourth image frame S1 to generate a second long-exposure image frame L1'. Then, the second preset RAW domain processing module inputs the second long-exposure image frame L1' to the second ISP front-end module, which is configured to perform YUV domain processing on the second long-exposure image frame L1' and convert it into a second long-exposure image frame L1' in YUV format. Then, the second ISP front-end module transmits the second long-exposure image frame L1' in YUV format to the SAT module.
The SAT module is used for fusing the first long-exposure image frame L0' and the second long-exposure image frame L1' and performing equalization algorithm processing; the zoom magnification of the fused first long-exposure image frame L0' and second long-exposure image frame L1' is 4.5×. On this basis, the SAT module transmits the first long-exposure image frame L0' to the first ISP back-end module, which is used for performing image enhancement on the first long-exposure image frame L0'. Meanwhile, the SAT module transmits the second long-exposure image frame L1' to the second ISP back-end module, which is configured to perform image enhancement on the second long-exposure image frame L1'. Then, the electronic device displays a preview image with a zoom magnification of 4.5× according to the first long-exposure image frame L0' and the second long-exposure image frame L1'. After the user clicks the shooting control (such as the photographing control or the video control), the electronic device encodes the first long-exposure image frame L0' and the second long-exposure image frame L1', so that the display screen of the electronic device displays a photographed file (or a video file) with a zoom magnification of 4.5×.
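A minimal sketch of the kind of cross-camera blend a SAT module can perform in the handover zoom range is given below. It assumes the two frames have already been aligned and cropped to the same field of view; the ramp endpoints (4.5× and 5.0×) follow the magnifications above, while the linear weighting itself is an illustrative assumption.

import numpy as np

def sat_fuse(main_frame: np.ndarray, sub_frame: np.ndarray,
             target_zoom: float) -> np.ndarray:
    # Blend weight ramps from 0 at 4.5x (all main camera) toward 1 at
    # 5.0x (all secondary camera), so the camera handover is smooth.
    w = np.clip((target_zoom - 4.5) / (5.0 - 4.5), 0.0, 1.0)
    return (1.0 - w) * main_frame + w * sub_frame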
In addition, preset image algorithms may be set in the first ISP front-end module and the first ISP back-end module to process the first long-exposure image frame L0', and in the second ISP front-end module and the second ISP back-end module to process the second long-exposure image frame L1'. For example, as shown in fig. 8b, GTM algorithm modules are preset in the first ISP front-end module and the second ISP front-end module; the GTM algorithm modules are used for performing YUV domain processing on the first long-exposure image frame L0' and the second long-exposure image frame L1'. The GTM algorithm modules are further configured so that, across consecutive first long-exposure image frames L0', if the local information in the first long-exposure image frame L0' at a certain moment is insufficient, it can be complemented from the images at the previous and next moments; likewise, across consecutive second long-exposure image frames L1', insufficient local information in the second long-exposure image frame L1' at a certain moment can be complemented from the images at the previous and next moments. A WRAP algorithm module, a multi-frame HDR algorithm module, a GAMMA algorithm module, and the like are preset in the first ISP back-end module and the second ISP back-end module. The WRAP algorithm modules are used for performing image enhancement on the first long-exposure image frame L0' and the second long-exposure image frame L1'. The multi-frame HDR algorithm modules are used for correcting the dynamic ranges of the first long-exposure image frame L0' and the second long-exposure image frame L1'; for example, after the first long-exposure image frame L0' is processed by the multi-frame HDR algorithm module, a continuous multi-frame HDR image is generated, and likewise for the second long-exposure image frame L1'. The GAMMA algorithm modules may perform EIS processing on the first long-exposure image frame L0' and the second long-exposure image frame L1', and dynamically compress them.
It should be noted that, in the embodiment of the present application, the GAMMA algorithm module preset in the first ISP back-end module may perform EIS processing on the first long-exposure image frame L0' using an ordinary version of EIS, while the GAMMA algorithm module preset in the second ISP back-end module may perform EIS processing on the second long-exposure image frame L1' using an enhanced version of EIS.
In still other embodiments, the electronic device receives a zoom operation input by the user, the zoom operation instructing the electronic device to display a preview image with a target zoom magnification of 5.0×. On this basis, the electronic device responds to the zoom operation and starts the secondary camera to capture original image frames. Illustratively, as shown in fig. 9, the secondary camera captures original image frames using 2-exp overlapping exposure. For example, the secondary camera captures a third image frame L1 in a first exposure duration (e.g., a long exposure duration) and a fourth image frame S1 in a second exposure duration (e.g., a short exposure duration); then, the secondary camera inputs the third image frame L1 and the fourth image frame S1 into the second preset RAW domain processing module, which performs HDR algorithm synthesis processing on the third image frame L1 and the fourth image frame S1 to generate a second long-exposure image frame L1'. The second preset RAW domain processing module transmits the second long-exposure image frame L1' to the second ISP front-end module, which performs YUV domain processing on the second long-exposure image frame L1' and converts it into a second long-exposure image frame L1' in YUV format. Then, the second ISP front-end module transmits the second long-exposure image frame L1' in YUV format to the second ISP back-end module, which performs image enhancement on it. Finally, the second ISP back-end module outputs the second long-exposure image frame L1' to the display screen, so that the display screen displays a preview image with a zoom magnification of 5.0× according to the second long-exposure image frame L1'.
The second ISP front-end module and the second ISP back-end module may be configured with preset image algorithms to process the second long-exposure image frame L1'. For example, as shown in fig. 9, a GTM algorithm module is preset in the second ISP front-end module; the GTM algorithm module is used for performing YUV domain processing on the second long-exposure image frame L1'. The GTM algorithm module is further configured so that, across consecutive second long-exposure image frames L1', if the local information in the second long-exposure image frame L1' at a certain moment is insufficient, it can be complemented from the images at the previous and next moments. A WRAP algorithm module, a multi-frame HDR algorithm module, a GAMMA algorithm module, and the like are preset in the second ISP back-end module. The WRAP algorithm module is used for performing image enhancement on the second long-exposure image frame L1'. The multi-frame HDR algorithm module is used for correcting the dynamic range of the second long-exposure image frame L1'; for example, after the second long-exposure image frame L1' is processed by the multi-frame HDR algorithm module, a continuous multi-frame HDR image is generated. The GAMMA algorithm module may perform EIS processing on the second long-exposure image frame L1' and dynamically compress it.
It should be understood that, in the embodiment of the present application, the electronic device may implement both the photographing function and the video recording function; that is, when the display screen of the electronic device displays a preview image, the preview image includes the photographing preview image and the video preview image. In some embodiments, the difference between the electronic device implementing the photographing function and implementing the video recording function is that the photographing function needs to be compatible with a zero-latency (ZSL) mechanism. ZSL means that, when the electronic device implements the photographing function, after the display screen of the electronic device displays the photographing preview image, the electronic device keeps some of the latest original image frames in an image buffer queue. When the electronic device receives a photographing operation of the user (that is, the photographing control is triggered), the electronic device finds the related original image frames in the image buffer queue, encodes them, and displays the result, so as to generate a photographed file. That is, when the user triggers photographing, the electronic device can immediately find the relevant image frames in the image buffer queue and present them on the display screen of the electronic device, thereby implementing zero-delay quick photographing.
In some embodiments, a zero-latency processor (ZSL Manager) is also provided in the electronic device. The ZSL Manager is used for managing the original image frames captured by the camera during photographing, and performs operations such as configuration, queuing, and frame selection on the original image frames.
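A minimal sketch of the buffering and frame selection such a ZSL mechanism performs is given below; the class name, the buffer depth, and the timestamp-based selection rule are illustrative assumptions.

from collections import deque

class ZslBuffer:
    def __init__(self, depth: int = 8):
        # Keep only the most recent `depth` frames; older ones fall out.
        self.frames = deque(maxlen=depth)

    def on_preview_frame(self, timestamp: float, frame) -> None:
        # Called for every original image frame produced during preview.
        self.frames.append((timestamp, frame))

    def pick_for_capture(self, shutter_time: float):
        # Frame selection: return the buffered frame whose timestamp is
        # closest to the moment the user triggered photographing.
        return min(self.frames, key=lambda tf: abs(tf[0] - shutter_time))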
It should be noted that, when the electronic device takes a picture, if the electronic device receives a zoom operation of the user, the electronic device needs, on the one hand, to display a preview image corresponding to the target zoom magnification, and, on the other hand, to manage the image buffer queue, so that the electronic device can quickly display the photographed file when it receives the photographing operation of the user.
Taking an electronic device that is a mobile phone as an example: during photographing, the mobile phone displays an interface 401 as shown in fig. 10 a. The interface 401 includes a zoom control 402 for the user to adjust the zoom magnification. Illustratively, the current zoom magnification displayed in the interface 401 is 1.0×, and the mobile phone adjusts the current zoom magnification in response to the user operating the zoom control 402. For example, the mobile phone increases the current zoom magnification in response to the user sliding the zoom control 402 upward.
As also shown in fig. 10a, the interface 401 further includes a photographing control 403, and the mobile phone can generate a photographing file in response to the user operating the photographing control 403, and save the photographing file into the mobile phone (e.g., an album application).
Taking the example that the user holds the electronic device to take a picture, in combination with the above embodiments, when the electronic device takes a picture, the electronic device may start any camera (for example, the main camera) to capture original image frames. Before the electronic device receives a zoom operation input by the user, the electronic device displays a photographing preview image corresponding to the reference zoom magnification (e.g., 1×) of the main camera. After the electronic device receives a zoom operation input by the user, if the zoom operation instructs the electronic device to display a photographing preview image with a target zoom magnification greater than 4.5×, the electronic device starts the secondary camera. Then, the main camera and the secondary camera of the electronic device capture original image frames simultaneously, and a photographing preview image with the target zoom magnification greater than 4.5× is displayed according to the original image frames captured by the main camera and the secondary camera.
For example, in a case where the electronic device does not receive a zoom operation input by the user, or the electronic device receives a zoom operation and the target zoom magnification indicated by the zoom operation is in the range [1.0×, 4.4×], the electronic device displays a photographing preview image corresponding to the target zoom magnification using the original image frames captured by the main camera. When the target zoom magnification indicated by the zoom operation is in the range [4.5×, 4.9×], the electronic device also starts the secondary camera to capture original image frames, and displays a photographing preview image corresponding to the target zoom magnification according to the original image frames captured by the main camera and the secondary camera. When the target zoom magnification indicated by the zoom operation is greater than or equal to 5.0×, the electronic device starts the secondary camera to capture original image frames, and displays a photographing preview image corresponding to the target zoom magnification according to the original image frames captured by the secondary camera.
In some embodiments, when the electronic device receives a zoom operation input by the user, and the zoom operation is used to instruct the electronic device to display a photographing preview image with a target zoom magnification of 4.4×, the main camera captures original image frames in the 2-exp overlapping exposure manner, as shown in fig. 10 b. For example, the main camera captures a first image frame L0 in a first exposure duration (e.g., a long exposure duration) and a second image frame S0 in a second exposure duration (e.g., a short exposure duration); then, the main camera inputs the first image frame L0 and the second image frame S0 into the first preset RAW domain processing module. On the one hand, the first preset RAW domain processing module performs HDR algorithm synthesis processing on the first image frame L0 and the second image frame S0 to generate a first long-exposure image frame L0'. The first preset RAW domain processing module transmits the first long-exposure image frame L0' to the first ISP front-end module, which performs YUV domain processing on the first long-exposure image frame L0' and converts it into a first long-exposure image frame L0' in YUV format. Then, the first ISP front-end module transmits the first long-exposure image frame L0' in YUV format to the first ISP back-end module, which performs image enhancement on it. Finally, the first ISP back-end module outputs the first long-exposure image frame L0' to the display screen, so that the display screen displays the photographing preview image according to the first long-exposure image frame L0'. On the other hand, the first preset RAW domain processing module retains the first image frame L0 and the second image frame S0 in the image buffer queue to generate a first photographing queue. For example, the first preset RAW domain processing module may transmit the first image frame L0 and the second image frame S0 to a first double data rate synchronous dynamic random access memory (DDR), so as to generate the first photographing queue. Then, when the electronic device receives a photographing operation of the user, the electronic device reads the first photographing queue from the first DDR and processes it to generate a photographed file in a target image format. The target image format may be, for example, JPEG or another format, which is not limited in this embodiment of the present application.
It should be noted that, for the example of the first ISP front-end module and the first ISP back-end module, reference may be made to the foregoing embodiments, and details are not repeated here.
Illustratively, the electronic device includes a preset Bayer offline processing module and the preset RAW domain processing module. On this basis, the electronic device can perform Bayer processing and RAW domain processing on the first photographing queue. Then, the electronic device transmits the processed first photographing queue to the first ISP back-end module, which is used for generating a photographed file in the target image format (such as a JPEG file) according to the first photographing queue.
For ease of understanding, the Bayer domain and the RAW domain are explained first. Bayer domain: each lens of a digital camera is provided with an optical sensor for measuring the brightness of light. However, to obtain a full-color image, three optical sensors would generally be required to obtain red, green, and blue three-primary-color information. To reduce cost and volume, manufacturers generally adopt a CCD or CMOS image sensor, and the original image output by a CMOS image sensor is usually in the Bayer-domain RGB format, in which a single pixel contains only one color value. To obtain the gray value of the image, the complete color information of each pixel must first be interpolated, and the gray value of each pixel computed afterwards. That is, the Bayer domain refers to the raw picture format inside the digital camera.
RAW domain: a RAW-domain image, i.e., a RAW image, contains the data produced by the image sensor of a digital camera, scanner, or motion-picture film scanner. It is so named because the RAW-domain image has not yet been processed, printed, or edited. The RAW-domain image contains the most original information of the image and has not undergone the nonlinear processing of the ISP pipeline.
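As an illustration of why a Bayer-domain image must be interpolated before full color (and hence gray values) can be computed, the sketch below performs a naive demosaic of an RGGB mosaic by averaging known neighbors. The RGGB layout and the 3x3 neighborhood averaging are assumptions; a real ISP uses far more sophisticated interpolation.

```python
import numpy as np

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic of an RGGB Bayer mosaic into an H x W x 3 image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # red sample sites
    mask[0::2, 1::2, 1] = True   # green sample sites (even rows)
    mask[1::2, 0::2, 1] = True   # green sample sites (odd rows)
    mask[1::2, 1::2, 2] = True   # blue sample sites
    for c in range(3):
        known = np.where(mask[:, :, c], raw, 0.0)
        pad_v = np.pad(known, 1, mode="edge")
        pad_m = np.pad(mask[:, :, c].astype(float), 1, mode="edge")
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for dy in (0, 1, 2):          # accumulate the 3x3 neighborhood
            for dx in (0, 1, 2):
                acc += pad_v[dy:dy + h, dx:dx + w]
                cnt += pad_m[dy:dy + h, dx:dx + w]
        # Keep known samples; fill the two missing channels per pixel
        # from the average of the surrounding known samples.
        rgb[:, :, c] = np.where(mask[:, :, c], raw,
                                acc / np.maximum(cnt, 1e-6))
    return rgb
```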
In this embodiment, the original image frames captured by the cameras of the electronic device are RAW-domain images. When a RAW-domain image is converted into another format, some image information is lost. Therefore, in the above embodiment, when the electronic device receives a photographing operation of the user, the electronic device reads the first photographing queue from the first DDR, performs RAW domain processing on it by using the preset RAW domain processing module, and then transmits the processed original image frames to the first ISP back-end module to be converted into a photographing file in the target image format. In this way, more details of the original image frames are retained, which improves the display effect of the photographing file.
In other embodiments, when the electronic device receives a zoom operation input by a user, and the zoom operation is used to instruct the electronic device to display a photographing preview image with a target zoom magnification of 4.5×, as shown in fig. 11, the main camera captures original image frames in a 2-exp overlapping exposure manner. For example, the main camera captures a first image frame L0 with a first exposure duration (e.g., a long exposure duration), and captures a second image frame S0 with a second exposure duration (e.g., a short exposure duration); then, the main camera inputs the first image frame L0 and the second image frame S0 into the first preset RAW domain processing module. Meanwhile, the sub camera also captures original image frames in the 2-exp overlapping exposure manner. For example, the sub camera captures a third image frame L1 with the first exposure duration (e.g., the long exposure duration), and captures a fourth image frame S1 with the second exposure duration (e.g., the short exposure duration); then, the sub camera inputs the third image frame L1 and the fourth image frame S1 into the second preset RAW domain processing module. On one hand, the first preset RAW domain processing module performs HDR algorithm synthesis processing on the first image frame L0 and the second image frame S0 to generate a first long-exposure image frame L0'. The first preset RAW domain processing module transmits the first long-exposure image frame L0' to the first ISP front-end module, and the first ISP front-end module performs YUV domain processing on the first long-exposure image frame L0' to convert it into a first long-exposure image frame L0' in a YUV format. Then, the first ISP front-end module transmits the first long-exposure image frame L0' in the YUV format to the first ISP back-end module, and the first ISP back-end module performs image enhancement on it. Correspondingly, the second preset RAW domain processing module performs HDR algorithm synthesis processing on the third image frame L1 and the fourth image frame S1 to generate a second long-exposure image frame L1'. The second preset RAW domain processing module transmits the second long-exposure image frame L1' to the second ISP front-end module, and the second ISP front-end module performs YUV domain processing on the second long-exposure image frame L1' to convert it into a second long-exposure image frame L1' in the YUV format. Then, the second ISP front-end module transmits the second long-exposure image frame L1' in the YUV format to the second ISP back-end module, and the second ISP back-end module performs image enhancement on it. Finally, the first ISP back-end module outputs the first long-exposure image frame L0' to the display screen, and the second ISP back-end module outputs the second long-exposure image frame L1' to the display screen, so that the display screen displays the photographing preview image according to the first long-exposure image frame L0' and the second long-exposure image frame L1'.
On the other hand, the first preset RAW domain processing module retains the first image frame L0 and the second image frame S0 in the image buffer queue to generate a first photographing queue, and the second preset RAW domain processing module retains the third image frame L1 and the fourth image frame S1 in the image buffer queue to generate a second photographing queue. For example, the first preset RAW domain processing module may transmit the first image frame L0 and the second image frame S0 to the first DDR, and the second preset RAW domain processing module may transmit the third image frame L1 and the fourth image frame S1 to a second DDR. Then, when the electronic device receives a photographing operation of the user, the electronic device reads the first photographing queue from the first DDR and the second photographing queue from the second DDR, and performs Bayer processing and RAW domain processing on the two queues respectively. After that, the electronic device transmits the processed first photographing queue to the first ISP back-end module, and transmits the processed second photographing queue to the second ISP back-end module.
It should be noted that, because the field of view (FOV) of the original image frames captured by the main camera differs from that of the original image frames captured by the sub camera, after the electronic device transmits the processed first photographing queue to the first ISP back-end module and the processed second photographing queue to the second ISP back-end module, the electronic device may further perform field angle fusion on the first photographing queue and the second photographing queue, and generate a photographing file in the target image format from the fused queues.
It should be understood that the first photographing queue holds image frames output by the main camera, and the second photographing queue holds image frames output by the sub camera. When the electronic device generates the photographing file in the target image format from the first photographing queue and the second photographing queue, the electronic device may determine the timestamps of the image frames in the first photographing queue and the timestamps of the image frames in the second photographing queue, and align the two sets of timestamps.
For example, when a timestamp in the first photographing queue is the same as a timestamp in the second photographing queue, the electronic device may perform Bayer processing and RAW domain processing on the two image frames with that timestamp as a group. For another example, when the timestamps in the first photographing queue and the second photographing queue are not the same, the electronic device may treat two image frames whose timestamps are close as a group of images for Bayer processing and RAW domain processing. For instance, if the difference between a first timestamp in the first photographing queue and a second timestamp in the second photographing queue is smaller than a preset value, the electronic device may perform Bayer processing and RAW domain processing on the image frame corresponding to the first timestamp and the image frame corresponding to the second timestamp as a group of images.
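A minimal sketch of this timestamp pairing is shown below; the microsecond units, the queue layout, and the default threshold are illustrative assumptions standing in for the embodiment's "preset value".

```python
def pair_by_timestamp(main_queue, sub_queue, max_diff_us=5000):
    """Pair frames from the first and second photographing queues.

    main_queue / sub_queue: lists of (timestamp_us, frame) sorted by time.
    """
    if not main_queue or not sub_queue:
        return []
    pairs, j = [], 0
    for ts_main, frame_main in main_queue:
        # Advance to the sub-camera frame closest in time to this frame.
        while (j + 1 < len(sub_queue) and
               abs(sub_queue[j + 1][0] - ts_main) <= abs(sub_queue[j][0] - ts_main)):
            j += 1
        ts_sub, frame_sub = sub_queue[j]
        # Same timestamp, or a difference below the preset value: treat
        # the two frames as one group for Bayer and RAW domain processing.
        if abs(ts_sub - ts_main) < max_diff_us:
            pairs.append((frame_main, frame_sub))
    return pairs
```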
Still as shown in fig. 11, in the process in which the electronic device generates the photographing preview image from the original image frames captured by the main camera and the sub camera, after the first preset RAW domain processing module transmits the first long-exposure image frame L0' (obtained by fusing the first image frame L0 and the second image frame S0 captured by the main camera) to the first ISP front-end module, and the second preset RAW domain processing module transmits the second long-exposure image frame L1' (obtained by fusing the third image frame L1 and the fourth image frame S1 captured by the sub camera) to the second ISP front-end module, the first long-exposure image frame L0' and the second long-exposure image frame L1' may be processed by a 3A statistical algorithm in the first ISP front-end module and the second ISP front-end module, respectively. Illustratively, the 3A statistical algorithms include an auto focus (AF) algorithm, an auto exposure (AE) algorithm, and an auto white balance (AWB) algorithm.
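For orientation, the toy sketch below computes simple 3A statistics from a fused frame. The 18% gray AE target, the gray-world AWB rule, and the Laplacian-variance AF score are common textbook choices used here as assumptions; the actual 3A algorithms run on dedicated ISP statistics hardware and are not described at this level of detail.

```python
import numpy as np

def stats_3a(rgb: np.ndarray):
    """Toy 3A statistics over an RGB frame with values in [0, 1]."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # AE: gain that steers the mean luminance toward ~18% gray.
    ae_gain = 0.18 / max(float(luma.mean()), 1e-6)
    # AWB: gray-world gains that equalize channel means against green.
    means = rgb.reshape(-1, 3).mean(axis=0)
    awb_gains = means[1] / np.maximum(means, 1e-6)  # [R_gain, 1.0, B_gain]
    # AF: focus score as the variance of a simple Laplacian response.
    lap = (np.roll(luma, 1, 0) + np.roll(luma, -1, 0) +
           np.roll(luma, 1, 1) + np.roll(luma, -1, 1) - 4.0 * luma)
    af_score = float(lap.var())
    return ae_gain, awb_gains, af_score
```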
In still other embodiments, when the electronic device receives a zoom operation input by a user, and the zoom operation is used to instruct the electronic device to display a photographing preview image with a target zoom magnification of 5.0×, the sub camera captures original image frames in a 2-exp overlapping exposure manner, as shown in fig. 12. For example, the sub camera captures a third image frame L1 with a first exposure duration (e.g., a long exposure duration), and captures a fourth image frame S1 with a second exposure duration (e.g., a short exposure duration); then, the sub camera inputs the third image frame L1 and the fourth image frame S1 into the second preset RAW domain processing module. On one hand, the second preset RAW domain processing module performs HDR algorithm synthesis processing on the third image frame L1 and the fourth image frame S1 to generate a second long-exposure image frame L1'. The second preset RAW domain processing module transmits the second long-exposure image frame L1' to the second ISP front-end module, and the second ISP front-end module performs YUV domain processing on the second long-exposure image frame L1' to convert it into a second long-exposure image frame L1' in a YUV format. Then, the second ISP front-end module transmits the second long-exposure image frame L1' in the YUV format to the second ISP back-end module, and the second ISP back-end module performs image enhancement on it. The second ISP back-end module outputs the second long-exposure image frame L1' to the display screen, so that the display screen displays the photographing preview image according to the second long-exposure image frame L1'. On the other hand, the second preset RAW domain processing module retains the third image frame L1 and the fourth image frame S1 in the image buffer queue to generate a second photographing queue; for example, it may transmit the third image frame L1 and the fourth image frame S1 to the second DDR. Then, when the electronic device receives a photographing operation of the user, the electronic device reads the second photographing queue from the second DDR and performs Bayer processing and RAW domain processing on it. The electronic device then transmits the processed second photographing queue to the second ISP back-end module, and the second ISP back-end module generates a photographing file in the target image format (e.g., a file in JPEG format) from the second photographing queue.
In the above embodiments, when the electronic device is in the photographing state, the electronic device may retain the original image frames captured by the main camera and/or the sub camera in the image buffer queue. When the photographing control is triggered, the electronic device retrieves the relevant original image frames from the image buffer queue for encoding and display, so as to generate the photographing file. That is, when the user triggers photographing, the electronic device can immediately fetch the relevant image frames from the image buffer queue and present the result on the display screen, thereby realizing zero-delay capture.
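A minimal sketch of this zero-shutter-lag buffering idea follows; the ring-buffer capacity and the class and method names are illustrative assumptions, whereas in the embodiments the queue is held in DDR by the preset RAW domain processing module.

```python
from collections import deque

class PhotoQueue:
    """Ring buffer standing in for the image buffer queue in DDR."""

    def __init__(self, capacity: int = 8):
        self._frames = deque(maxlen=capacity)  # oldest frames are evicted

    def push(self, timestamp_us: int, raw_frame) -> None:
        """Called for every RAW frame captured while previewing."""
        self._frames.append((timestamp_us, raw_frame))

    def snapshot(self, shutter_ts_us: int):
        """On shutter press, return the cached frame closest in time to
        the press instead of waiting for a new capture ('zero delay')."""
        if not self._frames:
            return None
        return min(self._frames, key=lambda f: abs(f[0] - shutter_ts_us))[1]
```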
Fig. 13 is a schematic flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 13, the method includes S501-S503.
S501, the electronic device displays a first preview image in response to an operation of a user starting the first camera.
Wherein the first preview image is generated from a first long exposure image frame; the first long exposure image frame is generated by fusing the first image frame and the second image frame by the electronic equipment; the first image frame and the second image frame are collected by the first camera, the exposure duration of the first image frame is a first exposure duration, and the exposure duration of the second image frame is a second exposure duration. Wherein the first exposure time length and the second exposure time length are different.
For example, the first camera may be the main camera described in the above embodiments. The first preview image may be a video preview image or a photographing preview image.
In some embodiments, as shown in fig. 5b, 7b, 8b, 10b, and 11, the first image frame may be, for example, L0, the second image frame may be, for example, S0, and the first long exposure image frame may be, for example, L0'. In addition, with reference to the above embodiment, the first exposure time period is longer than the second exposure time period; the first exposure time period may be, for example, a long exposure time period, and the second exposure time period may be, for example, a short exposure time period.
S502, the electronic device determines a target zoom magnification and starts the second camera in response to a zoom operation input by the user.
Illustratively, as shown in fig. 7a and 10a, the zoom operation input by the user may be, for example, a touch operation (e.g., a sliding operation or a clicking operation) of the control 306 or the control 402 by the user. Wherein the zoom operation is used to determine a target zoom magnification.
The second camera may be, for example, the sub camera described in the above embodiments.
S503, the electronic device displays a second preview image corresponding to the target zoom magnification.
When the target zoom magnification is larger than or equal to a first preset value and smaller than or equal to a second preset value, the second preview image is generated by fusing the second long-exposure image frame and the third long-exposure image frame. The second long exposure image frame is generated by fusing a third image frame and a fourth image frame by the electronic equipment; the third long-exposure image frame is generated by fusing the fifth image frame and the sixth image frame by the electronic device. The third image frame and the fourth image frame are collected by the first camera; the fifth image frame and the sixth image frame are acquired by the second camera. The exposure time of the third image frame and the fifth image frame is a first exposure time, and the exposure time of the fourth image frame and the sixth image frame is a second exposure time.
Illustratively, as shown in fig. 5b, 7b, 8b, 10b, and 11, the third image frame may be, for example, L0, the fourth image frame may be, for example, S0, and the second long exposure image frame may be, for example, L0'. It should be noted that the first image frame and the second image frame, and the third image frame and the fourth image frame, are acquired by the first camera at different times. For example, the first image frame and the second image frame are acquired by the first camera at a first time, and the third image frame and the fourth image frame are acquired by the first camera at a second time.
Illustratively, as shown in fig. 5b, 8b, 11 and 12, the fifth image frame may be, for example, L1, the sixth image frame may be, for example, S1, and the third long-exposure image frame may be, for example, L1'.
In some embodiments, as described in conjunction with the above embodiments, the first preset value may be, for example, 4.5×, and the second preset value may be, for example, 4.9×. That is, when the target zoom magnification is in the range [4.5×, 4.9×], the second preview image is generated by fusing the second long-exposure image frame and the third long-exposure image frame; in other words, the second preview image displayed by the electronic device is generated by fusing the original image frames captured by the first camera and the second camera.
In some embodiments, the second preview image is generated from the second long-exposure image frame when the target zoom magnification is greater than or equal to a third preset value and less than or equal to a fourth preset value. Wherein the fourth preset value is less than the first preset value.
Illustratively, in conjunction with the above embodiments, the third preset value may be, for example, 1.0×, and the fourth preset value may be, for example, 4.4×. That is, when the target zoom magnification is in the range [1.0×, 4.4×], the second preview image is generated from the second long-exposure image frame; in other words, the second preview image displayed by the electronic device is generated by fusing the original image frames captured by the first camera.
It should be noted that, in this embodiment, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the second preview image is generated from the second long-exposure image frame, and the second long-exposure image frame is generated by fusing the third image frame and the fourth image frame captured by the first camera. In other words, in this embodiment, if the target zoom magnification is small, the second preview image is generated by fusing original image frames captured by the first camera; on this basis, the second preview image may also be generated from the first long-exposure image frame. Illustratively, the third preset value may be, for example, 1.0×, and the fourth preset value may be, for example, 4.4×. When the target zoom magnification is 1.0×, the electronic device may generate the second preview image from the first long-exposure image frame.
In some embodiments, the second preview image is generated from the third long-exposure image frame when the target zoom magnification is greater than or equal to a fifth preset value. And the fifth preset value is larger than the second preset value.
Illustratively, in combination with the above embodiments, the fifth preset value may be, for example, 5.0×. That is, when the target zoom magnification is greater than or equal to 5.0×, the second preview image is generated from the third long-exposure image frame; in other words, the second preview image displayed by the electronic device is generated by fusing the original image frames captured by the second camera.
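Putting the example thresholds of these embodiments together, the preview routing can be sketched as follows. Behavior between 4.4x and 4.5x, and below 1.0x, is not spelled out in the text, so the sketch simply rejects those values.

```python
def preview_source(zoom: float) -> str:
    """Route the preview pipeline by target zoom magnification, using the
    example thresholds 1.0x, 4.4x, 4.5x, 4.9x and 5.0x from the text."""
    if 1.0 <= zoom <= 4.4:
        return "main only"          # fuse L0/S0 from the first camera
    if 4.5 <= zoom <= 4.9:
        return "main + sub fusion"  # fuse L0' with L1' across cameras
    if zoom >= 5.0:
        return "sub only"           # fuse L1/S1 from the second camera
    raise ValueError("zoom magnification outside the ranges given above")
```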
In some embodiments, the electronic device activates the second camera when the target zoom magnification is greater than a first preset zoom magnification. Illustratively, the first preset zoom magnification may be, for example, 4.4×. That is, the electronic device activates the second camera only when the target zoom magnification is greater than 4.4×. The second camera may be a telephoto camera (such as the sub camera in the above embodiments).
In some embodiments, the second preview image is a video preview image. The method further includes: the electronic device displays a first interface, where the first interface is a preview interface during shooting and includes a recording control; the electronic device generates a video file in response to an operation of the user on the recording control; and the video file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
Illustratively, as shown in (1) of fig. 7a, the first interface may be, for example, an interface 301, and the recording control may be, for example, a control 302.
It should be noted that, in this embodiment, the video file generated by the electronic device is also related to the target zoom magnification. For example, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the video file is generated by the electronic device through fusion processing of the second long-exposure image frame; when the target zoom magnification is greater than or equal to the fifth preset value, the video file is generated by the electronic device through fusion processing of the third long-exposure image frame.
In some embodiments, the second preview image is a preview image of the electronic device during recording of the video.
Illustratively, after the user clicks the recording control 302, the second preview image is displayed in the interface 303, as shown in (2) of fig. 7a. It should be noted that, during video recording, the image displayed in the interface 303 may also be referred to as a preview image; the electronic device generates the video file only after the user clicks the end-recording control 305.
In some embodiments, the second preview image is a photographing preview image. The method further includes: the electronic device displays a second interface, where the second interface is a preview interface during shooting and includes a photographing control; the electronic device generates a photographing file in response to an operation of the user on the photographing control; and the photographing file is generated by the electronic device by fusing the second long-exposure image frame and the third long-exposure image frame.
Illustratively, as shown in fig. 10a, the second interface may be, for example, an interface 401, and the photographing control may be, for example, a control 403.
It should be noted that, in this embodiment, the photographing file generated by the electronic device is also related to the target zoom magnification. For example, when the target zoom magnification is greater than or equal to the third preset value and less than or equal to the fourth preset value, the photographing file is generated by the electronic device through fusion processing of the second long-exposure image frame; when the target zoom magnification is greater than or equal to the fifth preset value, the photographing file is generated by the electronic device through fusion processing of the third long-exposure image frame.
In some embodiments, before the second preview image corresponding to the target zoom magnification is displayed, the method may include: the electronic device performs image conversion processing on the second long-exposure image frame and the third long-exposure image frame. The image conversion processing includes: the electronic device converts the second long-exposure image frame into a second long-exposure image frame in a target format, and converts the third long-exposure image frame into a third long-exposure image frame in the target format. The transmission bandwidth of the second long-exposure image frame is higher than that of the second long-exposure image frame in the target format, and the transmission bandwidth of the third long-exposure image frame is higher than that of the third long-exposure image frame in the target format.
Illustratively, with reference to the foregoing embodiments and fig. 7b, 8b, 9, 10b, 11 and 12, the electronic device may perform the image conversion processing on the second long-exposure image frame through the first ISP front-end module, and on the third long-exposure image frame through the second ISP front-end module. For example, a GTM algorithm module is preset in each of the first ISP front-end module and the second ISP front-end module, and the GTM algorithm module is used to perform the image conversion processing on the second long-exposure image frame and the third long-exposure image frame; the image conversion processing may be, for example, YUV domain processing. On this basis, the second long-exposure image frame in the target format obtained through the image conversion processing may be, for example, a second long-exposure image frame in the YUV domain format, and the third long-exposure image frame in the target format may be, for example, a third long-exposure image frame in the YUV domain format.
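To see why the target-format conversion lowers transmission bandwidth, the sketch below converts an RGB frame to planar YUV 4:2:0, which carries half the bytes of the RGB original; this is one plausible reading of the bandwidth reduction described above. The JPEG-style BT.601 full-range coefficients are an assumption, as the embodiments do not specify the exact YUV variant used by the GTM algorithm module.

```python
import numpy as np

def rgb_to_yuv420(rgb: np.ndarray):
    """Convert an H x W x 3 RGB image (H and W even) to planar YUV 4:2:0.

    4:2:0 keeps one chroma pair per 2x2 block, so the result is 1.5 bytes
    per pixel versus 3 for RGB, i.e., half the transmission bandwidth.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 0.5
    v = 0.5 * r - 0.419 * g - 0.081 * b + 0.5
    # Subsample chroma by averaging each 2x2 block.
    h, w = y.shape
    u420 = u.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    v420 = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, u420, v420
```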
In some embodiments, the electronic device includes N consecutive frames of the second long-exposure image frame and M consecutive frames of the third long-exposure image frame, where N ≥ 1 and M ≥ 1. The image conversion processing further includes: among the N consecutive second long-exposure image frames, if the local information in the second long-exposure image frame at the n-th time does not meet a preset condition, the electronic device repairs it according to the local information in the second long-exposure image frames at the (n-1)-th time and the (n+1)-th time, where n ≥ 2; and/or, among the M consecutive third long-exposure image frames, if the local information in the third long-exposure image frame at the m-th time does not meet the preset condition, the electronic device repairs it according to the local information in the third long-exposure image frames at the (m-1)-th time and the (m+1)-th time, where m ≥ 2. The local information includes at least one of color, texture, or shape.
With reference to the foregoing embodiment, the GTM algorithm module is further configured to, when the local information in the second long-exposure image frame at the current time is insufficient, complete it according to the local information in the second long-exposure image frames at the previous time and the next time; correspondingly, when the local information in the third long-exposure image frame at the current time is insufficient, it is completed according to the local information in the third long-exposure image frames at the previous time and the next time.
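A minimal sketch of this temporal repair is given below. The boolean mask marking deficient regions and the plain average of the neighboring frames are assumptions; the embodiments specify neither how the preset condition on color, texture, or shape is evaluated nor how the GTM algorithm module combines the neighboring frames.

```python
import numpy as np

def repair_frame(prev: np.ndarray, cur: np.ndarray, nxt: np.ndarray,
                 bad: np.ndarray) -> np.ndarray:
    """Repair local information at time n from the frames at n-1 and n+1.

    `bad` is a boolean mask over pixels whose local information fails the
    preset condition; it is taken as given here.
    """
    # Assumed repair rule: temporal average of the neighboring frames.
    # A real implementation might add motion compensation first.
    candidate = 0.5 * (prev.astype(np.float64) + nxt.astype(np.float64))
    out = cur.astype(np.float64).copy()
    out[bad] = candidate[bad]
    return out.astype(cur.dtype)
```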
In some embodiments, the method further comprises: the electronic equipment carries out multi-shot smoothing algorithm processing on the second long exposure image frame in the target format and the third long exposure image frame in the target format; wherein the multi-shot smoothing algorithm is used to reduce noise or distortion of the second long exposure image frame of the target format and the third long exposure image frame of the target format.
Illustratively, in combination with the above-described embodiments, the electronic device may perform the multi-shot smoothing algorithm processing on the second long-exposure image frame of the target format and the third long-exposure image frame of the target format by the SAT algorithm module.
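As a rough illustration of what such a smoothing step can do at the camera handover, the sketch below blends the two target-format frames with a weight that ramps across the dual-camera zoom range, so the transition does not pop. The linear ramp and the assumption that both frames are already aligned to a common field of view are illustrative; the actual multi-shot smoothing (SAT) algorithm is not described at this level of detail in the text.

```python
import numpy as np

def sat_blend(main_yuv: np.ndarray, sub_yuv: np.ndarray, zoom: float,
              lo: float = 4.5, hi: float = 4.9) -> np.ndarray:
    """Blend main- and sub-camera frames as zoom crosses the dual range."""
    w = float(np.clip((zoom - lo) / (hi - lo), 0.0, 1.0))
    return (1.0 - w) * main_yuv + w * sub_yuv
```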
In some embodiments, the method further comprises: the electronic equipment carries out first preset algorithm processing on a second long exposure image frame in the target format and a third long exposure image frame in the target format; the first preset algorithm processing comprises at least one of image simulation transformation processing, multi-frame high dynamic range image processing or gamma processing.
Exemplarily, the electronic device performs a first preset algorithm processing on a second long exposure image frame in a target format through a first ISP back-end module; and carrying out first preset algorithm processing on the third long exposure image frame in the target format through a second ISP rear-end module.
In some embodiments, the method further comprises: after the electronic equipment displays the second preview image, the electronic equipment caches the image frames acquired by the first camera in the first photographing queue and caches the image frames acquired by the second camera in the second photographing queue.
Illustratively, the electronic device retains the image frames captured by the first camera in a buffer queue through the first preset RAW domain processing module to generate the first photographing queue, and retains the image frames captured by the second camera in a buffer queue through the second preset RAW domain processing module to generate the second photographing queue. For example, the first preset RAW domain processing module and the second preset RAW domain processing module each include a DDR: the first preset RAW domain processing module transmits the image frames captured by the first camera to its DDR to generate the first photographing queue, and the second preset RAW domain processing module transmits the image frames captured by the second camera to its DDR to generate the second photographing queue.
In some embodiments, the electronic device generates the photographing file in response to the user operating the photographing control, including: the electronic equipment responds to the operation of a user on the photographing control, selects a first image from the first photographing queue, and selects a second image from the second photographing queue; the first image is the latest frame image in all the images of the first photographing queue, and the second image is the latest frame image in all the images of the second photographing queue; the electronic equipment carries out second preset algorithm processing on the first image and the second image to generate a photographing file in a target image format; the second pre-set algorithm process is used to preserve details in the first image and the second image.
Illustratively, the second preset algorithm process includes a bayer process and a RAW domain process. The target image format may be JPEG, for example.
In some embodiments, the method further comprises: the electronic equipment carries out third preset algorithm processing on the first image and the second image; the third preset algorithm is used for fusing the field angles of the first image and the second image.
In some embodiments, the electronic device performs a second preset algorithm process on the first image and the second image, including: the electronic equipment carries out second preset algorithm processing on the first target image in the first photographing queue and the second target image in the second photographing queue; wherein the timestamp of the first target image is the same as the timestamp of the second target image; or the difference value between the time stamp of the first target image and the time stamp of the second target image is smaller than the preset value.
The embodiment of the application provides an electronic device which comprises a display screen, a first camera, a second camera, a memory and one or more processors; the display screen is used for displaying images collected by the first camera and the second camera or images generated by the processor; the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the various functions or steps performed by the electronic device in the embodiments described above. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 3.
Embodiments of the present application further provide a chip system, as shown in fig. 14, the chip system 1800 includes at least one processor 1801 and at least one interface circuit 1802. The processor 1801 may be the processor 110 shown in fig. 3 in the foregoing embodiment. The interface circuit 1802 may be, for example, an interface circuit between the processor 110 and the external memory 120; or an interface circuit between the processor 110 and the internal memory 121.
The processor 1801 and the interface circuit 1802 may be interconnected by wires. For example, the interface circuit 1802 may be used to receive signals from other devices (e.g., a memory of an electronic device). Also for example, the interface circuit 1802 may be used to send signals to other devices, such as the processor 1801. Illustratively, the interface circuit 1802 may read instructions stored in the memory and send the instructions to the processor 1801. The instructions, when executed by the processor 1801, may cause the electronic device to perform the steps performed by the handset 180 in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
Embodiments of the present application further provide a computer-readable storage medium, which includes computer instructions, and when the computer instructions are executed on an electronic device, the electronic device is caused to perform various functions or steps performed by the electronic device in the foregoing method embodiments.
Embodiments of the present application further provide a computer program product, which, when running on a computer, causes the computer to execute each function or step performed by the electronic device in the above method embodiments.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed in multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. An image processing method is applied to electronic equipment; the electronic equipment comprises a first camera and a second camera; the method comprises the following steps:
the electronic equipment responds to the operation that a user starts the first camera and displays a first preview image; the first preview image is generated from a first long exposure image frame; the first long exposure image frame is generated by fusing a first image frame and a second image frame by the electronic equipment;
the electronic equipment responds to zoom operation input by a user, determines a target zoom magnification and starts the second camera to display a second preview image corresponding to the target zoom magnification; when the target zoom magnification is larger than or equal to a first preset value and smaller than or equal to a second preset value, the second preview image is generated by fusing a second long-exposure image frame and a third long-exposure image frame; the second long-exposure image frame is generated by fusing a third image frame and a fourth image frame by the electronic equipment; the third long-exposure image frame is generated by fusing a fifth image frame and a sixth image frame by the electronic equipment;
wherein an exposure duration of the first image frame, the third image frame, and the fifth image frame is a first exposure duration; the exposure time of the second image frame, the fourth image frame and the sixth image frame is a second exposure time; the first exposure duration is different from the second exposure duration;
the first image frame, the second image frame, the third image frame, and the fourth image frame are acquired by the first camera; the fifth image frame and the sixth image frame are captured by the second camera.
2. The method of claim 1, further comprising: when the target zoom magnification is greater than or equal to a third preset value and less than or equal to a fourth preset value, the second preview image is generated from the second long-exposure image frame;
wherein the fourth preset value is less than the first preset value.
3. The method of claim 1, further comprising:
when the target zoom magnification is greater than or equal to a fifth preset value, the second preview image is generated by the third long-exposure image frame;
wherein the fifth preset value is greater than the second preset value.
4. The method of any of claims 1-3, wherein the electronic device activating the second camera comprises:
and when the target zooming magnification is larger than a first preset zooming magnification, the electronic equipment starts the second camera.
5. The method of any of claims 1-4, wherein the second preview image is a video preview image; the method further comprises the following steps:
the electronic equipment displays a first interface; the first interface is a preview interface during shooting, and comprises a recording control;
the electronic equipment responds to the operation of the user on the recording control to generate a video file; and the video file is generated by the electronic equipment through fusion processing of the second long exposure image frame and the third long exposure image frame.
6. The method according to any of claims 1-4, wherein the second preview image is a preview image of the electronic device during recording of the video.
7. The method of any of claims 1-4, wherein the second preview image is a photographed preview image; the method further comprises the following steps:
the electronic equipment displays a second interface; the second interface is a preview interface during shooting, and comprises a shooting control;
the electronic equipment responds to the operation of the user on the photographing control to generate a photographing file; and the photographing file is generated by the electronic equipment through the fusion processing of the second long exposure image frame and the third long exposure image frame.
8. The method of claim 1, wherein prior to said displaying the second preview image corresponding to the target zoom magnification, the method further comprises:
the electronic equipment performs image conversion processing on the second long exposure image frame and the third long exposure image frame;
the image conversion process includes:
the electronic equipment converts the second long exposure image frame into a second long exposure image frame in a target format and converts the third long exposure image frame into a third long exposure image frame in the target format; the bandwidth of the second long exposure image frame is higher than that of the second long exposure image frame in the target format, and the bandwidth of the third long exposure image frame is higher than that of the third long exposure image frame in the target format.
9. The method of claim 8, wherein the electronic device includes N consecutive frames of the second long exposure image frame and M consecutive frames of the third long exposure image frame; N is greater than or equal to 1, and M is greater than or equal to 1; the image conversion processing further comprises:
in the N consecutive second long exposure image frames, if local information in the second long exposure image frame at an n-th moment does not meet a preset condition, the electronic equipment repairs it according to local information in the second long exposure image frames at an (n-1)-th moment and an (n+1)-th moment; n is greater than or equal to 2; and/or
in the M consecutive third long exposure image frames, if local information in the third long exposure image frame at an m-th moment does not meet the preset condition, the electronic equipment repairs it according to local information in the third long exposure image frames at an (m-1)-th moment and an (m+1)-th moment; m is greater than or equal to 2;
wherein the local information comprises at least one of color, texture, or shape.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
the electronic equipment carries out multi-shot smoothing algorithm processing on the second long exposure image frame in the target format and the third long exposure image frame in the target format; the multi-shot smoothing algorithm is used for reducing noise or distortion of the second long-exposure image frame of the target format and the third long-exposure image frame of the target format.
11. The method of claim 10, further comprising:
the electronic equipment carries out first preset algorithm processing on the second long exposure image frame in the target format and the third long exposure image frame in the target format; wherein the first preset algorithm processing comprises at least one of image simulation transformation processing, multi-frame high dynamic range image processing or gamma processing.
12. The method of claim 7, further comprising:
after the electronic equipment displays a second preview image, caching the image frame collected by the first camera into a first photographing queue by the electronic equipment; and caching the image frames collected by the second camera into a second photographing queue.
13. The method of claim 12, wherein the electronic device generates a photo file in response to a user operating a photo control, comprising:
the electronic equipment responds to the operation of a user on the photographing control, selects a first image from the first photographing queue, and selects a second image from the second photographing queue; the first image is the image of the latest frame in all the images of the first photographing queue; the second image is the image of the latest frame in all the images of the second photographing queue;
the electronic equipment performs second preset algorithm processing on the first image and the second image to generate a photographing file in a target image format; the second pre-set algorithm processing is used for preserving details in the first image and the second image.
14. The method of claim 13, further comprising:
the electronic equipment carries out third preset algorithm processing on the first image and the second image; the third preset algorithm is used for fusing the field angles of the first image and the second image.
15. The method according to claim 13 or 14, wherein the electronic device performs a second preset algorithm processing on the first image and the second image, and the method comprises:
the electronic equipment carries out second preset algorithm processing on the first target image in the first photographing queue and the second target image in the second photographing queue; the timestamp of the first target image is the same as the timestamp of the second target image; or the difference value between the time stamp of the first target image and the time stamp of the second target image is smaller than a preset value.
16. An electronic device, comprising: the system comprises a display screen, a first camera, a second camera, a memory and one or more processors;
the display screen is used for displaying images collected by the first camera and the second camera or images generated by the processor; the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-15.
17. A computer-readable storage medium comprising computer instructions; the computer instructions, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-15.
CN202210112240.5A 2021-10-09 2022-01-29 Image processing method and electronic equipment Pending CN115967846A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/113363 WO2023056785A1 (en) 2021-10-09 2022-08-18 Image processing method and electronic device
EP22877809.8A EP4287607A1 (en) 2021-10-09 2022-08-18 Image processing method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021111763406 2021-10-09
CN202111176340 2021-10-09

Publications (1)

Publication Number Publication Date
CN115967846A true CN115967846A (en) 2023-04-14

Family

ID=87360471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210112240.5A Pending CN115967846A (en) 2021-10-09 2022-01-29 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115967846A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098000A1 (en) * 2016-10-05 2018-04-05 Samsung Electronics Co., Ltd. Image processing systems including plurality of image sensors and electronic devices including the same
CN110198419A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110198418A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110213502A (en) * 2019-06-28 2019-09-06 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
KR20200018921A (en) * 2018-08-13 2020-02-21 자화전자(주) Apparatus for generating image with multi zoom
CN111641778A (en) * 2018-03-26 2020-09-08 华为技术有限公司 Shooting method, device and equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination