CN113382169A - Photographing method and electronic equipment - Google Patents

Photographing method and electronic equipment

Info

Publication number
CN113382169A
Authority
CN
China
Prior art keywords
image
images
exposure
photographing
electronic device
Prior art date
Legal status
Granted
Application number
CN202110681582.4A
Other languages
Chinese (zh)
Other versions
CN113382169B (en)
Inventor
陈国乔
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110681582.4A
Publication of CN113382169A
Application granted
Publication of CN113382169B
Legal status: Active

Classifications

    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/80 — Camera processing pipelines; components thereof
    • H04N 23/632 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing
    • H04N 23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N 5/265 — Mixing (studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects)

Abstract

The present application provides a photographing method and an electronic device, and relates to the field of image processing. The method can shorten photographing time and improve the user experience. The method comprises: acquiring an image in a photographing preview mode; and, after receiving a first operation performed by a user to take a photograph, generating a composite image by using M frames of images including a first captured image. The first captured image is an image captured in the photographing preview mode and includes an image of at least one exposure parameter; the M frames of images include images of N exposure parameters; and M and N are each positive integers greater than 1. The application is applied to photographing.

Description

Photographing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a photographing method and an electronic device.
Background
In real-world conditions, the luminance differences observed, i.e., the ratio of the brightest object luminance to the darkest object luminance, are about 10^8, and the greatest luminance difference the human eye can perceive is about 10^5. However, current image display and image acquisition devices, such as displays and cameras, can represent only 256 different luminance levels. Therefore, when shooting with a camera, on the one hand, the camera's exposure can be reduced so that the picture shows more detail in dark areas, but this sacrifices detail in the high-brightness parts of the picture; on the other hand, the camera's exposure can be increased so that the picture shows more detail in bright areas, but this sacrifices detail in the low-brightness parts of the picture.
To make photographs exhibit a wider dynamic range and more picture detail, high-dynamic-range (HDR) technology emerged. Specifically, an HDR photograph, which contains both highlight and shadow detail, is obtained by continuously capturing multiple photographs while successively increasing (or decreasing) their exposure, and then fusing those photographs.
However, HDR requires capturing multiple photographs and then fusing them, which lengthens the photographing time and degrades the user experience.
Disclosure of Invention
The embodiments of the present application provide a photographing method and an electronic device, which are used to shorten the photographing time.
In a first aspect, a photographing method is provided, including: acquiring an image in a photographing preview mode; and, after receiving a first operation performed by a user to take a photograph, generating a composite image by using M frames of images including a first captured image. The first captured image is an image captured in the photographing preview mode, the M frames of images include images of N exposure parameters, and M and N are each positive integers greater than 1. In the method of the present application, after the first operation is received, a composite image can be generated using a first captured image of at least one exposure parameter that was captured before the first operation. The electronic device therefore does not need to spend time, after receiving the first operation, acquiring an image of that at least one exposure parameter (namely, the exposure parameter of the first captured image), which shortens the electronic device's image output time and improves the user experience.
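For illustration only, the flow of the first aspect can be sketched as follows. This is not the patent's implementation: the class and method names, the exposure labels (e.g. "EV0"), the buffer size, and the frame representation are all assumptions.

```python
from collections import deque

class Camera:
    """Hypothetical sketch: cache preview frames, reuse them at shutter time."""
    def __init__(self, needed_exposures=("EV0", "EV-2", "EV-4"), buffer_size=8):
        self.needed = needed_exposures
        self.buffer = deque(maxlen=buffer_size)  # preview frames (ZSL-style cache)

    def on_preview_frame(self, exposure, pixels):
        # Continuously cache frames while in photographing-preview mode.
        self.buffer.append({"exposure": exposure, "pixels": pixels})

    def on_shutter(self, capture_fn):
        # Reuse the most recent buffered frame for each exposure we already have...
        frames = {}
        for f in reversed(self.buffer):
            frames.setdefault(f["exposure"], f)
        # ...and capture only the exposures still missing ("second captured image").
        for ev in self.needed:
            if ev not in frames:
                frames[ev] = {"exposure": ev, "pixels": capture_fn(ev)}
        return [frames[ev] for ev in self.needed]  # M frames covering N exposures

cam = Camera()
cam.on_preview_frame("EV0", [100])
captured_after_shutter = []
result = cam.on_shutter(lambda ev: captured_after_shutter.append(ev) or [0])
```

Here only the EV-2 and EV-4 frames must be captured after the shutter press; the EV0 frame is taken from the preview buffer, which is the time saving the first aspect describes.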
In one possible design, the method further includes: capturing a second captured image after the first operation is received. Generating a composite image using M frames of images including the first captured image comprises: generating the composite image by using the first captured image and the second captured image, where the first captured image and the second captured image together include images of the N exposure parameters. In this design, generating the composite image from the first captured image (captured before the first operation is received) and the second captured image (captured after the first operation is received) avoids, on the one hand, spending time after the first operation acquiring an image of the at least one exposure parameter (i.e., the exposure parameter of the first captured image), which shortens the electronic device's image output time; on the other hand, after the first operation is received, a second captured image of the exposure parameters other than that at least one exposure parameter can be captured according to the current photographing requirement, and the composite image is generated from both, which improves the imaging effect of the final composite image.
In one possible design, the first captured image is an image of a first exposure parameter, and the second captured image includes images of the exposure parameters other than the first exposure parameter among the N exposure parameters. This design considers application scenarios in which the electronic device acquires images of only one exposure parameter (i.e., the first exposure parameter) in the photographing preview mode. In that case, the other images required to generate the composite image, beyond the first captured image, can be obtained by capturing a second captured image (i.e., images of the exposure parameters other than the first exposure parameter among the N exposure parameters) after the first operation is received; the composite image is then generated, which shortens the electronic device's image output time and improves the user experience.
In one possible design, the first exposure parameter is an exposure parameter for a normal exposure of the image.
In one possible design, the first captured image includes images of W exposure parameters, and the second captured image includes images of the exposure parameters other than those W exposure parameters among the N exposure parameters. This design considers application scenarios in which the electronic device acquires images of multiple exposure parameters (i.e., W exposure parameters) in the photographing preview mode. In that case, the other images required to generate the composite image, beyond the first captured image, can be obtained by capturing a second captured image (i.e., images of the exposure parameters other than the W exposure parameters among the N exposure parameters) after the first operation is received; the composite image is then generated, which shortens the electronic device's image output time and improves the user experience.
In one possible design, acquiring an image in the photographing preview mode includes: acquiring images of N exposure parameters in the photographing preview mode. Generating a composite image using M frames of images including the first captured image comprises: fusing the first captured image to generate the composite image, where the first captured image includes images of the N exposure parameters. This design considers application scenarios in which the electronic device can acquire images of multiple exposure parameters in the preview mode, so that all the images needed to generate the composite image are already acquired in the preview mode. After the first operation is received, the composite image only needs to be generated from the already-acquired images, and no time is spent acquiring images, which further shortens the electronic device's image output time and improves the user experience.
In one possible design, the method further includes: displaying a preview image in the photographing preview mode, where the preview image is generated by fusing images of Q exposure parameters, and Q is less than N. Because the preview image displayed in the photographing preview mode is generated by fusing images of only Q exposure parameters, it occupies fewer system resources and is generated faster than an image fused from images of N exposure parameters. This design therefore saves system resources and keeps the preview display smooth in the photographing preview mode.
In one possible design, the first captured image is an image acquired using the staggered high-dynamic-range (stagger HDR) technique. With this design, the effects of shortening the image output time and improving the user experience can be achieved on electronic devices that adopt the stagger HDR technique.
In one possible design, the first captured image is an image acquired using the dual conversion gain (DCG) technique. With this design, the effects of shortening the image output time and improving the user experience can be achieved on electronic devices that adopt the DCG technique.
In one possible design, the method further includes: determining whether the first captured image satisfies a preset condition, where the preset condition includes at least one of the following: the moving speed of an object in the image is less than a first threshold; the shake amplitude of the electronic device when the image was captured is less than a second threshold; the auto-focus function had converged when the image was captured; and the auto-exposure function had converged when the image was captured. Generating a composite image using M frames of images including the first captured image comprises: after determining that the first captured image satisfies the preset condition, generating the composite image using the M frames of images including the first captured image. This design ensures the image quality of the first captured image used to generate the composite image and reduces the likelihood of ghosting, blur, and similar artifacts in the final composite image.
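The preset-condition gate above can be sketched as a simple predicate. This is an illustrative assumption, not the patent's implementation: the `Frame` fields, units, and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    object_speed: float      # estimated motion speed of objects in the frame
    shake_amplitude: float   # device shake measured while the frame was exposed
    af_converged: bool       # auto-focus had converged when the frame was captured
    ae_converged: bool       # auto-exposure had converged when the frame was captured

def satisfies_preset_condition(frame, speed_thresh=1.0, shake_thresh=0.5):
    """True when the frame is steady enough to be fused into the composite image."""
    return (frame.object_speed < speed_thresh
            and frame.shake_amplitude < shake_thresh
            and frame.af_converged
            and frame.ae_converged)

steady = Frame(object_speed=0.2, shake_amplitude=0.1, af_converged=True, ae_converged=True)
blurry = Frame(object_speed=5.0, shake_amplitude=0.1, af_converged=True, ae_converged=True)
```

A frame failing any one of the checks is rejected, which is consistent with the text's "at least one of" phrasing being a menu of possible conditions rather than a disjunction at evaluation time.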
In one possible design, the first captured image is the image most recently stored in the ZSL (zero-shutter-lag) buffer before the first operation performed by the user to take a photograph is received. By selecting the most recent image in the ZSL buffer as the first captured image for generating the composite image, the interval between the images used to generate the composite image is reduced, the likelihood of ghosting, blur, and similar artifacts in the final composite image is reduced, and the success rate of generating the composite image is improved.
In one possible design, the method further includes: determining, from P frames of images, a first captured image that satisfies a preset condition. The P frames of images are images captured in the photographing preview mode, and the preset condition includes at least one of the following: the moving speed of an object in the image is less than a first threshold; the shake amplitude of the electronic device when the image was captured is less than a second threshold; the auto-focus function had converged when the image was captured; and the auto-exposure function had converged when the image was captured. This design ensures the image quality of the first captured image used to generate the composite image and reduces the likelihood of ghosting, blur, and similar artifacts in the final composite image.
In one possible design, the P frames of images are the images most recently stored in the ZSL (zero-shutter-lag) buffer when the first operation performed by the user to take a photograph is received. This reduces the interval between the images used to generate the composite image, reduces the likelihood of ghosting, blur, and similar artifacts in the final composite image, and improves the success rate of generating the composite image.
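The two designs above (take the P newest ZSL-buffer frames, then pick the newest one passing the quality check) can be sketched together. The frame representation and the `is_steady` predicate are assumptions for illustration, not the patent's API.

```python
from collections import deque

def pick_first_captured_image(zsl_buffer, p, is_steady):
    """From the P most recently stored frames, return the newest steady one."""
    recent = list(zsl_buffer)[-p:]      # the P frames stored most recently
    for frame in reversed(recent):      # newest first: minimizes the frame gap
        if is_steady(frame):
            return frame
    return None                         # no frame qualifies; caller must capture anew

buf = deque(maxlen=8)
# id 0..3 are preview frames; "shake" stands in for the quality metrics.
buf.extend({"id": i, "shake": s} for i, s in enumerate([0.1, 0.9, 0.2, 0.8]))
chosen = pick_first_captured_image(buf, p=3, is_steady=lambda f: f["shake"] < 0.5)
```

In this toy run the newest frame (id 3) is too shaky, so the next-newest qualifying frame (id 2) is chosen, keeping the interval to the post-shutter frames as short as possible.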
In a second aspect, an electronic device is provided, comprising: an image acquisition unit configured to acquire an image in a photographing preview mode; and an image processing unit configured to generate, after a first operation performed by a user to take a photograph is received, a composite image by using M frames of images including a first captured image. The first captured image is an image captured in the photographing preview mode, the M frames of images include images of N exposure parameters, and M and N are each positive integers greater than 1.
In one possible design, the image acquisition unit is further configured to capture a second captured image after the first operation is received. That the image processing unit is configured to generate a composite image using M frames of images including the first captured image after the first operation is received includes: the image processing unit is specifically configured to generate the composite image by using the first captured image and the second captured image, where the first captured image and the second captured image together include images of the N exposure parameters.
In one possible design, the first captured image is an image of a first exposure parameter; the second collected image includes an image of an exposure parameter other than the first exposure parameter among the N types of exposure parameters.
In one possible design, the first exposure parameter is a first exposure parameter for a normal exposure of the image.
In one possible design, the first captured image includes: images of W exposure parameters; and the second collected image comprises an image of the exposure parameters except the W exposure parameters in the N exposure parameters.
In one possible design, the image capturing unit is configured to capture an image in a preview photographing mode, and includes: the image acquisition unit is specifically used for acquiring images of N exposure parameters in a photographing preview mode; the image processing unit is used for generating a composite image by using M frames of images including a first collected image after receiving a first operation used by a user for photographing, and comprises the following steps: the image processing unit is specifically used for fusing the first collected image to generate a composite image; the first captured image includes images of N exposure parameters.
In one possible design, the electronic device further includes: the display unit is used for displaying the preview image in a photographing preview mode; the preview image is an image generated by fusing images of Q exposure parameters; wherein Q is less than N.
In one possible design, the first acquired image is an image acquired using an interleaved high dynamic range Stagger HDR technique.
In one possible design, the first acquired image is an image acquired using a dual conversion gain DCG technique.
In one possible design, the image processing unit is further configured to determine whether the first captured image satisfies a preset condition; the preset conditions include at least one of the following: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The image processing unit is used for generating a composite image by using M frames of images including a first collected image after receiving a first operation used by a user for photographing, and comprises the following steps: and the image processing unit is used for generating a composite image by using the M frames of images containing the first collected image after the first collected image is determined to meet the preset condition.
In one possible design, the first captured image is the image most recently stored in the ZSL (zero-shutter-lag) buffer before the first operation performed by the user to take a photograph is received.
In one possible design, the image processing unit is further configured to determine a first captured image satisfying a preset condition from the P frame image; the P frame image is an image collected in a photographing preview mode, and the preset conditions include at least one of the following conditions: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
In one possible design, the P frames of images are the images most recently stored in the ZSL (zero-shutter-lag) buffer when the first operation performed by the user to take a photograph is received.
In a third aspect, an electronic device is provided, including: one or more processors coupled with one or more memories, the one or more memories storing a computer program. When the computer program is executed by the one or more processors, the electronic device is caused to perform the photographing method provided in the first aspect or any one of the designs of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, including computer software instructions. When the computer software instructions are executed in a computer, the computer is caused to perform the photographing method provided in the first aspect or any one of the designs of the first aspect.
In a fifth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the photographing method provided in the first aspect or any one of the designs of the first aspect.
For the effects of the second to fifth aspects, refer to the effect description of the first aspect; details are not repeated here.
Drawings
FIG. 1 is a diagram illustrating the effect of HDR photographing;
FIG. 2 is a schematic flow chart of synthesizing an image using HDR;
FIG. 3 is a timing diagram of an electronic device capturing images;
FIG. 4 is a second timing diagram of an electronic device acquiring an image;
FIG. 5 is a third schematic timing diagram of an electronic device acquiring an image;
FIG. 6 is a schematic diagram of a pixel circuit;
FIG. 7 is a fourth timing diagram of an electronic device acquiring images;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 10 is a timing diagram illustrating an image captured by an electronic device according to an embodiment of the present disclosure;
fig. 11 is a second timing diagram illustrating an image captured by an electronic device according to an embodiment of the present disclosure;
fig. 12 is a second flowchart of a photographing method according to an embodiment of the present application;
fig. 13 is a third schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 14 is a third schematic timing diagram of an electronic device acquiring an image according to an embodiment of the present disclosure;
fig. 15 is a fourth schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 16 is a fourth timing diagram illustrating an image captured by an electronic device according to an embodiment of the present disclosure;
fig. 17 is a fifth timing diagram illustrating an image captured by an electronic device according to an embodiment of the present disclosure;
fig. 18 is a fifth flowchart illustrating a photographing method according to an embodiment of the present application;
fig. 19 is a sixth schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 20 is a sixth schematic timing diagram of an electronic device acquiring an image according to an embodiment of the present disclosure;
fig. 21 is a seventh schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 22 is a seventh schematic timing diagram of an electronic device acquiring an image according to an embodiment of the present disclosure;
fig. 23 is an eighth schematic timing diagram of an electronic device acquiring an image according to an embodiment of the present disclosure;
fig. 24 is an eighth schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 25 is a ninth schematic flowchart illustrating a photographing method according to an embodiment of the present application;
fig. 26 is a ninth schematic timing diagram illustrating an image captured by an electronic device according to an embodiment of the present application;
fig. 27 is a tenth of a timing diagram of an electronic device acquiring an image according to an embodiment of the present application;
fig. 28 is a tenth of a flowchart illustrating a photographing method according to an embodiment of the present application;
fig. 29 is an eleventh flowchart illustrating a photographing method according to an embodiment of the present application;
fig. 30 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. For clarity, the terms "first" and "second" are used in the embodiments of the present application to distinguish between identical or similar items having substantially the same functions and effects. Those skilled in the art will understand that these terms do not limit quantity or execution order, nor do they denote relative importance. Likewise, words such as "exemplary" or "for example" are used to present relevant concepts concretely, as an example, instance, or illustration, for ease of understanding.
First, the related technologies involved in the embodiments of the present application are described:
1. high Dynamic Range (HDR)
HDR can be understood as a technique that fuses multiple frames of low-dynamic-range (LDR) images captured at different exposure brightnesses into a single HDR image. The HDR image can contain more picture detail.
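A toy exposure-fusion sketch follows. It is not the patent's fusion algorithm, only a common illustrative scheme: weight each LDR pixel by how well-exposed it is (near mid-gray), then blend. Pixel values are assumed to be floats in [0, 1].

```python
def fuse_ldr_frames(frames):
    """frames: list of equally sized lists of pixel values in [0, 1]."""
    fused = []
    for pixels in zip(*frames):
        # Well-exposed pixels (near 0.5) get the highest weight; near-black and
        # near-white pixels contribute little.
        weights = [max(1e-6, 1.0 - abs(p - 0.5) * 2.0) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark = [0.05, 0.40, 0.02]    # short exposure: keeps highlight detail
bright = [0.60, 0.98, 0.55]  # long exposure: keeps shadow detail
hdr = fuse_ldr_frames([dark, bright])
```

Each fused pixel leans toward whichever source frame exposed that pixel best, which is the intuition behind the box/basketball example discussed with Fig. 1.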
Fig. 1 is a schematic diagram of photographs of objects on a table, where Fig. 1(a) is an LDR image with a higher exposure brightness and Fig. 1(b) is an LDR image with a lower exposure brightness. In Fig. 1(a), because the exposure brightness is high, the picture details inside the dimly lit box can be displayed clearly; however, the brightly lit basketball on the table is overexposed, and only its outline can be displayed. In Fig. 1(b), because the exposure brightness is low, the picture details of the basketball can be displayed clearly, but the details inside the dimly lit box cannot. Fig. 1(a) and Fig. 1(b) can be fused into Fig. 1(c) by HDR; in Fig. 1(c), both the picture details of the basketball and the details inside the box are displayed.
Specifically, a general HDR photography process is shown in fig. 2, and includes:
s101, the electronic equipment starts a camera application, enters a photographing preview mode, and displays a photographing preview picture.
S102, the electronic equipment receives the photographing operation of the user.
S103, the electronic equipment responds to the photographing operation and sets exposure parameters.
For example, suppose four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image are required to perform HDR. The electronic device sets the exposure parameters so that the camera sequentially captures the four frames of EV0 images, the one frame of EV-2 image, and the one frame of EV-4 image according to the set exposure parameters.
For simplicity, in the embodiments of the present application the exposure brightness of a normal exposure is denoted EV0, and an exposure brightness of EV0 × 2^n is denoted EVn. For example, EV-1 indicates an exposure brightness that is half of EV0, and EV-2 indicates half of EV-1; conversely, EV1 indicates an exposure brightness that is twice EV0, and EV2 twice EV1.
And S104, acquiring an image by the camera according to the exposure parameters.
Continuing the above example, the camera sequentially captures four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image.
And S105, the electronic equipment synthesizes the images by using the acquired images to obtain the HDR image.
Specifically, fig. 3 is a timing diagram of a photographing process according to an embodiment of the present disclosure. The timing chart comprises two time axes, wherein the upper time axis is used for reflecting the time of image exposure of a photosensitive element in the camera, and the lower time axis is used for reflecting the time of photosensitive data reading of the camera.
The electronic device receives the user's photographing operation at time t1. After receiving the photographing operation, the electronic device needs a period of time to generate the exposure parameters, deliver them to the camera, and have the camera apply them. Specifically, assuming that generating the exposure parameters, delivering them to the camera, and having the camera apply them takes three acquisition periods T, the electronic device then captures the four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image in the six acquisition periods inside the dashed box shown in Fig. 3. The electronic device then synthesizes the HDR image from the six captured frames.
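The arithmetic behind this timing, with the values stated in the text (three setup periods, then one frame per acquisition period):

```python
# Periods spent generating/delivering/applying exposure parameters (from the text).
setup_periods = 3
# Frames required for the HDR example: 4x EV0, 1x EV-2, 1x EV-4.
frames_needed = {"EV0": 4, "EV-2": 1, "EV-4": 1}
capture_periods = sum(frames_needed.values())    # one frame per period -> 6
total_periods = setup_periods + capture_periods  # periods elapsed after the shutter
```

This is the "at least nine acquisition periods" figure the later discussion of this scheme refers to.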
2. Staggered high dynamic range (stagger HDR)
Stagger HDR is a technique that, by increasing the sensor frame rate, can capture multiple frames with different exposure brightnesses within one acquisition period. For example, stagger HDR may generate a long-exposure image and a short-exposure image in one acquisition period: as shown in Fig. 4, the electronic device can capture the two frames EV0 and EV-4 within one acquisition period T. As another example, stagger HDR may generate long-, medium-, and short-exposure images in one acquisition period: as shown in Fig. 5, the electronic device can capture the three frames EV0, EV-2, and EV-4 within one acquisition period T. By contrast, in a stagger HDR scenario, because images of several exposure parameters are captured, the blank interval between two frames (referred to as VB) is shorter than the frame interval VB in a non-stagger-HDR scenario.
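As an illustrative throughput comparison (an assumption about scheduling, not the patent's timing), emitting several differently exposed frames per acquisition period reduces the number of periods needed to gather a fixed set of frames:

```python
import math

def capture_periods(total_frames, frames_per_period):
    """Acquisition periods needed if each period yields frames_per_period frames."""
    return math.ceil(total_frames / frames_per_period)

# Six frames, as in the EV0/EV-2/EV-4 example in the text.
plain = capture_periods(6, 1)     # conventional sensor: one frame per period
stagger3 = capture_periods(6, 3)  # stagger HDR emitting three exposures per period
```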
At present, stagger HDR is mainly applied in the preview mode and the video capture mode to improve the HDR effect of videos or preview pictures. Because the frame interval VB of stagger HDR is short, ghosting during video recording and preview can be substantially reduced.
3. Double Conversion Gain (DCG)
DCG can be understood as providing two capacitors that store photon energy at the photosensitive site corresponding to a pixel, or, equivalently, as the ability to perform two readouts in one pixel-unit circuit.
Illustratively, fig. 6 (a) shows the pixel-cell circuit structure of a conventional complementary metal oxide semiconductor (CMOS) sensor, in which PD is the photodiode, TX is the transfer transistor, RST is the reset transistor, SF is the source follower, RS is the row-select transistor, VAAPIX is the analog pixel supply voltage, VOUT is the pixel output voltage node, FD is the floating diffusion node, and CFD is the FD node capacitance. Fig. 6 (b) shows a pixel circuit structure using DCG. Compared with fig. 6 (a), the circuit in fig. 6 (b) adds one capacitor and a DCG transistor at the dashed box, which gives it the capability of performing two readouts.
Therefore, in a scenario adopting DCG, readout results similar to those of stagger HDR can be obtained within a single exposure. Unlike stagger HDR, whose frames come from separate superposed exposures, the two DCG readouts come from one and the same exposure.
Illustratively, fig. 7 is a timing diagram of conventional HDR, stagger HDR, and DCG when acquiring an image. It can be seen that, for the same three exposure parameters EV0, EV-2, and EV-4, DCG takes the shortest time; DCG can therefore further reduce ghosting and also leaves more time for the long exposure.
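As a concrete illustration of the dual-readout idea, the following sketch merges a high-conversion-gain and a low-conversion-gain readout of the same exposure into one linear value. The function name, the 10-bit saturation level, and the 4x gain ratio are illustrative assumptions, not values from this application.

```python
# Hypothetical sketch of merging the two DCG readouts of a single
# exposure into one linear HDR value per pixel. The saturation level
# (10-bit) and gain ratio are assumed example values.

def merge_dcg(hcg: float, lcg: float, gain_ratio: float = 4.0,
              saturation: float = 1023.0) -> float:
    """Merge high-conversion-gain (hcg) and low-conversion-gain (lcg)
    readouts of the same exposure into a single linear value."""
    if hcg < saturation:          # HCG readout still in range: better low-light SNR
        return hcg
    return lcg * gain_ratio       # HCG clipped: scale LCG into HCG units

# A pixel whose HCG readout clipped is recovered from the LCG readout.
print(merge_dcg(1023.0, 300.0))   # 1200.0
print(merge_dcg(512.0, 128.0))    # 512.0
```

Because both readouts sample the same integration window, no motion can occur between them, which is why DCG reduces ghosting relative to bracketed exposures.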
The photographing method provided by the present application is described below with reference to examples:
As can be seen from the solution described in S101-S105 above, at least 9 acquisition periods are required from the moment the user performs the photographing operation until the electronic device has acquired the 6 frames to be fused. As a result, the time the electronic device needs to synthesize the HDR image is relatively long, which degrades the user experience.
In view of the foregoing technical problem, an embodiment of the present application provides a photographing method in which images captured before the user triggers photographing (referred to as first captured images) may be used to generate the composite image. This spares the electronic device from spending time, after receiving the user's operation that triggers photographing, acquiring images with the exposure parameter of the first captured image.
Thus, the time required to acquire images after the photographing operation is received can be saved, which speeds up photographing and improves the user experience.
The following describes the scheme provided by the embodiments of the present application with reference to specific examples:
The embodiment of the application provides a photographing method that can be applied to an electronic device. After detecting that the user inputs an operation at a specific position on the touch screen, the electronic device can display the corresponding interface and information according to the operation.
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another electronic device with a touch screen; the embodiment of the present application does not particularly limit the specific form of the device.
Taking an electronic device as a mobile phone as an example, referring to fig. 8, the electronic device may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, a button 390, a motor 391, an indicator 392, a camera 393, a display 394, a Subscriber Identity Module (SIM) card interface 395, and the like.
The sensor module 380 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units, such as: the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and/or a micro controller unit (MCU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 300. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 310. If the processor 310 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 310, thereby increasing the efficiency of the system.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, a Serial Peripheral Interface (SPI), an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 340 is configured to receive charging input from a charger. The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 and provides power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, and the wireless communication module 360. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 300. The wireless communication module 360 may provide a solution for wireless communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs), such as Wi-Fi networks, Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), NFC, Infrared (IR), and the like.
The electronic device 300 implements display functions via the GPU, the display 394, and the application processor, among other things. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 394 is used to display images, video, and the like. The display screen may be a touch screen. In some embodiments, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
The electronic device 300 may implement a shooting function through the ISP, the camera 393, the video codec, the GPU, the display 394, the application processor, and the like. The ISP is used to process the data fed back by the camera 393. Camera 393 is used to capture still images or video. In some embodiments, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The NPU is a neural-network (NN) computing processor that processes input information quickly by borrowing the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also continuously self-learn. The NPU enables intelligent applications of the electronic device 300, such as film-attachment state recognition, image restoration, image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 300, and the like. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 300 may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor. Such as music playing, recording, etc.
Touch sensors, also known as "Touch Panels (TPs)". The touch sensor may be disposed on the display screen 394, and the touch sensor and the display screen 394 form a touch screen, which is also referred to as a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display 394. In other embodiments, the touch sensor may be disposed on a surface of the electronic device 300 at a different location than the display screen 394.
Keys 390 include a power-on key, a volume key, etc. The motor 391 may generate a vibration cue. Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 395 is for connecting a SIM card.
In order to save the time required to acquire images after the photographing operation is received, thereby speeding up photographing and improving the user experience, the method provided by the embodiment of the present application can be implemented in two ways:
In the first mode, the composite image can be generated from a first captured image captured in the photographing preview mode and a second captured image captured after the user's first operation for photographing is received.
In the second mode, all images required for generating the composite image are captured in the photographing preview mode, so the composite image can be generated without capturing further images after the first operation is received.
The following describes the implementation of the above two modes in detail with reference to examples:
the first method is as follows:
In the first mode, on one hand, when synthesizing an HDR image, multiple frames with multiple exposure parameters need to be acquired first, and the HDR image is then determined from those frames. On the other hand, in some scenarios, while the electronic device is in the photographing preview mode, it captures images according to preset exposure parameters, and only after receiving the user's first operation for photographing does it start to generate the exposure parameters of the images required to synthesize the HDR image.
For example, the electronic device may estimate the exposure parameter of a normally exposed image (referred to as the first exposure parameter) in the photographing preview mode and then capture images with brightness EV0 according to the first exposure parameter. As shown in fig. 3, before receiving the user's first operation (i.e., the photographing operation in the figure), the electronic device captures EV0 images; after receiving the first operation, the electronic device starts to generate the exposure parameters of the images required to synthesize the HDR image and captures the required images accordingly, namely the four frames of EV0, one frame of EV-2, and one frame of EV-4 captured 3 acquisition periods after the photographing operation in fig. 3.
Therefore, the photographing method provided in the embodiment of the present application can synthesize an HDR image from two types of images: images captured according to the preset exposure parameters in the photographing preview mode (called first captured images), and images the electronic device captures according to the generated exposure parameters after receiving the user's first operation for photographing. As a result, no time needs to be spent acquiring images with the preset exposure parameters after the first operation is received, which saves acquisition time, speeds up photographing, and improves the user experience.
Specifically, as shown in fig. 9, the photographing method provided by the present application may include the following steps:
S401, the electronic device starts the camera application and enters the photographing preview mode.
Specifically, the user may click the camera icon on the display interface of the electronic device; in response to the click operation, the electronic device starts the camera application and enters the photographing preview mode.
In this application, the photographing preview mode can be understood as follows: before receiving a first operation of a user for taking a picture, the electronic equipment acquires an image by using the camera so as to display the acquired image on an interface for the user to preview the image. In practical applications, the photographing preview mode may also be referred to by other names, such as a camera preview mode, a browsing mode, and the like, and as long as the electronic device has a function of capturing an image by using a camera before receiving a first operation for photographing by a user in the mode, the mode may be understood as the photographing preview mode in this application.
S402, the electronic device estimates the first exposure parameter of a normally exposed image.
The exposure parameter may be any of various parameters reflecting the exposure amount of an image. For example, the exposure parameters may include one or more of the light-passing time, the shutter speed, the light-passing area, and the aperture size. One exposure parameter corresponds to one exposure brightness, which can also be understood as: when the electronic device captures an image according to an exposure parameter, the captured image has the exposure brightness corresponding to that exposure parameter.
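The relation between the EV labels used throughout this description and the exposure amount can be sketched as follows; this assumes, for illustration only, that the aperture is fixed so an EV step is realized purely through the light-passing (integration) time, and the 10 ms base time is an arbitrary example value.

```python
# Illustrative sketch: each EV step halves or doubles the gathered
# light, so EV-2 gathers 1/4 of the light of EV0 and EV-4 gathers
# 1/16. Base time and fixed-aperture assumption are hypothetical.

def exposure_time_ms(base_ms: float, ev: int) -> float:
    """Exposure time for a frame offset by `ev` stops from EV0."""
    return base_ms * (2.0 ** ev)

for ev in (0, -2, -4):
    print(f"EV{ev:+d}: {exposure_time_ms(10.0, ev)} ms")
```

This is why the EV-2 and EV-4 frames preserve highlight detail: their much shorter integration leaves bright regions unclipped.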
In addition, in the embodiment of the present application, normal exposure may be understood as exposure such that the display effect of the image satisfies a preset condition, and the first exposure parameter of a normally exposed image may be understood as the exposure parameter that makes the display effect of the image satisfy that preset condition. The preset condition can be set according to the actual application scenario.
S403, the electronic device captures images of the first exposure parameter.
In this embodiment, the first exposure parameter under normal exposure is taken as an example. In some implementations, the electronic device may capture images using other exposure parameters in the photographing preview mode; for example, it may capture images according to preset exposure parameters. In those implementations, the electronic device need not use the first exposure parameter in the photographing preview mode.
Illustratively, as shown in fig. 10, before receiving the photographing operation, the electronic device captures EV0 images according to the first exposure parameter in the photographing preview mode. In practice, the captured image may also have another brightness, such as EV1, EV2, EV-1, or EV-2.
Specifically, after capturing an image with the first exposure parameter, the electronic device may buffer the captured image in a zero-shutter-lag buffer (ZSL buffer) for subsequent processing. The ZSL buffer uses a first-in first-out storage mode: when a new frame arrives, the ZSL buffer deletes the oldest frame and stores the new one.
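The first-in first-out buffering just described can be sketched as follows; the class name, frame labels, and the 6-frame capacity are assumed example values, not part of this application.

```python
from collections import deque

# Minimal sketch of a ZSL buffer with first-in first-out replacement:
# the oldest frame is dropped when a new one arrives at capacity.
# The 6-frame capacity is an assumed example value.

class ZslBuffer:
    def __init__(self, capacity: int = 6):
        self.frames = deque(maxlen=capacity)  # deque drops the oldest frame automatically

    def push(self, frame):
        self.frames.append(frame)

    def latest(self, n: int):
        """Return the n most recently stored frames, newest last."""
        return list(self.frames)[-n:]

buf = ZslBuffer(capacity=6)
for i in range(10):            # frames 0..9 arrive; only frames 4..9 are kept
    buf.push(f"EV0_frame_{i}")
print(buf.latest(4))           # the 4 newest frames: EV0_frame_6 .. EV0_frame_9
```

Reading the newest frames out of such a buffer is what later lets the method reuse preview-time EV0 frames as the first captured image.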
S404, the electronic device receives the user's first operation for photographing.
For example, in the photographing preview mode, the electronic device may display a preset control on the interface that the user clicks to take a photo; in that case, the first operation may be the user clicking the preset control.
S405, the electronic device captures the second captured image.
In practical implementation, after receiving the first operation, the electronic device may store the images of the first exposure parameter held in the ZSL buffer into memory and then empty the ZSL buffer, so that it can start capturing the second captured image and store it in the ZSL buffer.
The second captured image includes images of the exposure parameters other than the first exposure parameter among the N exposure parameters, where N is the number of exposure parameters of the images required to synthesize the HDR image.
Specifically, since the EV0 image of the first exposure parameter was already captured in S403, only images of the exposure parameters other than the first one among the N exposure parameters need to be captured here, for example the EV-2 image and the EV-4 image. Thus, the second captured image may include only images of the N exposure parameters other than the first exposure parameter. Of course, to achieve a better image effect, the second captured image may also include an image of the first exposure parameter; this is not limited in the embodiment of the present application.
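The selection just described amounts to a set difference, sketched below; the EV labels are stand-ins for the actual exposure parameters, and N = 3 is the example from the text.

```python
# Illustrative sketch of choosing which exposures to capture after the
# first operation: the N exposure parameters needed for the HDR image,
# minus the one already buffered from the preview.
needed = ["EV0", "EV-2", "EV-4"]           # N = 3 exposure parameters
already_buffered = {"EV0"}                 # first exposure parameter, from the ZSL buffer
second_capture = [ev for ev in needed if ev not in already_buffered]
print(second_capture)                      # only EV-2 and EV-4 remain to be captured
```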
In one implementation, S405 may include:
S4051, the electronic device determines an exposure sequence.
The exposure sequence reflects the number and order of the frames in the second captured image.
S4052, the electronic device captures the second captured image according to the exposure sequence.
Illustratively, as in fig. 10, the electronic device captures the EV-2 and EV-4 images in the 4th acquisition period after receiving the first operation. In the example shown in fig. 10, because after receiving the first operation the electronic device must determine the exposure sequence, generate the exposure parameters corresponding to the images in the sequence, and adjust the camera aperture according to those parameters, the interval between receiving the first operation and capturing the second captured image is 3 acquisition periods.
S406, the electronic device generates the composite image according to the first captured image and the second captured image.
The first captured image is an image captured in the photographing preview mode, i.e., an image of the first exposure parameter captured in step S403 above. Specifically, the first captured image may be the images most recently stored in the ZSL buffer before the user's first operation is received. For example, in fig. 10, the first captured image includes the 4 frames of EV0 images most recently stored in the ZSL buffer before the photographing operation is received.
Specifically, the electronic device may fuse the first captured image and the second captured image using any of various HDR techniques to obtain the composite image (e.g., fusing the 4 frames of EV0, 1 frame of EV-2, and 1 frame of EV-4 in fig. 10). The embodiment of the present application does not limit the technique used to fuse the first captured image and the second captured image.
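Since the application leaves the fusion technique open, the following is only one possible stand-in, shown per pixel: average the EV0 frames for noise reduction, and fall back to a rescaled under-exposed frame where the EV0 result is clipped. The saturation threshold and pixel values are illustrative assumptions.

```python
# Hedged single-pixel sketch of fusing 4 x EV0 + 1 x EV-2 + 1 x EV-4
# into one linear value. Not the method actually used by the device;
# the 250.0 saturation threshold is an assumed 8-bit-ish example.

def fuse_pixel(ev0_frames, ev_m2, ev_m4, sat=250.0):
    base = sum(ev0_frames) / len(ev0_frames)   # average EV0 frames (multi-frame noise reduction)
    if base < sat:
        return base                            # well exposed: keep EV0 detail
    if ev_m2 < sat:
        return ev_m2 * 4.0                     # EV-2 holds the highlight; rescale by 2**2
    return ev_m4 * 16.0                        # fall back to EV-4; rescale by 2**4

print(fuse_pixel([100.0, 102.0, 98.0, 100.0], 25.0, 6.25))   # mid-tone: EV0 average wins
print(fuse_pixel([255.0, 255.0, 255.0, 255.0], 80.0, 20.0))  # clipped: recovered from EV-2
```

A real pipeline would also align the frames and blend smoothly across the saturation boundary, but the per-source roles are the same.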
In addition, before capturing the second captured image, the electronic device takes some time to determine the exposure sequence, generate the exposure parameters corresponding to the images in the sequence, and adjust the camera aperture according to those parameters; in fig. 10, for example, 3 acquisition periods elapse between receiving the first operation and capturing the second captured image. The electronic device may therefore capture some images between receiving the first operation and capturing the second captured image; in fig. 10, for example, three frames of EV0 images with the first exposure parameter may be captured after the first operation is received.
Therefore, in one implementation, the step S406 may include:
S4061, the electronic device generates the composite image according to the first captured image, the second captured image, and the third captured image.
The third captured image is an image captured by the electronic device after it receives the first operation and before it captures the second captured image. Illustratively, as shown in fig. 11, the third captured image may include the three frames of EV0 images captured by the electronic device after receiving the first operation.
In addition, on one hand, a ghosting phenomenon may occur while the electronic device captures images. Ghosting can be understood as a ghost of an object appearing in the image because the object's position changed too much during acquisition. On the other hand, during acquisition the auto-focus function or the auto-exposure function may not yet have converged. Auto-focus not converging can be understood as the focal length still being adjusted before focusing succeeds; auto-exposure not converging can be understood as the camera's exposure parameters still being adjusted before the electronic device has determined the exposure parameters corresponding to normal exposure.
Therefore, in order to improve the success rate of image synthesis, in one implementation, after receiving the first operation of the user at S404, as shown in fig. 12, the method may further include:
S407, the electronic device judges whether each image in the first captured image satisfies a preset condition.
The preset condition may include at least one of the following: the motion speed of an object in the image is less than a first threshold; the shake amplitude of the electronic device when the image was shot is less than a second threshold; the auto-focus function had converged when the image was shot; and the auto-exposure function had converged when the image was shot. The first and second thresholds can be determined according to actual needs.
Illustratively, in actual implementation, the preset condition "the motion speed of an object in the image is less than the first threshold" may be implemented in either of the following ways:
In the first way, a sensor in the electronic device for detecting object motion measures the motion speed of the object in the image, and it is then judged whether that speed is less than the first threshold;
In the second way, the image is checked for ghosting; if no ghosting is present, the motion speed of the object in the image is determined to be less than the first threshold.
The preset condition "the shake amplitude of the electronic device when the image was shot is less than the second threshold" may be implemented in either of the following ways:
In the first way, a sensor in the electronic device for detecting shake measures the shake amplitude of the electronic device, and it is then judged whether that amplitude is less than the second threshold;
In the second way, the image is checked for ghosting; if no ghosting is present, the shake amplitude of the electronic device when the image was shot is determined to be less than the second threshold.
In other words, in some scenarios, whether the object motion speed and the device shake amplitude satisfy their conditions (i.e., the motion speed is less than the first threshold and the shake amplitude is less than the second threshold) may both be determined by checking the image for ghosting.
For the preset condition "the auto-focus function had converged when the image was shot", convergence can be determined by judging whether the camera's focal length changed within a preset time period, or whether the objects in the image are sharp.
For the preset condition "the auto-exposure function had converged when the image was shot", convergence can be determined by judging whether parameters such as the camera's light-passing time, shutter speed, light-passing area, and aperture size changed within a preset time period, or whether the exposure brightness of the image is normal.
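The combined check described above can be sketched as a single predicate over per-frame metadata; the field names and threshold values are illustrative assumptions, not defined by this application.

```python
# Hedged sketch of the S407 preset-condition check. Frame metadata
# fields and the two thresholds are assumed for illustration only.

def meets_preset_condition(frame, motion_thresh=5.0, shake_thresh=2.0):
    return (frame["motion_speed"] < motion_thresh        # object motion below first threshold
            and frame["shake_amplitude"] < shake_thresh  # device shake below second threshold
            and frame["af_converged"]                    # auto-focus had converged
            and frame["ae_converged"])                   # auto-exposure had converged

good = {"motion_speed": 1.0, "shake_amplitude": 0.5,
        "af_converged": True, "ae_converged": True}
blurred = dict(good, motion_speed=9.0)     # too much object motion
print(meets_preset_condition(good))        # True
print(meets_preset_condition(blurred))     # False
```

In practice the motion and shake terms could each be replaced by a ghost-detection result, per the two-way implementations listed above.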
Further, on one hand, after determining through S407 that every image in the first captured image satisfies the preset condition, the electronic device performs S406, i.e., generates the composite image according to the first captured image and the second captured image.
On the other hand, if it is determined that the first captured image includes an image that does not satisfy the preset condition through S407, as shown in fig. 12, the method further includes:
and S408, the electronic equipment acquires a fourth acquired image.
The fourth captured image includes images of the N exposure parameters required to synthesize the HDR image. Illustratively, images of 3 exposure parameters, namely EV0, EV-2, and EV-4, are required to compose the HDR image; the fourth captured image may then include the 4 frames of EV0 images, 1 frame of EV-2 image, and 1 frame of EV-4 image in the dashed box shown in fig. 3.
S409, the electronic device generates the composite image according to the fourth captured image.
Similar to S406, the electronic device may generate a composite image from the fourth captured image in accordance with various HDR techniques. In the embodiment of the present application, there may be no limitation on the HDR technology used by the electronic device.
In the above implementation, on one hand, after the first captured image is determined to satisfy the preset condition, the composite image can be generated from the first captured image and the second captured image, which guarantees the quality of the composite image; on the other hand, after the first captured image is found to include an image that does not satisfy the preset condition, the fourth captured image can be captured anew and the composite image generated from it, so the composite image can still be generated successfully.
In another implementation, after receiving the first operation of the user at S404, as shown in fig. 13, the method may further include:
S410, the electronic device determines, from the P frames, a first captured image satisfying the preset condition.
The P frames are images captured in the photographing preview mode, i.e., images of the first exposure parameter captured in step S403 above. Specifically, the P frames may be the P frames most recently stored in the ZSL buffer before the user's first operation is received; for example, in fig. 14 the electronic device reads the 6 frames of EV0 images most recently stored in the ZSL buffer before the first operation (i.e., the photographing operation in the figure) is received.
Wherein the preset condition may include at least one of: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process of determining whether the image meets the preset condition may refer to the corresponding description in S407, and is not described herein again.
For example, after receiving the first operation, the electronic device first reads the P frame images stored in the ZSL buffer before the first operation is received; for example, in fig. 14, the electronic device reads the 6 frames of EV0 images acquired before the first operation (i.e., the photographing operation in the figure) is received. Then, 4 frames of images satisfying the preset condition are selected from the 6 frames of images as the first captured image.
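The selection in S410 can be sketched as follows. This is a minimal illustration in Python; the `Frame` fields and function names are hypothetical stand-ins for the per-frame metadata (motion, shake, AF/AE state) a real camera pipeline would attach to each buffered frame, not part of the patented method.

```python
from collections import namedtuple

# Hypothetical per-frame metadata; real pipelines attach such statistics to each ZSL frame.
Frame = namedtuple("Frame", "index motion_speed shake_amplitude af_converged ae_converged")

def meets_preset_condition(frame, first_threshold=1.0, second_threshold=0.5):
    """Preset conditions from the method: object motion below a first threshold,
    device shake below a second threshold, and AF/AE converged at capture time.
    The threshold values here are arbitrary placeholders."""
    return (frame.motion_speed < first_threshold
            and frame.shake_amplitude < second_threshold
            and frame.af_converged
            and frame.ae_converged)

def select_first_captured_images(zsl_frames, needed=4):
    """From the P most recent ZSL-buffer frames, keep up to `needed` frames
    that satisfy the preset condition (e.g. 4 of 6 EV0 frames)."""
    good = [f for f in zsl_frames if meets_preset_condition(f)]
    return good[:needed]
```

For instance, if 1 of the 6 buffered EV0 frames shows excessive motion, the remaining 5 pass the check and the first 4 of them serve as the first captured image.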
After the first captured image is determined through S410, the electronic device further executes S406 to generate a composite image according to the first captured image and the second captured image.
In this implementation, the P frame images, which include a larger number of frames, are acquired first, and the first captured image meeting the preset conditions is then selected from the P frame images, thereby ensuring the quality of the images included in the first captured image and, in turn, the quality of the composite image.
It is to be understood that, in the embodiments of the present application, an electronic device may perform some or all of the steps in the embodiments; these steps or operations are merely examples, and other operations or variations of the various operations may also be performed. Further, the steps may be performed in an order different from that presented in the embodiments of the present application, and not all of the operations in the embodiments need to be performed. For example, S407 may be executed before S405 or after S405; for another example, S410 may be performed before S405 or after S405, which is not limited by the method provided in the present application.
In a case that the electronic device supports capturing images of multiple exposure parameters in the photographing preview mode, for example, in a scenario in which the electronic device supports Stagger HDR or DCG, as shown in fig. 15, the photographing method provided by the present application may include the following steps:
s501, the electronic equipment starts a camera application and enters a photographing preview mode.
S502, the electronic equipment estimates a first exposure parameter of the image under the condition of normal exposure.
The implementation processes of S501 and S502 may refer to the corresponding descriptions in S401 and S402, and are not described herein again.
S503, the electronic equipment collects images of W exposure parameters according to the first exposure parameters.
Specifically, after determining the first exposure parameter (i.e., the exposure parameter of the EV0 image), the electronic device may further sequentially determine the exposure parameters of the EV-1, EV-2, EV-4, and other images, and then capture images of W exposure parameters in the photographing preview mode by using the Stagger HDR technique according to the determined exposure parameters.
The number W may be determined according to the number of exposures within the same acquisition period supported by Stagger HDR. For example, if the Stagger HDR technique supports two exposures in the same acquisition period, that is, images of two exposure parameters can be acquired in the same acquisition period, then W may be 2. For another example, if the Stagger HDR technique supports three exposures in the same acquisition period, that is, images of three exposure parameters can be acquired in the same acquisition period, then W may be 2 or 3. For another example, if the Stagger HDR technique supports four exposures in the same acquisition period, that is, images of four exposure parameters can be acquired in the same acquisition period, then W may be 2, 3, or 4, and so on.
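The relationship between EV offsets and exposure, and the admissible values of W, can be sketched as follows. This is a hedged illustration: the function names are hypothetical, and real devices may trade off sensor gain rather than exposure time alone.

```python
def exposure_time_for_ev(ev0_exposure_time, ev_offset):
    """Each EV step halves or doubles the light gathered: with ISO held fixed,
    an EV-2 frame uses 1/4 of the EV0 exposure time."""
    return ev0_exposure_time * (2.0 ** ev_offset)

def valid_w_values(exposures_per_period):
    """W ranges from 2 up to the number of exposures the Stagger HDR sensor
    supports within one acquisition period (so 2 or 3 when three exposures fit)."""
    return list(range(2, exposures_per_period + 1))
```

With an EV0 exposure of 10 ms, `exposure_time_for_ev(0.010, -2)` gives 2.5 ms, and `valid_w_values(3)` returns `[2, 3]`, matching the example above.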
It should be noted that, in the present embodiment, the method is mainly described by taking Stagger HDR as an example; it is understood that the method may also be applied in the context of other technologies, such as DCG, and the application is not limited thereto.
Illustratively, as shown in fig. 16, the electronic device acquires images of two exposure parameters, namely, images of EV0 and EV-4, in the preview photographing mode before receiving the photographing operation.
Specifically, after acquiring images of W exposure parameters, the electronic device may cache the acquired images in the ZSL buffer for subsequent processing.
S504, the electronic equipment receives a first operation used for photographing by the user.
The specific implementation process of S504 may refer to the content of S404, which is not described herein again.
S505, the electronic device acquires a second captured image.
After receiving the first operation, the electronic device may store the images stored in the ZSL buffer (i.e., the images of the W exposure parameters described in S503) into the memory, and then empty the ZSL buffer to start acquiring the second acquired image and store the second acquired image into the ZSL buffer.
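The buffer handling in S505 can be sketched with a fixed-depth ring buffer. This is a minimal sketch assuming a `deque`-based buffer; the class and method names are invented for illustration and are not part of the patented method.

```python
from collections import deque

class ZslBuffer:
    """Zero-shutter-lag ring buffer: preview frames accumulate up to a fixed
    depth, and on the first (photographing) operation the buffered frames are
    handed off to memory and the buffer is emptied so that capture of the
    second captured image can begin."""
    def __init__(self, depth=6):
        self._frames = deque(maxlen=depth)  # oldest frames drop automatically

    def push(self, frame):
        """Called each acquisition cycle in the photographing preview mode."""
        self._frames.append(frame)

    def take_and_clear(self):
        """On the first operation: copy the buffered frames out, then empty
        the buffer for the subsequent second captured image."""
        frames = list(self._frames)
        self._frames.clear()
        return frames
```

Pushing 8 frames into a depth-6 buffer keeps only the 6 most recent, mirroring the "most recently stored" wording used throughout the method.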
The second captured image may include images of the exposure parameters other than the W exposure parameters among the N exposure parameters, where N represents the number of exposure parameters of the images required to synthesize the HDR image.
Specifically, since images of W exposure parameters have already been acquired in S503, only images of the exposure parameters other than the W exposure parameters among the N exposure parameters need to be acquired here. Thus, the second captured image may include only images of the N exposure parameters other than the W exposure parameters. Of course, in order to achieve a better image effect, the second captured image may further include images of the W exposure parameters, which is not limited in the embodiment of the present application.
In one implementation, S505 may include:
S5051, the electronic device determines an exposure sequence.
Wherein the exposure sequence is used to reflect the number and order of the second captured images.
S5052, the electronic equipment collects a second collected image according to the exposure sequence.
Illustratively, as shown in fig. 16, the electronic device acquires the EV-2 image in the 4th acquisition cycle after receiving the first operation (i.e., the photographing operation in the figure). In addition, since images of two exposure parameters can be acquired in one acquisition period in a Stagger HDR scenario, one frame of EV-4 image can be acquired at the same time as the EV-2 image; therefore, the second captured image may include one frame of EV-2 image and one frame of EV-4 image.
In addition, as in the above example, in the example shown in fig. 16, from the reception of the first operation to the start of acquiring the second captured image, it takes the electronic device a certain time (3 acquisition cycles in the figure) to perform actions such as determining an exposure sequence, generating the exposure parameters corresponding to the images in the exposure sequence, and adjusting the aperture of the camera according to the exposure parameters.
S506, the electronic equipment generates a composite image according to the first collected image and the second collected image.
The first captured image is an image captured in the preview photographing mode, that is, the first captured image is an image of the W exposure parameters captured in the above step S503. Specifically, the first captured image may be an image that is stored in the ZSL buffer most recently before the first operation of the user is received. For example, in fig. 16, the first captured image includes 4 frames of EV0 images and 4 frames of EV-4 images captured by the electronic device before receiving the photographing operation. The second captured image includes 1 frame of the EV-2 image and 1 frame of the EV-4 image.
Specifically, the electronic device may fuse the first captured image and the second captured image according to various HDR techniques to obtain a composite image. In the embodiment of the present application, the technology adopted by the electronic device to fuse the first captured image and the second captured image may not be limited.
In addition, in an implementation manner, the S506 may include:
S5061, the electronic device generates a composite image according to the first captured image, the second captured image, and the third captured image.
The third captured image is an image captured by the electronic device after receiving the first operation and before capturing the second captured image. Illustratively, as shown in fig. 17, the third captured image may include 3 frames of EV0 images and 3 frames of EV-2 images captured by the electronic device after receiving the first operation.
For specific implementation process and achieved beneficial effects of S5061, reference may be made to the corresponding description of S4061, and details are not repeated herein.
In addition, in order to improve the success rate of image synthesis, in one implementation, after receiving the first operation of the user at S504, as shown in fig. 18, the method further includes:
S507, the electronic device judges whether each image in the first captured image satisfies a preset condition.
Wherein the preset condition may include at least one of: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs.
Further, on the one hand, after determining through S507 that each image in the first captured image satisfies the preset condition, the electronic device then performs S506, that is, generates a composite image according to the first captured image and the second captured image.
On the other hand, if it is determined through S507 that the first captured image includes an image that does not satisfy the preset condition, as shown in fig. 18, the method further includes:
and S508, the electronic equipment acquires a fourth acquired image.
Wherein the fourth captured image comprises an image of the N exposure parameters required to synthesize the HDR image.
S509, the electronic device generates a composite image according to the fourth captured image.
For specific implementation procedures and achieved beneficial effects of S507-S509, reference may be made to the corresponding descriptions of S407-S409, which are not described herein again.
In another implementation, after receiving the first operation of the user at S504, as shown in fig. 19, the method further includes:
S510, the electronic device determines a first captured image meeting a preset condition from the P frame images.
The P frame images are images acquired in the photographing preview mode, that is, P frames among the images of the W exposure parameters acquired in the above S503. Specifically, the P frame images may be the P frames most recently stored in the ZSL buffer before the first operation of the user is received.
Wherein the preset condition may include at least one of: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process of determining whether the image meets the preset condition may refer to the corresponding description in S407, and is not described herein again.
For example, after receiving the first operation, the electronic device first reads the P frame images stored in the ZSL buffer before the first operation is received; for example, in fig. 20, the electronic device reads the 12 frames of images (including 6 frames of EV0 images and 6 frames of EV-4 images) acquired in the 6 acquisition periods before the first operation (i.e., the photographing operation in the figure) is received. Then, from these 12 frames of images, 4 frames of EV0 images and 1 frame of EV-4 image satisfying the preset condition are selected as the first captured image, to be combined with the 1 frame of EV-2 image included in the second captured image into an HDR image.
For the beneficial effects achieved by S510, reference may be made to the corresponding description of S410 above, and details are not repeated here.
Manner two:
In the second manner, considering that the electronic device supports capturing images of multiple exposure parameters in the photographing preview mode, the electronic device may capture all the images required for the composite image in the photographing preview mode; in this way, capturing images after receiving the first operation may not be required.
Specifically, as shown in fig. 21, the method may include the following steps:
s601, the electronic equipment starts a camera application and enters a photographing preview mode.
S602, the electronic device estimates a first exposure parameter of the EV0 image in the case of normal exposure.
The implementation processes of S601 and S602 may refer to the corresponding descriptions in S401 and S402, and are not described herein again.
S603, the electronic equipment collects images of the N exposure parameters according to the first exposure parameters.
The number of N may be determined according to the number of exposure parameters required for synthesizing the HDR image and the HDR technology (e.g., stagger HDR or DCG) employed by the electronic device.
Similar to S503 above, the electronic device may first determine the N exposure parameters according to the first exposure parameter, for example, the exposure parameters of three images: an EV0 image, an EV-2 image, and an EV-4 image. Images of these N exposure parameters are then acquired.
For example, taking Stagger HDR as an example, as shown in fig. 22, the electronic device captures images of the three exposure parameters EV0, EV-2, and EV-4 in each acquisition cycle before receiving the photographing operation.
As another example, taking DCG as an example, as shown in FIG. 23, before receiving the photographing operation, the electronic device acquires images of EV0, EV-2 and EV-4, three exposure parameters, in each acquisition cycle.
S604, the electronic equipment receives a first operation used by the user for photographing.
The specific implementation process of S604 may refer to the content of S404, which is not described herein again.
S605, the electronic equipment fuses the first collected image to generate a composite image.
The first captured image is an image captured in the preview mode, that is, the first captured image is an image of the N exposure parameters captured in the above step S603. Specifically, the first captured image may be an image that is stored in the ZSL buffer most recently before the first operation of the user is received. For example, in FIG. 22, the first captured image includes 4 frames of EV0, 4 frames of EV-2 and 4 frames of EV-4 images that were most recently stored in the ZSL buffer by the electronic device prior to receiving the photographing operation. As another example, in FIG. 23, the first captured image includes 4 frames of EV0, 4 frames of EV-2 and 4 frames of EV-4 images that were most recently stored in the ZSL buffer by the electronic device prior to receiving the photographing operation.
Specifically, the electronic device may fuse the first captured image according to various HDR techniques to obtain a composite image. In the embodiment of the present application, the technology adopted by the electronic device to fuse the first collected image may not be limited.
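As a toy stand-in for whichever HDR fusion the device applies, the differently exposed frames can be mapped back to a common EV0 scale and averaged. This is a deliberately simplified sketch (no alignment, saturation weighting, or tone mapping), with hypothetical names; it is not the specific fusion technique of the method, which the embodiment leaves open.

```python
def fuse_exposures(frames, evs):
    """Average differently exposed frames on a common EV0 radiance scale.
    `frames` are flat lists of pixel values in [0, 1]; `evs` are the matching
    EV offsets (0, -2, -4, ...)."""
    n = len(frames)
    fused = [0.0] * len(frames[0])
    for frame, ev in zip(frames, evs):
        scale = 2.0 ** (-ev)  # an EV-2 frame received 1/4 the light, so scale by 4
        for i, px in enumerate(frame):
            fused[i] += px * scale / n
    return fused
```

Fusing a pixel of 0.8 from the EV0 frame with 0.2 from the EV-2 frame (which saw 1/4 the light) yields 0.8 on the common scale, since both frames agree on the scene radiance.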
Additionally, in one implementation, as shown in fig. 24, the method further includes:
and S606, judging whether each image in the first collected image meets a preset condition.
Wherein the preset condition may include at least one of: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs.
Further, on the one hand, after determining through S606 that each image in the first captured image satisfies the preset condition, the electronic device then performs S605, that is, fuses the first captured image to generate a composite image.
On the other hand, if it is determined through S606 that the first captured image includes an image that does not satisfy the preset condition, as shown in fig. 24, the method further includes:
S607, the electronic device acquires a fourth captured image.
Wherein the fourth captured image comprises an image of the N exposure parameters required to synthesize the HDR image.
S608, the electronic device generates a composite image according to the fourth captured image.
For specific implementation procedures and achieved beneficial effects of S606-S608, reference may be made to the corresponding description of S407-S409, and details are not repeated here.
In another implementation, as shown in fig. 25, the method further includes:
S609, the electronic device determines a first captured image meeting a preset condition from the P frame images.
The P frame images are images acquired in the photographing preview mode, that is, P frames among the images of the N exposure parameters acquired in the above S603. Specifically, the P frame images may be the P frames most recently stored in the ZSL buffer before the first operation of the user is received.
Wherein the preset condition may include at least one of: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process of determining whether the image meets the preset condition may refer to the corresponding description in S407, and is not described herein again.
Illustratively, after receiving the first operation, the electronic device first reads the P frame images stored in the ZSL buffer before the first operation is received; for example, in fig. 26, the electronic device reads the 18 frames of images (including 6 frames of EV0 images, 6 frames of EV-2 images, and 6 frames of EV-4 images) acquired in the 6 acquisition periods before the first operation (i.e., the photographing operation in the figure) is received. Then, from these 18 frames of images, 4 frames of EV0 images, 1 frame of EV-2 image, and 1 frame of EV-4 image satisfying the preset conditions are selected as the first captured image, so as to fuse the first captured image to generate an HDR image.
As another example, as shown in fig. 27, the electronic device reads the 18 frames of images (including 6 frames of EV0 images, 6 frames of EV-2 images, and 6 frames of EV-4 images) acquired in the 6 acquisition periods before the first operation (i.e., the photographing operation) is received. Then, from these 18 frames of images, 4 frames of EV0 images, 1 frame of EV-2 image, and 1 frame of EV-4 image satisfying the preset conditions are selected as the first captured image, so as to fuse the first captured image to generate an HDR image.
Additionally, in one implementation, as shown in fig. 28, the method further includes:
S610, the electronic device displays, in the photographing preview mode, an image generated by fusing images of Q exposure parameters, where Q is less than N.
In one possible design, the Q exposure parameters may include the exposure parameter with the smallest exposure brightness and the exposure parameter with the largest exposure brightness among the N exposure parameters.
For example, as shown in fig. 26, before receiving the photographing operation, the electronic device may acquire images of 3 exposure parameters (i.e., N is 3) in the photographing preview mode. In this case, if an image obtained by fusing images of all 3 exposure parameters were displayed on the interface as the preview image, a large amount of hardware resources would be required, and the display might not be smooth.
Therefore, in this design, when the preview image is displayed in the photographing preview mode, the preview image is generated by fusing images of fewer exposure parameters and is then displayed, thereby achieving a more efficient and smoother display of the preview image in the photographing preview mode.
For example, continuing the above example, when the electronic device acquires images of 3 exposure parameters in the photographing preview mode, by adopting the above design of the present application, images of two of the acquired exposure parameters may be fused to generate a preview image, and the preview image may then be displayed. Specifically, the two exposure parameters used for fusion may be the two exposure parameters with the minimum and maximum exposure brightness among the 3 exposure parameters (i.e., the exposure parameters of the EV0 image and the EV-4 image).
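Choosing the Q = 2 preview exposures described in this design reduces to taking the extremes of the EV list; a minimal sketch with a hypothetical function name:

```python
def pick_preview_exposures(evs):
    """Return the exposure parameters with the smallest and largest exposure
    brightness (most negative EV = darkest frame), per the Q-of-N preview design."""
    ordered = sorted(evs)
    return [ordered[0], ordered[-1]]
```

For N = 3 with {EV0, EV-2, EV-4}, this selects EV-4 and EV0, so only two frames are fused for each displayed preview image.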
In the method provided by the embodiment of the application, in the process of synthesizing the HDR image, as shown in fig. 29, first, the electronic device acquires a first acquired image in the preview photographing mode (i.e., S201 in the figure). Then, after receiving a first operation for photographing by the user, a composite image is generated using the first captured image (i.e., S202 in the figure). Thus, the time required for acquiring the image after receiving the photographing operation can be saved. Therefore, the photographing speed is increased, and the use experience of a user is improved.
It is understood that, in order to implement the corresponding functions, the electronic device includes corresponding hardware structures and/or software modules for performing those functions. In the embodiments of the present application, the functional modules of the electronic device are divided according to the foregoing method examples. For example, each functional module may be divided for a corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or may be implemented in the form of a software functional module. Optionally, the division of the modules in the embodiments of the present application is schematic and is merely a logical function division; there may be another division manner in actual implementation.
Fig. 30 is a schematic diagram illustrating a composition of an electronic device according to an embodiment of the present application. The electronic device 70 may be a chip or a system on a chip. The electronic device 70 includes:
an image acquisition unit 701, configured to acquire an image in a shooting preview mode;
an image processing unit 702, configured to generate a composite image by using M frames of images including the first captured image after receiving a first operation of the user for taking a picture.
The first collected image is an image collected in a photographing preview mode, the first collected image comprises an image of at least one exposure parameter, the M frames of images comprise images of N exposure parameters, and M and N are positive integers greater than 1 respectively.
Optionally, the image capturing unit 701 is further configured to capture a second captured image after receiving the first operation.
An image processing unit 702, configured to generate a composite image using M frames of images including a first captured image after receiving a first operation of a user for taking a picture, includes: the image processing unit is specifically used for generating a composite image by utilizing the first collected image and the second collected image; the first captured image and the second captured image include images of N exposure parameters.
Optionally, the first collected image is an image of a first exposure parameter; the second collected image includes an image of an exposure parameter other than the first exposure parameter among the N types of exposure parameters.
Optionally, the first exposure parameter is a first exposure parameter under a normal exposure condition of the image.
Optionally, the first captured image includes: images of W exposure parameters; and the second collected image comprises an image of the exposure parameters except the W exposure parameters in the N exposure parameters.
Optionally, the image capturing unit 701 is configured to capture an image in a preview photographing mode, and includes: and the image acquisition unit is specifically used for acquiring images of N exposure parameters in a photographing preview mode.
An image processing unit 702, configured to generate a composite image using M frames of images including a first captured image after receiving a first operation of a user for taking a picture, includes: the image processing unit is specifically used for fusing the first collected image to generate a composite image; the first captured image includes images of N exposure parameters.
Optionally, the electronic device 70 further includes: a display unit 703 for displaying a preview image in the photographing preview mode. The preview image is an image generated by fusing images of Q exposure parameters. Wherein Q is less than N.
Optionally, the first captured image is an image captured using an interleaved high dynamic range (Stagger HDR) technique.
Optionally, the first captured image is an image acquired by using a dual conversion gain (DCG) technique.
Optionally, the image processing unit 702 is further configured to determine whether the first captured image meets a preset condition; the preset conditions include at least one of the following: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
An image processing unit 702, configured to generate a composite image using M frames of images including a first captured image after receiving a first operation of a user for taking a picture, includes: and the image processing unit is used for generating a composite image by using the M frames of images containing the first collected image after the first collected image is determined to meet the preset condition.
Optionally, the first captured image is an image most recently stored in a zero-second-delay (ZSL) buffer before the first operation of the user for taking a picture is received.
Optionally, the image processing unit 702 is further configured to determine a first captured image meeting a preset condition from the P-frame image.
The P frame image is an image collected in a photographing preview mode, and the preset conditions include at least one of the following conditions: the moving speed of an object in the image is smaller than a first threshold, the shaking amplitude of the electronic equipment is smaller than a second threshold when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
Optionally, the P frame images are the images most recently stored in the zero-second-delay (ZSL) buffer before the first operation of the user for taking a picture is received.
The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium; when the instructions are executed, the method provided in the embodiments of the present application is performed.
Embodiments of the present application also provide a computer program product including instructions. When the instructions are run on a computer, the computer is enabled to execute the method provided in the embodiments of the present application.
The functions, actions, operations, steps, and the like in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (16)

1. A photographing method, comprising:
acquiring an image in a photographing preview mode; and
after receiving a first operation performed by a user to take a photograph, generating a composite image using M frames of images including a first captured image;
wherein the first captured image is an image acquired in the photographing preview mode, the M frames of images comprise images of N exposure parameters, and M and N are each positive integers greater than 1.
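The multi-frame synthesis in claim 1 can be illustrated with a minimal sketch: a linear HDR merge that normalizes each of the M frames by its exposure time and averages the results. The function name, the plain exposure-normalized average, and the use of exposure time as the sole exposure parameter are illustrative assumptions, not the fusion algorithm the patent actually claims:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge M frames taken at different exposure times into one
    composite, by normalizing each frame to a common radiance scale
    and averaging. A real pipeline would also mask clipped pixels
    and align the frames before fusing."""
    assert len(frames) == len(exposure_times) > 1
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        acc += img.astype(np.float64) / t  # per-frame radiance estimate
    return acc / len(frames)

# M = 3 frames of the same scene, captured at N = 3 exposure times,
# so recorded pixel values scale linearly with exposure time here.
scene = np.array([[10.0, 20.0], [30.0, 40.0]])
times = [0.5, 1.0, 2.0]
frames = [scene * t for t in times]
composite = merge_exposures(frames, times)
```

Because each frame is divided by its own exposure time before averaging, well-exposed and short-exposed frames contribute on the same radiance scale, which is the basic idea behind combining images of N exposure parameters.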
2. The photographing method according to claim 1, wherein the method further comprises: acquiring a second captured image after receiving the first operation; and
generating the composite image using the M frames of images including the first captured image comprises: generating the composite image using the first captured image and the second captured image, wherein the first captured image and the second captured image comprise images of the N exposure parameters.
3. The method according to claim 2, wherein the first captured image is an image of a first exposure parameter, and the second captured image comprises images of the exposure parameters among the N exposure parameters other than the first exposure parameter.
4. The method according to claim 3, wherein the first exposure parameter is an exposure parameter for normal exposure of the image.
5. The method according to claim 2, wherein the first captured image comprises images of W exposure parameters, and the second captured image comprises images of the exposure parameters among the N exposure parameters other than the W exposure parameters.
6. The method according to claim 1, wherein acquiring an image in the photographing preview mode comprises: acquiring images of the N exposure parameters in the photographing preview mode; and
generating the composite image using the M frames of images including the first captured image comprises: fusing the first captured image to generate the composite image, wherein the first captured image comprises images of the N exposure parameters.
7. The method according to claim 6, further comprising:
displaying a preview image in the photographing preview mode, wherein the preview image is an image generated by fusing images of Q exposure parameters, and Q is less than N.
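Claim 7 has the preview fuse only Q of the N exposure streams, presumably to keep preview latency low. One plausible selection rule is to take the Q exposure values closest to normal exposure; this heuristic, and the EV-based representation, are assumptions for illustration and are not stated in the claims:

```python
def select_preview_exposures(exposure_values, normal_ev=0.0, q=2):
    """Pick the q exposure values closest to normal exposure (EV 0)
    for the lower-latency preview fusion; q must be less than the
    total number N of exposure parameters."""
    assert q < len(exposure_values)
    return sorted(exposure_values, key=lambda ev: abs(ev - normal_ev))[:q]

# N = 3 exposure parameters captured for the final photo,
# Q = 2 of them fused for the on-screen preview.
evs = [-2.0, 0.0, 2.0]
preview_evs = select_preview_exposures(evs)
```

Python's stable sort keeps `-2.0` ahead of `2.0` when their distances to EV 0 tie, so `preview_evs` is `[0.0, -2.0]` here.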
8. The method according to any one of claims 5-7, wherein the first captured image is an image acquired using a staggered high dynamic range (Stagger HDR) technique.
9. The method according to any one of claims 5-7, wherein the first captured image is an image acquired using a dual conversion gain (DCG) technique.
10. The method according to any one of claims 1-9, further comprising: determining whether the first captured image satisfies a preset condition, wherein the preset condition comprises at least one of the following: the motion speed of an object in the image is less than a first threshold, the shake amplitude of the electronic device when the image is captured is less than a second threshold, the autofocus function has converged when the image is captured, and the auto-exposure function has converged when the image is captured; and
generating the composite image using the M frames of images including the first captured image comprises: generating the composite image using the M frames of images including the first captured image after determining that the first captured image satisfies the preset condition.
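The preset-condition gate of claim 10 can be sketched as a simple predicate over per-frame metadata. The `FrameStats` field names and thresholds are illustrative assumptions; note also that the claim only requires the preset condition to comprise *at least one* of the four checks, whereas the stricter variant below requires all of them:

```python
from dataclasses import dataclass

@dataclass
class FrameStats:
    """Per-frame metadata a camera pipeline might attach to a
    captured image. All field names here are illustrative."""
    object_speed: float      # estimated motion of subjects in the frame
    shake_amplitude: float   # gyro-based device shake during exposure
    af_converged: bool       # autofocus had settled at capture time
    ae_converged: bool       # auto-exposure had settled at capture time

def satisfies_preset_condition(s, speed_thresh=1.0, shake_thresh=0.5):
    """Gate multi-frame fusion on frame quality: low subject motion,
    low device shake, and converged AF/AE."""
    return (s.object_speed < speed_thresh
            and s.shake_amplitude < shake_thresh
            and s.af_converged
            and s.ae_converged)

ok = satisfies_preset_condition(FrameStats(0.2, 0.1, True, True))
bad = satisfies_preset_condition(FrameStats(5.0, 0.1, True, True))
```

Frames failing the gate would be excluded from fusion (or trigger a fresh capture), which is why the claim places the determination before the composite image is generated.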
11. The method according to any one of claims 1-10, wherein the first captured image is an image stored in a zero-second-delay (ZSL) buffer before the first operation performed by the user to take a photograph is received.
12. The method according to any one of claims 1-9, further comprising: determining, from P frames of images, the first captured image that satisfies a preset condition;
wherein the P frames of images are images acquired in the photographing preview mode, and the preset condition comprises at least one of the following: the motion speed of an object in the image is less than a first threshold, the shake amplitude of the electronic device when the image is captured is less than a second threshold, the autofocus function has converged when the image is captured, and the auto-exposure function has converged when the image is captured.
13. The method according to claim 12, wherein the P frames of images are the images most recently stored in the ZSL buffer when the first operation performed by the user to take a photograph is received.
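Claims 11-13 describe selecting the photo from frames already sitting in a zero-second-delay (ZSL) buffer when the shutter operation arrives, so the saved image predates the button press. A minimal ring-buffer sketch; the class, the dict-based frame records, and the sharpness flag are illustrative assumptions standing in for the patent's preset condition:

```python
from collections import deque

class ZslBuffer:
    """Minimal zero-second-delay buffer: preview frames are pushed
    continuously, and on shutter press the most recent of the last
    p frames that passes a quality check is chosen."""
    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)  # oldest frames drop out

    def push(self, frame):
        self._frames.append(frame)

    def pick(self, is_good, p):
        """Search the p most recent frames, newest first."""
        for frame in list(self._frames)[-p:][::-1]:
            if is_good(frame):
                return frame
        return None

buf = ZslBuffer(capacity=8)
for i in range(10):
    # Frames whose id is a multiple of 3 are marked blurry here.
    buf.push({"id": i, "sharp": i % 3 != 0})
chosen = buf.pick(lambda f: f["sharp"], p=4)  # newest sharp of the last 4
```

With capacity 8, frames 2-9 remain buffered; among the last P = 4 frames (6-9), frame 9 is blurry, so frame 8 is chosen as the first captured image.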
14. An electronic device, comprising: one or more processors coupled with one or more memories, wherein the one or more memories store a computer program; and
when the computer program is executed by the one or more processors, the electronic device is caused to perform the photographing method according to any one of claims 1-13.
15. A computer-readable storage medium, comprising computer software instructions;
wherein the computer software instructions, when executed on a computer, cause the computer to perform the photographing method according to any one of claims 1-13.
16. A computer program product which, when run on a computer, causes the computer to perform the photographing method according to any one of claims 1-13.
CN202110681582.4A 2021-06-18 2021-06-18 Photographing method and electronic equipment Active CN113382169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110681582.4A CN113382169B (en) 2021-06-18 2021-06-18 Photographing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113382169A true CN113382169A (en) 2021-09-10
CN113382169B CN113382169B (en) 2023-05-09

Family

ID=77577968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110681582.4A Active CN113382169B (en) 2021-06-18 2021-06-18 Photographing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113382169B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114143461A (en) * 2021-11-30 2022-03-04 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN115297254A (en) * 2022-07-04 2022-11-04 北京航空航天大学 Portable high-dynamic imaging fusion system under high-radiation condition
CN115499579A (en) * 2022-08-08 2022-12-20 荣耀终端有限公司 Processing method and device based on zero-second delay ZSL
CN115526787A (en) * 2022-02-28 2022-12-27 荣耀终端有限公司 Video processing method and device
CN116033275A (en) * 2023-03-29 2023-04-28 荣耀终端有限公司 Automatic exposure method, electronic equipment and computer readable storage medium
CN116055890A (en) * 2022-08-29 2023-05-02 荣耀终端有限公司 Method and electronic device for generating high dynamic range video
CN116452475A (en) * 2022-01-10 2023-07-18 荣耀终端有限公司 Image processing method and related device
WO2023160280A1 (en) * 2022-02-28 2023-08-31 荣耀终端有限公司 Photographing method and related apparatus
CN117135468A (en) * 2023-02-21 2023-11-28 荣耀终端有限公司 Image processing method and electronic equipment
CN117156261A (en) * 2023-03-28 2023-12-01 荣耀终端有限公司 Image processing method and related equipment
CN114143461B (en) * 2021-11-30 2024-04-26 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150244917A1 (en) * 2014-02-25 2015-08-27 Acer Incorporated Dynamic exposure adjusting method and electronic apparatus using the same
CN105376473A (en) * 2014-08-25 2016-03-02 中兴通讯股份有限公司 Photographing method, device and equipment
CN106060422A (en) * 2016-07-06 2016-10-26 维沃移动通信有限公司 Image exposure method and mobile terminal
CN107197169A (en) * 2017-06-22 2017-09-22 维沃移动通信有限公司 A kind of high dynamic range images image pickup method and mobile terminal
CN109951633A (en) * 2019-02-18 2019-06-28 华为技术有限公司 A kind of method and electronic equipment shooting the moon
CN110121882A (en) * 2017-10-13 2019-08-13 华为技术有限公司 A kind of image processing method and device


Also Published As

Publication number Publication date
CN113382169B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113382169B (en) Photographing method and electronic equipment
WO2021093793A1 (en) Capturing method and electronic device
KR102381713B1 (en) Photographic method, photographic apparatus, and mobile terminal
WO2020168956A1 (en) Method for photographing the moon and electronic device
WO2020073959A1 (en) Image capturing method, and electronic device
WO2021213477A1 (en) Viewfinding method for multichannel video recording, graphic user interface, and electronic device
JP7403551B2 (en) Recording frame rate control method and related equipment
CN111212235B (en) Long-focus shooting method and electronic equipment
WO2023015981A1 (en) Image processing method and related device therefor
CN109729274B (en) Image processing method, image processing device, electronic equipment and storage medium
CN113660408B (en) Anti-shake method and device for video shooting
CN115526787B (en) Video processing method and device
CN115689963B (en) Image processing method and electronic equipment
US20210409588A1 (en) Method for Shooting Long-Exposure Image and Electronic Device
WO2024045670A1 (en) Method for generating high-dynamic-range video, and electronic device
CN113452898A (en) Photographing method and device
CN106803879A (en) Cooperate with filming apparatus and the method for finding a view
CN111953899B (en) Image generation method, image generation device, storage medium, and electronic apparatus
WO2024041394A1 (en) Photographing method and related apparatus
CN113572948B (en) Video processing method and video processing device
WO2021185374A1 (en) Image capturing method and electronic device
WO2023142830A1 (en) Camera switching method, and electronic device
WO2018088121A1 (en) Imaging device, imaging method, and imaging program
CN116723383B (en) Shooting method and related equipment
WO2023160224A9 (en) Photographing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant