CN113382169B - Photographing method and electronic equipment - Google Patents
Photographing method and electronic equipment
- Publication number
- CN113382169B CN113382169B CN202110681582.4A CN202110681582A CN113382169B CN 113382169 B CN113382169 B CN 113382169B CN 202110681582 A CN202110681582 A CN 202110681582A CN 113382169 B CN113382169 B CN 113382169B
- Authority
- CN
- China
- Prior art keywords
- image
- acquired
- exposure
- photographing
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The application provides a photographing method and electronic equipment, and relates to the field of image processing. The method can shorten the photographing time and improve the user experience. The method comprises the following steps: in a photographing preview mode, acquiring images; after receiving a first operation of a user for photographing, generating a composite image using M frames of images that include a first acquired image. The first acquired image is an image acquired in the photographing preview mode and includes an image of at least one exposure parameter; the M frames of images include images of N exposure parameters, where M and N are each positive integers greater than 1. The method and the device are applied to photographing.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a photographing method and an electronic device.
Background
In a real environment, the observable brightness difference, i.e., the ratio of the brightness of the brightest object to that of the darkest object, is about 10^8; the maximum brightness difference the human eye can perceive is about 10^5; whereas current image display devices such as displays can represent only 256 different brightness levels. Therefore, when shooting with a camera, on the one hand, the photo can be made to show more dark details by reducing the exposure, but this sacrifices detail in the bright parts of the picture; on the other hand, the photo can be made to show more bright details by increasing the exposure, but this sacrifices detail in the dark parts of the picture.
In order to enable photographs to represent a larger dynamic range and more picture detail, high-dynamic-range (HDR) technology was developed. Specifically, HDR continuously takes multiple pictures with successively increasing (or decreasing) exposure and then fuses them, so that a picture containing both bright and dark details can be obtained.
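As background, the fusion step HDR performs can be illustrated with a minimal sketch: a toy per-pixel weighted average over 1-D "images", where pixels near mid-grey are treated as well exposed. This is generic exposure fusion under an illustrative Gaussian weighting, not the method claimed in this application:

```python
import math

def well_exposedness(value, mid=128.0, sigma=64.0):
    """Gaussian weight peaking at mid-grey (illustrative choice)."""
    return math.exp(-((value - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    """Fuse equally sized frames by a per-pixel weighted average."""
    fused = []
    for pixels in zip(*frames):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)  # always > 0 since each weight is exp(...)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark = [10, 40, 250]      # short exposure: shadows crushed, highlight kept
bright = [120, 200, 255]  # long exposure: shadows visible, highlight clipped
fused = fuse_exposures([dark, bright])
print([round(v) for v in fused])
```

The fused result keeps each pixel between its dark and bright versions, pulled toward whichever frame exposed that pixel better, which is the qualitative behavior described for fig. 1 (c).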
However, because HDR needs to capture multiple photos and then fuse them, the whole process takes a relatively long time, making the photographing time too long and affecting the user experience.
Disclosure of Invention
The embodiment of the application provides a photographing method and electronic equipment, which are used for shortening photographing time.
In a first aspect, a photographing method is provided, including: acquiring an image in a photographing preview mode; after receiving a first operation of a user for photographing, generating a composite image using M frames of images that include a first acquired image; the first acquired image is an image acquired in the photographing preview mode, the M frames of images include images of N exposure parameters, and M and N are each positive integers greater than 1. In the above method, after the first operation of the user is received, the first acquired image of at least one exposure parameter, acquired before the first operation, can be used to generate the composite image. In this way, the electronic device is spared from spending time, after receiving the first operation, acquiring an image of that at least one exposure parameter (i.e., the exposure parameter of the first acquired image), so that the drawing time of the electronic device is shortened and the user experience is improved.
In one possible design, the method further includes: acquiring a second acquired image after receiving the first operation. Generating the composite image using the M frames of images including the first acquired image includes: generating the composite image using the first acquired image and the second acquired image, where the first acquired image and the second acquired image together include images of the N exposure parameters. In the above design, by generating the composite image from the first acquired image captured before the first operation is received and the second acquired image captured after it, on the one hand, no time need be spent after the first operation capturing an image of the at least one exposure parameter (i.e., the exposure parameter of the first acquired image), shortening the drawing time of the electronic device; on the other hand, after the first operation is received, a second acquired image of exposure parameters other than that at least one exposure parameter can be captured according to the current photographing requirement, and generating the composite image from both improves the imaging effect of the final composite image.
In one possible design, the first acquired image is an image of a first exposure parameter, and the second acquired image includes images of the exposure parameters other than the first exposure parameter among the N exposure parameters. In this design, considering that in some application scenarios the electronic device collects images of only one exposure parameter (the first exposure parameter) in the photographing preview mode, the remaining images required for the composite image can be collected as the second acquired image (images of the exposure parameters other than the first exposure parameter) after the first operation is received, and the composite image is then generated, shortening the drawing time of the electronic device and improving the user experience.
In one possible design, the first exposure parameter is the exposure parameter for normal exposure of the image.
In one possible design, the first acquired image includes images of W exposure parameters, and the second acquired image includes images of the exposure parameters other than the W exposure parameters among the N exposure parameters. In this design, considering that in some application scenarios the electronic device may collect images of multiple exposure parameters (the W exposure parameters) in the photographing preview mode, the remaining images required for the composite image can be collected as the second acquired image (images of the exposure parameters other than the W exposure parameters) after the first operation is received, and the composite image is then generated, shortening the drawing time of the electronic device and improving the user experience.
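The division of labor in the designs above, where preview frames cover W of the N exposure parameters and the remainder are captured after the shutter press, amounts to a set difference. A minimal sketch, with EV labels and frame counts chosen for illustration rather than taken from the patent:

```python
def exposures_to_capture(needed, already_acquired):
    """Return the exposure parameters still missing, preserving order."""
    have = set(already_acquired)
    return [ev for ev in needed if ev not in have]

needed = ["EV0", "EV-2", "EV-4"]   # the N exposure parameters for the composite
preview = ["EV0"]                  # the W parameters covered before the shutter
print(exposures_to_capture(needed, preview))  # → ['EV-2', 'EV-4']
```

When the preview already covers all N parameters (the design described next), the result is an empty list and no post-shutter capture is needed at all.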
In one possible design, acquiring an image in the photographing preview mode includes: acquiring images of the N exposure parameters in the photographing preview mode. Generating the composite image using the M frames of images including the first acquired image includes: fusing the first acquired images to generate the composite image, where the first acquired images include images of the N exposure parameters. In the above design, considering that in some application scenarios the electronic device can collect images of all N exposure parameters in the photographing preview mode, all images needed to generate the composite image can be collected in the preview mode, so that after the first operation is received, the composite image is generated using only the already-collected images and no time needs to be spent on further acquisition, thereby further shortening the drawing time of the electronic device and improving the user experience.
In one possible design, the method further includes: displaying a preview image in the photographing preview mode, the preview image being generated by fusing images of Q exposure parameters, where Q is less than N. In the above design, since the preview image displayed in the photographing preview mode is generated by fusing images of only Q exposure parameters, it occupies fewer system resources and is generated faster than an image fused from images of N exposure parameters, so this design saves system resources and keeps the preview display smooth in the photographing preview mode.
In one possible design, the first acquired image is an image acquired using the staggered high-dynamic-range (Stagger HDR) technique. With this design, the effects of shortening the drawing time of the electronic device and improving the user experience can be achieved on electronic devices adopting the Stagger HDR technique.
In one possible design, the first acquired image is an image acquired using a dual conversion gain DCG technique. Through the design, the effect of shortening the drawing time of the electronic equipment and improving the use experience of a user can be achieved on the electronic equipment adopting the DCG technology.
In one possible design, the method further includes: determining whether the first acquired image meets a preset condition. The preset condition includes at least one of the following: the moving speed of the object in the image is smaller than a first threshold; the shake amplitude of the electronic device when the image is captured is smaller than a second threshold; the autofocus function has converged when the image is captured; and the auto-exposure function has converged when the image is captured. Generating the composite image using the M frames of images including the first acquired image includes: after determining that the first acquired image satisfies the preset condition, generating the composite image using the M frames of images including the first acquired image. This design ensures the image quality of the first acquired image used to generate the composite image and reduces the possibility of ghosting, blur, and similar artifacts in the final composite image.
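The preset-condition check above can be sketched as follows. Note that the claim requires only at least one of the four conditions, while this sketch conservatively requires all of them; the field names and threshold values are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class FrameStats:
    motion_speed: float     # estimated subject motion (illustrative units)
    shake_amplitude: float  # device shake during capture (illustrative units)
    af_converged: bool      # autofocus had settled when the frame was captured
    ae_converged: bool      # auto-exposure had settled when the frame was captured

def meets_preset_condition(stats, motion_thresh=5.0, shake_thresh=0.5):
    """Accept a frame for fusion only if all four conditions hold."""
    return (stats.motion_speed < motion_thresh
            and stats.shake_amplitude < shake_thresh
            and stats.af_converged
            and stats.ae_converged)

steady = FrameStats(1.0, 0.1, True, True)
blurry = FrameStats(12.0, 0.1, True, True)   # fast subject motion
print(meets_preset_condition(steady), meets_preset_condition(blurry))  # True False
```

Frames failing the check are simply skipped as fusion candidates, which is how the design reduces ghosting in the final composite image.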
In one possible design, the first acquired image is the image most recently stored in the zero-shutter-lag (ZSL) buffer before the first operation of the user for photographing is received. In this design, selecting the most recent image in the ZSL buffer as the first acquired image for generating the composite image reduces the interval between the images used for the composite image, reduces the possibility of ghosting, shake, and similar artifacts in the final composite image, and improves the success rate of generating the composite image.
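The ZSL buffer described above behaves like a small ring buffer that the preview loop keeps filling, from which the most recent frames are reused at shutter press. A minimal sketch, with the buffer depth and string-valued frames chosen purely for illustration:

```python
from collections import deque

class ZslBuffer:
    def __init__(self, depth=8):
        self._frames = deque(maxlen=depth)  # oldest frames fall off automatically

    def push(self, frame):
        self._frames.append(frame)

    def latest(self, count):
        """Return the `count` most recent frames, oldest first."""
        return list(self._frames)[-count:]

buf = ZslBuffer(depth=4)
for i in range(10):          # the preview loop keeps pushing frames
    buf.push(f"frame{i}")
print(buf.latest(2))         # → ['frame8', 'frame9']
```

Because the buffer always holds the newest preview frames, the frame picked at shutter press was captured only moments earlier, which is what keeps the inter-frame interval of the fusion inputs short.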
In one possible design, the method further comprises: and determining a first acquisition image meeting preset conditions from the P frame images. The P frame image is an image acquired in a photographing preview mode, and the preset conditions comprise at least one of the following: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. By the design, the image quality of the first acquired image for generating the composite image can be ensured, and the possibility of phenomena such as ghosting, shaking and the like in the finally generated composite image is reduced.
In one possible design, the P frames of images are the images most recently stored in the zero-shutter-lag (ZSL) buffer when the first operation of the user for photographing is received. This reduces the interval between the images used to generate the composite image, reduces the possibility of ghosting, shake, and similar artifacts in the final composite image, and improves the success rate of generating the composite image.
In a second aspect, an electronic device is provided, including: an image acquisition unit configured to acquire images in a photographing preview mode; and an image processing unit configured to, after a first operation of a user for photographing is received, generate a composite image using M frames of images that include a first acquired image; the first acquired image is an image acquired in the photographing preview mode, the M frames of images include images of N exposure parameters, and M and N are each positive integers greater than 1.
In one possible design, the image acquisition unit is further configured to acquire a second acquired image after receiving the first operation; an image processing unit for generating a composite image using an M-frame image including a first captured image after receiving a first operation for photographing by a user, including: the image processing unit is specifically used for generating a composite image by utilizing the first acquired image and the second acquired image; the first acquired image and the second acquired image comprise images of N exposure parameters.
In one possible design, the first acquired image is an image of a first exposure parameter; the second acquired image includes an image of an exposure parameter other than the first exposure parameter among the N exposure parameters.
In one possible design, the first exposure parameter is the first exposure parameter under normal exposure of the image.
In one possible design, the first acquired image includes: an image of W exposure parameters; the second acquired image includes images of exposure parameters other than the W exposure parameters among the N exposure parameters.
In one possible design, the image capturing unit is configured to capture an image in a photographing preview mode, and includes: the image acquisition unit is specifically used for acquiring images of N exposure parameters in a photographing preview mode; an image processing unit for generating a composite image using an M-frame image including a first captured image after receiving a first operation for photographing by a user, including: the image processing unit is specifically used for fusing the first acquired image to generate a composite image; the first acquired image includes images of N exposure parameters.
In one possible design, the electronic device further includes: a display unit for displaying a preview image in a photographing preview mode; the preview image is an image generated by fusing the images with Q exposure parameters; wherein Q is less than N.
In one possible design, the first acquired image is an image acquired using the staggered high-dynamic-range (Stagger HDR) technique.
In one possible design, the first acquired image is an image acquired using a dual conversion gain DCG technique.
In one possible design, the image processing unit is further configured to determine whether the first acquired image meets a preset condition; the preset conditions include at least one of the following: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. An image processing unit for generating a composite image using an M-frame image including a first captured image after receiving a first operation for photographing by a user, including: and the image processing unit is used for generating a composite image by using the M frame image containing the first acquired image after determining that the first acquired image meets the preset condition.
In one possible design, the first acquired image is the image most recently stored in the zero-shutter-lag (ZSL) buffer before the first operation of the user for photographing is received.
In one possible design, the image processing unit is further configured to determine a first acquired image that meets a preset condition from the P-frame images; the P frame image is an image acquired in a photographing preview mode, and the preset conditions comprise at least one of the following: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
In one possible design, the P frames of images are the images most recently stored in the zero-shutter-lag (ZSL) buffer when the first operation of the user for photographing is received.
In a third aspect, there is provided an electronic device comprising: one or more processors coupled to the one or more memories, the one or more memories storing a computer program; the computer program, when executed by one or more processors, causes the electronic device to perform the photographing method as provided by the first aspect or any of the designs of the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium comprising: computer software instructions; the computer software instructions, when run in a computer, cause the computer to perform the photographing method as provided by the first aspect or any of the designs of the first aspect described above.
In a fifth aspect, there is provided a computer program product which, when run on a computer, causes the computer to perform the photographing method as provided by the first aspect or any of the designs of the first aspect.
The effect descriptions of the second aspect to the fifth aspect may refer to the effect descriptions of the first aspect, and are not described herein.
Drawings
FIG. 1 is a schematic diagram of an effect of HDR photographing;
FIG. 2 is a schematic flow diagram of an HDR synthesized image;
FIG. 3 is a timing diagram of an electronic device capturing images;
FIG. 4 is a second timing diagram of an electronic device capturing images;
FIG. 5 is a third timing diagram of an electronic device capturing images;
FIG. 6 is a schematic diagram of a pixel circuit;
FIG. 7 is a fourth timing diagram of an electronic device capturing images;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is a schematic flow chart of a photographing method according to an embodiment of the present application;
fig. 10 is a schematic timing diagram of an electronic device capturing an image according to an embodiment of the present application;
FIG. 11 is a second timing diagram of an electronic device capturing images according to an embodiment of the present disclosure;
FIG. 12 is a second flowchart of a photographing method according to the embodiment of the present application;
FIG. 13 is a third flowchart of a photographing method according to the embodiment of the present disclosure;
FIG. 14 is a third timing diagram of an electronic device capturing images according to an embodiment of the present disclosure;
FIG. 15 is a flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 16 is a timing diagram of an electronic device capturing images according to an embodiment of the present disclosure;
FIG. 17 is a schematic diagram showing a timing sequence of capturing images by an electronic device according to an embodiment of the present disclosure;
FIG. 18 is a fifth flowchart of a photographing method according to the embodiment of the present application;
FIG. 19 is a flowchart illustrating a photographing method according to an embodiment of the present disclosure;
FIG. 20 is a timing diagram of an electronic device capturing images according to an embodiment of the present disclosure;
FIG. 21 is a flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 22 is a timing diagram of an electronic device capturing images according to an embodiment of the present disclosure;
FIG. 23 is a schematic diagram showing a timing sequence of capturing images by an electronic device according to an embodiment of the present disclosure;
FIG. 24 is a flowchart illustrating a photographing method according to an embodiment of the present disclosure;
FIG. 25 is a flowchart illustrating a photographing method according to an embodiment of the present disclosure;
FIG. 26 is a diagram illustrating a timing sequence of capturing images by an electronic device according to an embodiment of the present disclosure;
FIG. 27 is a schematic diagram of a timing sequence of capturing images by an electronic device according to an embodiment of the present disclosure;
FIG. 28 is a flowchart illustrating a photographing method according to an embodiment of the present disclosure;
FIG. 29 is a flowchart illustrating a photographing method according to an embodiment of the present disclosure;
fig. 30 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the embodiments of the present application, the words "first", "second", etc. are used, for clarity of description, to distinguish identical or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or order of execution, and do not necessarily indicate a difference. Meanwhile, words such as "exemplary" or "such as" are used to present examples, illustrations, or descriptions of the relevant concepts in a concrete manner for easy understanding.
First, a description is given of the related art related to the embodiments of the present application:
1. High dynamic range (HDR)
HDR can be understood as a technique of fusing multiple frames of low-dynamic range (LDR) images of multiple exposure intensities into one frame of HDR image. More picture details may be included in the HDR image.
For example, fig. 1 shows photographs of objects on a table, where fig. 1 (a) is an LDR image with higher exposure brightness and fig. 1 (b) is an LDR image with lower exposure brightness. In fig. 1 (a), because the exposure brightness is larger, the picture details in the dimly lit box can be clearly displayed; however, the basketball on the table, which has higher brightness, is overexposed, and only its outline can be displayed. In fig. 1 (b), because the exposure brightness is smaller, the picture details of the basketball can be clearly displayed, but the picture details in the dimly lit box cannot. By HDR, fig. 1 (a) and (b) can be merged into fig. 1 (c), in which both the picture details of the basketball and the picture details in the box are displayed.
Specifically, as shown in fig. 2, a general HDR photographing flow includes:
s101, the electronic equipment starts a camera application, enters a photographing preview mode and displays a photographing preview picture.
S102, the electronic equipment receives photographing operation of a user.
S103, the electronic equipment responds to photographing operation, and exposure parameters are set.
For example, suppose HDR requires four EV0 images, one EV-2 image, and one EV-4 image. The electronic device sets exposure parameters so that the camera sequentially acquires these images according to the set parameters.
For simplicity of description, in the embodiments of the present application, the exposure brightness of normal exposure is denoted EV0, and an exposure brightness of EV0 × 2^n is denoted EVn. For example, EV-1 represents an exposure brightness half of EV0, and EV-2 half of EV-1; likewise, EV1 represents an exposure brightness twice EV0, and EV2 twice EV1.
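The EV notation can be checked with a one-line relation; this is a sketch of the notation only, not of any capture logic:

```python
def relative_exposure(n):
    """Exposure brightness of EVn relative to EV0, i.e. 2**n."""
    return 2.0 ** n

print(relative_exposure(0))   # → 1.0   (EV0, normal exposure)
print(relative_exposure(-1))  # → 0.5   (EV-1, half of EV0)
print(relative_exposure(-2))  # → 0.25  (EV-2, half of EV-1)
print(relative_exposure(1))   # → 2.0   (EV1, twice EV0)
```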
S104, the camera acquires images according to the exposure parameters.
Continuing the example above, the camera sequentially acquires four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image.
S105, the electronic equipment synthesizes the images by using the acquired images to obtain an HDR image.
Specifically, fig. 3 is a timing chart in a photographing process according to an embodiment of the present application. The time sequence diagram comprises two time axes, wherein the upper time axis is used for reflecting the time of image exposure of a photosensitive element in the camera, and the lower time axis is used for reflecting the time of reading photosensitive data by the camera.
The electronic device receives the photographing operation of the user at time t1. After receiving the photographing operation, the electronic device needs a period of time to generate the exposure parameters, send them to the camera, and have the camera apply them. Assuming this takes three acquisition periods T, the electronic device then acquires four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image during the six acquisition periods in the dashed box shown in fig. 3. The electronic device then synthesizes an HDR image from the six acquired frames.
2. Staggered high dynamic range (Stagger HDR)
Stagger HDR can acquire multiple frames with different exposure brightness within one acquisition period by increasing the frame rate of the sensor. For example, Stagger HDR may generate a long-exposure image and a short-exposure image in one acquisition period; as shown in fig. 4, the electronic device can acquire both an EV0 frame and an EV-4 frame in one acquisition period T. For another example, Stagger HDR may generate a long-exposure, a medium-exposure, and a short-exposure image in one acquisition period; as shown in fig. 5, the electronic device can acquire EV0, EV-2, and EV-4 frames in one acquisition period T. In addition, in a Stagger HDR scene, since images of multiple exposure parameters are acquired, the blank interval between two frames (which may be referred to as VB) can be shorter than the frame interval VB in a non-Stagger-HDR scene.
Currently, Stagger HDR is mainly applied in the preview mode and the video shooting mode to improve the HDR effect of the video or preview picture. Because the frame interval VB of Stagger HDR is short, ghosting during video and preview can be well reduced.
3. Dual conversion gain (DCG)
DCG may be understood as having two capacitors storing photon energy in the photosensitive cell corresponding to a pixel, or as the ability to make two readouts in a pixel cell circuit.
Illustratively, a conventional complementary metal-oxide-semiconductor (CMOS) pixel cell circuit structure is shown in fig. 6 (a), where PD is a photodiode, TX is a transfer transistor, RST is a reset transistor, SF is a source follower, RS is a row-select transistor, V_AA_PIX is the analog pixel supply voltage, V_OUT is the pixel output voltage node, FD is the floating diffusion node, and C_FD is the capacitance at the FD node. Fig. 6 (b) shows a pixel cell circuit structure using DCG. Compared with fig. 6 (a), the circuit in fig. 6 (b) adds one capacitor and a DCG transistor at the dashed box, giving it the ability to make two readouts.
Thus, in a scene using DCG, a readout similar to Stagger HDR can be made within one shot. Unlike Stagger HDR, DCG obtains a superposition of exposures, whereas Stagger HDR obtains a new exposure for each frame.
Illustratively, fig. 7 is a timing diagram of the three HDR techniques (conventional HDR, Stagger HDR, and DCG) when acquiring images. It can be seen that, to acquire images of the three exposure parameters EV0, EV-2, and EV-4, DCG takes the shortest time, so DCG can further reduce ghosting and can also increase the long-exposure time.
The photographing method provided by the application is described in the following with reference to examples:
as can be seen from the schemes described in S101 to S105, in this scheme, at least 9 acquisition periods are required from the time when the user performs the photographing operation to the time when the electronic device acquires 6 frames of images to be fused. In addition, in addition to the time required by the electronic equipment to synthesize the HDR image, the time cost of the whole photographing process is relatively high, and the use experience of a user is affected.
In view of the above technical problem, an embodiment of the present application provides a photographing method in which an image acquired before the user triggers photographing (referred to as a first acquired image for short) is used to generate a composite image. In this way, the electronic device avoids spending time, after receiving the user's operation triggering photographing, on acquiring images at the exposure parameter of the first acquired image.
This saves the time otherwise required to capture images after the photographing operation is received, thereby speeding up photographing and improving the user experience.
The following describes the solution provided in the embodiments of the present application with reference to specific examples:
the embodiment of the application provides a photographing method which can be applied to electronic equipment. After detecting the input operation of the user at the specific position of the touch screen, the electronic equipment can display the corresponding interface and information according to the operation.
For example, the electronic device in the embodiments of the present application may be an electronic device with a touch screen, such as a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), or an augmented reality (augmented reality, AR) / virtual reality (virtual reality, VR) device; the embodiments of the present application do not particularly limit the specific form of the device.
Taking an electronic device as an example of a mobile phone, referring to fig. 8, the electronic device may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (universal serial bus, USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a subscriber identity module (subscriber identification module, SIM) card interface 395, and the like.
The sensor module 380 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units, such as: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, a neural network processor (neural-network processing unit, NPU), and/or a micro control unit (micro controller unit, MCU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 300, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that the processor 310 has just used or recycled. If the processor 310 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 310 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, a serial peripheral interface (serial peripheral interface, SPI), an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 340 is configured to receive a charge input from a charger. The power management module 341 is configured to connect the battery 342, the charge management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 to power the processor 310, the internal memory 321, the external memory, the display screen 394, the camera 393, the wireless communication module 360, and the like. In other embodiments, the power management module 341 and the charging management module 340 may also be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution for wireless communication, including 2G/3G/4G/5G, etc., applied on the electronic device 300. The wireless communication module 360 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wi-Fi network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared (IR), etc., applied on the electronic device 300.
The electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used for displaying images, videos, and the like. The display screen may be a touch screen. In some embodiments, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
Electronic device 300 may implement capture functionality through an ISP, camera 393, video codec, GPU, display 394, and application processor, among others. The ISP is used to process the data fed back by camera 393. Camera 393 is used to capture still images or video. In some embodiments, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The NPU is a neural-network (NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, and can also learn continuously. Applications such as intelligent cognition of the electronic device 300 may be implemented by the NPU, for example: film state recognition, image restoration, image recognition, face recognition, voice recognition, text understanding, and the like.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 300. The external memory card communicates with the processor 310 through an external memory interface 320 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 321 may be used to store computer executable program code that includes instructions. The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321. The internal memory 321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 300 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 300 may implement audio functions, such as music playing and recording, through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, an application processor, and the like.
A touch sensor is also known as a "touch panel (TP)". The touch sensor may be disposed on the display screen 394, and the touch sensor and the display screen 394 together form a touchscreen. The touch sensor is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 394. In other embodiments, the touch sensor may also be disposed on a surface of the electronic device 300 at a location different from that of the display screen 394.
The keys 390 include a power on key, a volume key, etc. The motor 391 may generate a vibration alert. The indicator 392 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 395 is for interfacing with a SIM card.
In order to save the time required to capture images after the photographing operation is received, thereby accelerating photographing and improving the user experience, the method provided in the embodiments of the present application can be implemented in two ways:
In the first mode, a composite image may be generated by using a first acquired image acquired in a photographing preview mode and a second acquired image acquired after receiving a first operation for photographing by a user.
In the second mode, all images required for generating the composite image are acquired in the photographing preview mode, so the composite image can be generated without acquiring any further image after the first operation is received.
The following describes the implementation process of the two modes in detail with reference to examples:
mode one:
in one aspect, when an HDR image is synthesized, multiple frames of images at multiple exposure parameters must be acquired first, and only then can the HDR image be determined from those frames. On the other hand, in some scenes, while the electronic device is in the photographing preview mode it collects images according to preset exposure parameters, and the exposure parameters of the images required for synthesizing the HDR image are not generated until the user's first photographing operation is received.
For example, the electronic device may estimate, in the photographing preview mode, the exposure parameter of the image under normal exposure (referred to as a first exposure parameter), and then collect an image with brightness EV0 according to the first exposure parameter. As shown in fig. 3, before receiving the user's first operation (i.e., the photographing operation in the figure), the electronic device collects EV0 images; after receiving the user's first operation, the electronic device starts to generate the exposure parameters of the images required for synthesizing the HDR image and acquires the required images according to the generated exposure parameters, namely the four frames of EV0 images, one frame of EV-2 image, and one frame of EV-4 image acquired 3 acquisition periods after the photographing operation in fig. 3.
Therefore, the photographing method provided in the embodiments of the present application can synthesize an HDR image using two types of images: images acquired according to the preset exposure parameters in the photographing preview mode (called the first acquired image), and images acquired by the electronic device according to the generated exposure parameters after the user's first photographing operation is received. Since the images at the preset exposure parameters need not be acquired after the user's first operation is received, the time for acquiring them is saved, which speeds up photographing and improves the user experience.
Specifically, as shown in fig. 9, the photographing method provided in the present application may include the following steps:
s401, the electronic equipment starts a camera application and enters a photographing preview mode.
Specifically, the user may click on a camera icon in a display interface of the electronic device to trigger the electronic device to start a camera application in response to the click operation, and enter a photographing preview mode.
In this application, the photographing preview mode can be understood as an operation mode in which, before receiving the user's first photographing operation, the electronic device acquires images with the camera and displays them on the interface for the user to preview. In practice, the photographing preview mode may also go by other names, such as camera preview mode or browsing mode; as long as the electronic device, in that mode, captures images with the camera before receiving the user's first photographing operation, the mode can be understood as the photographing preview mode of this application.
S402, the electronic device estimates a first exposure parameter of the image in the case of normal exposure.
The exposure parameter may be any parameter reflecting the exposure amount of an image. For example, the exposure parameters may include one or more of light-on time, shutter speed, light-on area, and aperture size. One exposure parameter corresponds to one exposure brightness of an image, which can be understood as follows: when the electronic device collects an image according to an exposure parameter, it collects an image with the exposure brightness corresponding to that exposure parameter.
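The correspondence between an EV offset and the exposure amount can be illustrated with a short sketch (not taken from the patent); the base shutter time is an assumed value, and each EV step doubles or halves the exposure:

```python
# Illustrative sketch: given the shutter time chosen for EV0, the
# time for an EVn image follows as base_time * 2**n, so EV-2 receives
# a quarter of the light of EV0 and EV-4 receives a sixteenth.
def exposure_time(base_time_ms: float, ev_offset: int) -> float:
    return base_time_ms * (2 ** ev_offset)

base = 10.0  # assumed EV0 shutter time in milliseconds
print(exposure_time(base, 0))   # 10.0  (EV0)
print(exposure_time(base, -2))  # 2.5   (EV-2)
print(exposure_time(base, -4))  # 0.625 (EV-4)
```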
In addition, in the embodiment of the present application, normal exposure may be understood as exposure performed to make the display effect of the image meet the preset condition, and under the condition of normal exposure, the first exposure parameter of the image may be understood as an exposure parameter corresponding to make the display effect of the image meet the preset condition. The preset conditions can be set according to actual application scenes.
S403, the electronic equipment collects the image of the first exposure parameter.
In this embodiment, the first exposure parameter in the case of normal exposure is taken as an example. In some implementations, the electronic device may employ other exposure parameters to capture images in the photo preview mode, e.g., the electronic device may capture images according to preset exposure parameters, and in these implementations the electronic device may also capture images of other exposure parameters in the photo preview mode without capturing images of the first exposure parameter.
For example, as shown in fig. 10, the electronic device acquires an image of EV0 according to the first exposure parameter in the photographing preview mode before receiving the photographing operation. In the implementation process, the acquired image can be an image with other brightness, for example, an image such as EV1, EV2, EV-1 or EV-2.
Specifically, after the electronic device collects the image of the first exposure parameter, the collected image may be cached in a zero second delay cache (zero shutter lag buffer, ZSL buffer) for subsequent processing. The ZSL buffer may adopt a first-in first-out storage mode, and when a new image is acquired, the ZSL buffer deletes the old image and stores the new image.
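The first-in-first-out behaviour described for the ZSL buffer can be sketched with a bounded deque; the 6-frame capacity is an assumption for illustration only:

```python
from collections import deque

# Minimal sketch of the ZSL buffer's first-in-first-out storage mode:
# when a new frame arrives and the buffer is full, the oldest frame
# is deleted automatically.
zsl_buffer = deque(maxlen=6)   # assumed capacity of 6 frames

for frame_id in range(9):      # capture 9 preview frames
    zsl_buffer.append(f"EV0_frame_{frame_id}")

print(list(zsl_buffer))
# ['EV0_frame_3', ..., 'EV0_frame_8']; frames 0 to 2 were evicted
```

On receiving the first operation, the buffered frames would be read out as candidates for the first acquired image, as described in the steps below.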
S404, the electronic equipment receives a first operation for photographing by a user.
For example, in the photographing preview mode, the electronic device may display a preset control on the interface, where the preset control is used for photographing after the user clicks the preset control, and at this time, the first operation may be an operation that the user clicks the preset control.
S405, the electronic device collects a second collected image.
In an actual implementation process, after receiving the first operation, the electronic device may store the image of the first exposure parameter stored in the ZSL buffer into the memory, and then empty the ZSL buffer, so as to start to collect the second collected image and store the second collected image into the ZSL buffer.
Wherein the second acquired image includes images of other exposure parameters than the first exposure parameter of the N exposure parameters. Where N represents the number of exposure parameters of the image required to synthesize the HDR image.
Specifically, since the EV0 image of the first exposure parameter has been acquired in S403, only images of exposure parameters other than the first exposure parameter, such as an EV-2 image and an EV-4 image, of the N exposure parameters may be acquired here. Thus, the second acquired image may include only images of the other exposure parameters than the first exposure parameter out of the N exposure parameters. Of course, in order to achieve a better image effect, the second acquired image may further include an image of the first exposure parameter, which is not limited in the embodiment of the present application.
In one implementation, S405 may include:
s4051, the electronic device determines an exposure sequence.
Wherein the exposure sequence is used for reflecting the number and the sequence of the second acquired images.
S4052, the electronic equipment acquires a second acquired image according to the exposure sequence.
Illustratively, as in FIG. 10, the electronic device acquires the EV-2 and EV-4 images in the 4th acquisition cycle after receiving the first operation. In the example shown in fig. 10, after receiving the first operation the electronic device first determines the exposure sequence, generates the exposure parameters corresponding to each image in the exposure sequence, and adjusts the camera aperture according to those exposure parameters, so 3 acquisition periods elapse between the electronic device receiving the first operation and acquiring the second acquired image.
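A minimal sketch of S4051/S4052, under the assumption that the N required exposure parameters are those of the EV0/EV-2/EV-4 example above, could look like this; the names are illustrative only:

```python
# Hypothetical sketch: of the N exposure parameters needed for the
# HDR image, only those not already covered by the preview frames
# (the first exposure parameter, EV0) are scheduled as the exposure
# sequence for the second acquired image.
REQUIRED_EVS = [0, 0, 0, 0, -2, -4]   # assumed N-parameter requirement
PREVIEW_EV = 0                        # first exposure parameter

def exposure_sequence(required, preview_ev):
    """Return the ordered EV values still to be captured after the
    first operation (the 'second acquired image')."""
    return [ev for ev in required if ev != preview_ev]

print(exposure_sequence(REQUIRED_EVS, PREVIEW_EV))  # [-2, -4]
```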
S406, the electronic equipment generates a composite image according to the first acquired image and the second acquired image.
The first collected image is an image collected in the photographing preview mode, that is, the first collected image is an image of the first exposure parameter collected in S403. Specifically, the first collected image may be an image recently stored in the ZSL buffer before the first operation of the user is received. For example, in fig. 10, the first captured image includes an image of 4 frames EV0 that the electronic device recently stored in the ZSL buffer before receiving the photographing operation.
Specifically, the electronic device may fuse the first acquired image and the second acquired image according to various HDR techniques to obtain a composite image (e.g., fuse 4 frames EV0, 1 frame EV-2, and 1 frame EV-4 in fig. 10 to obtain a composite image). In the embodiment of the present application, the technology adopted for fusing the first acquired image and the second acquired image by the electronic device may not be limited.
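Since the patent leaves the fusion technique open, the following is only one hedged example of how the first and second acquired images might be fused; the saturation threshold and the weighting scheme are assumptions:

```python
import numpy as np

# Minimal fusion sketch: each frame is divided by its relative
# exposure (2**EV) to estimate radiance, and only unsaturated pixels
# contribute to the average.
SAT = 250.0  # assumed 8-bit saturation threshold

def fuse(frames, evs):
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, ev in zip(frames, evs):
        mask = (frame < SAT).astype(np.float64)   # ignore clipped pixels
        acc += mask * frame / (2.0 ** ev)         # radiance estimate
        weight += mask
    return acc / np.maximum(weight, 1e-9)

f0 = np.array([[100.0, 255.0]])        # EV0 frame: right pixel clipped
f1 = np.array([[25.0, 60.0]])          # EV-2 frame: right pixel valid
print(fuse([f0, f1], [0, -2]))         # [[100. 240.]]
```

The clipped highlight in the EV0 frame is recovered from the EV-2 frame, which is the purpose of acquiring the darker exposures.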
In addition, before the electronic device collects the second acquired image, some time must be spent determining the exposure sequence, generating the exposure parameters corresponding to the images in the sequence, and adjusting the camera aperture according to those parameters; for example, in fig. 10, 3 acquisition periods separate the moment the electronic device receives the first operation from the moment it collects the second acquired image. During that interval the electronic device may also capture some images, such as the three frames of EV0 images at the first exposure parameter captured after the first operation in fig. 10.
Thus, in one implementation, S406 may include:
s4061, the electronic device generates a composite image according to the first acquired image, the second acquired image and the third acquired image.
The third acquired image is an image acquired by the electronic device after the electronic device receives the first operation and before the electronic device acquires the second acquired image. For example, as shown in fig. 11, the third captured image may include an image of three frames EV0 captured by the electronic device after receiving the first operation.
In addition, on the one hand, a ghost phenomenon may occur while the electronic device captures images. The ghost phenomenon can be understood as a phenomenon in which the position of an object changes too much during image acquisition, so that a ghost of the object appears in the image. On the other hand, while the electronic device is acquiring images, the auto-focusing function or the auto-exposure function may not yet have converged. Non-convergence of the auto-focusing function can be understood as the electronic device still adjusting the focal length before auto-focusing succeeds; non-convergence of the auto-exposure function can be understood as the electronic device still adjusting the camera's exposure parameters before the exposure parameters corresponding to normal exposure have been determined.
Thus, in order to increase the success rate of image composition, in one implementation, after receiving the first operation of the user at S404, as shown in fig. 12, the method may further include:
s407, judging whether each image in the first acquired image meets a preset condition.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs.
Illustratively, in an actual implementation: for the above-mentioned "the movement speed of the object in the image is smaller than the first threshold value" in the preset condition, this may be achieved by any one of the following means:
in a first mode, detecting the movement speed of an object in the image by using a sensor for detecting object movement speed in the electronic device, and judging whether the movement speed is smaller than the first threshold;
in a second mode, detecting whether a ghost exists in the image, and if no ghost exists, determining that the movement speed of the object in the image is smaller than the first threshold.
For the above-mentioned "the shake amplitude of the electronic device at the time of capturing an image is smaller than the second threshold value" in the preset condition, this may be achieved by any one of the following methods:
in a first mode, detecting the jitter amplitude of the electronic device by using a sensor for detecting jitter in the electronic device, and then judging whether the jitter amplitude is smaller than the second threshold;
in a second mode, detecting whether a ghost exists in the image, and if no ghost exists, determining that the jitter amplitude of the electronic device when the image was captured is smaller than the second threshold.
In other words, in some scenes, it may be determined whether the motion speed of the object in the image and the shake amplitude of the electronic device meet the conditions, that is, whether the motion speed is less than the first threshold value and the shake amplitude is less than the second threshold value, by detecting whether the ghost exists in the image.
The convergence of the auto-focusing function when shooting the image in the preset condition can be determined by judging whether the focal length of the camera is changed or whether the object in the image is clear or not in a preset time period.
The "convergence of the automatic exposure function when capturing an image" in the above-mentioned preset conditions may be determined by determining whether there is a change in parameters such as the light-on time, shutter speed, light-on area, and aperture size of the camera, or whether the exposure brightness of the image is normal, etc. in the preset period.
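One strict variant of the S407 check, requiring all four preset conditions to hold (the text allows "at least one of" them), might be sketched as follows; the thresholds and field names are assumptions, not taken from the patent:

```python
# Hypothetical per-frame check: a frame qualifies only if subject
# motion and device jitter are below their thresholds and both the
# auto-focus and auto-exposure functions have converged.
MOTION_THRESHOLD = 5.0   # first threshold (assumed units)
JITTER_THRESHOLD = 2.0   # second threshold (assumed units)

def meets_preset_condition(frame: dict) -> bool:
    return (frame["motion_speed"] < MOTION_THRESHOLD
            and frame["jitter_amplitude"] < JITTER_THRESHOLD
            and frame["af_converged"]
            and frame["ae_converged"])

good = {"motion_speed": 1.0, "jitter_amplitude": 0.5,
        "af_converged": True, "ae_converged": True}
ghosted = dict(good, motion_speed=9.0)   # too much subject motion

print(meets_preset_condition(good), meets_preset_condition(ghosted))
# True False
```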
Further, on the one hand, after determining that each image in the first collected image meets the preset condition through S407, the electronic device executes S406, that is, generates a composite image according to the first collected image and the second collected image.
On the other hand, if it is determined through S407 that the first collected image includes an image that does not satisfy the preset condition, as shown in fig. 12, the method further includes:
s408, the electronic equipment acquires a fourth acquired image.
The fourth acquired image includes images at the N exposure parameters required for synthesizing the HDR image. For example, if synthesizing an HDR image requires images at 3 exposure parameters, i.e., EV0, EV-2, and EV-4 images, then, as shown in fig. 3, the fourth acquired image may include the 4 frames of EV0 images, 1 frame of EV-2 image, and 1 frame of EV-4 image in the dashed box in the figure.
S409, the electronic equipment generates a composite image according to the fourth acquired image.
Similar to S406, the electronic device may generate a composite image from the fourth acquired image in accordance with various HDR techniques. In the embodiment of the present application, the HDR technology adopted by the electronic device may not be limited.
In the above implementation manner, on one hand, after determining that the first collected image meets the preset condition, a composite image may be generated according to the first collected image and the second collected image, so as to ensure the quality of the composite image; on the other hand, after the first collected image is determined to include the image which does not meet the preset condition, the fourth collected image can be collected again, and the composite image is generated according to the fourth collected image, so that smooth generation of the composite image is ensured.
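The branch between the S406 path and the S408/S409 fallback described above can be sketched as follows; the function names and frame representation are illustrative only:

```python
# Hedged sketch of the decision flow: if every frame of the first
# acquired image passes the preset-condition check, fuse it with the
# second acquired image (S406); otherwise capture a fourth acquired
# image and generate the composite image from that instead (S408/S409).
def choose_fusion_inputs(first_frames, second_frames, capture_fourth):
    if all(f["ok"] for f in first_frames):
        return first_frames + second_frames   # S406 path
    return capture_fourth()                   # S408/S409 fallback

first = [{"ok": True}, {"ok": False}]         # one frame fails the check
second = [{"ok": True}]
result = choose_fusion_inputs(first, second, lambda: ["fourth"] * 6)
print(len(result))  # 6, i.e. the fallback fourth acquired image was used
```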
In another implementation, after receiving the first operation of the user at S404, as shown in fig. 13, the method may further include:
s410, the electronic equipment determines a first acquired image meeting preset conditions from the P frame images.
The P-frame image is an image acquired in the photographing preview mode, that is, the P-frame image is an image of the first exposure parameter acquired in S403. Specifically, the P-frame image may be the P frames most recently stored in the ZSL buffer before the user's first operation is received; for example, in fig. 14, the electronic device reads the 6 frames of EV0 images most recently stored in the ZSL buffer before the first operation (i.e., the photographing operation in the drawing) is received.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process for determining whether the image satisfies the preset condition may refer to the corresponding description in S407, which is not repeated herein.
For example, after receiving the first operation, the electronic device first reads the P-frame image before receiving the first operation from the ZSL buffer, for example, the electronic device in fig. 14 reads the 6-frame EV0 image acquired before receiving the first operation (i.e., the photographing operation in the drawing). Then, 4 frames of images satisfying a preset condition are selected as the first acquisition image from the 6 frames of images.
After determining the first acquired image in S410, the electronic device executes S406 to generate a composite image according to the first acquired image and the second acquired image.
In the implementation manner, the P frame image with a large number of frames is acquired first, and then the first acquired image meeting the preset condition is selected from the P frame images, so that the quality of the image included in the first acquired image is ensured, and the quality of the synthesized image is further ensured.
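A sketch of S410 under assumed numbers (P = 6 preview frames read from the ZSL buffer, 4 frames needed for the first acquired image) might read:

```python
# Hypothetical sketch: from the P frames read out of the ZSL buffer,
# keep the first frames that meet the preset condition, up to the
# number needed for the first acquired image.
NEEDED = 4  # assumed frame count for the first acquired image

def select_first_acquired(p_frames, meets_condition, needed=NEEDED):
    return [f for f in p_frames if meets_condition(f)][:needed]

p_frames = [0, 1, 2, 3, 4, 5]                  # 6 preview frame indices
meets = lambda f: f != 2                       # assume frame 2 is ghosted
print(select_first_acquired(p_frames, meets))  # [0, 1, 3, 4]
```

Reading more frames than needed and then filtering is what gives this implementation its robustness: a single bad preview frame does not force the fallback capture.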
It will be understood that, in the embodiments of the present application, the electronic device may perform some or all of the steps in the embodiments; these steps or operations are merely examples, and other operations or variations of various operations may also be performed. Furthermore, the steps may be performed in an order different from that presented in the embodiments of the present application, and it is possible that not all of the operations in the embodiments need to be performed. For example, S407 may be executed before S405, or may be executed after S405; for another example, S410 may be performed before S405 or after S405, which is not limited by the method provided in the present application.
In the case where the electronic device supports capturing images of multiple exposure parameters in the photographing preview mode, for example, in a scenario where the electronic device supports Stagger HDR or DCG, as shown in fig. 15, the photographing method provided in the present application may include the following steps:
S501, the electronic equipment starts a camera application and enters a photographing preview mode.
S502, the electronic device estimates a first exposure parameter of the image in the case of normal exposure.
The implementation process of S501 and S502 may refer to the corresponding descriptions in S401 and S402, and are not described herein.
S503, the electronic equipment collects images of W exposure parameters according to the first exposure parameters.
Specifically, after determining the first exposure parameter (i.e., the exposure parameter of the EV0 image), the electronic device may further sequentially determine the exposure parameters of the EV-1, EV-2, EV-4 and other images, and then collect the images of the W exposure parameters in the photographing preview mode by using the Stagger HDR technology according to the determined exposure parameters.
The number W may be determined according to the number of exposures in the same acquisition period supported by the Stagger HDR technology. For example, if the adopted Stagger HDR technology supports two exposures in the same acquisition period, i.e. images of two exposure parameters can be acquired in the same acquisition period, W may be 2. For another example, if the adopted Stagger HDR technology supports three exposures in the same acquisition period, i.e. images of three exposure parameters can be acquired in the same acquisition period, W may be 2 or 3. For another example, if the adopted Stagger HDR technology supports four exposures in the same acquisition period, i.e. images of four exposure parameters can be acquired in the same acquisition period, W may be 2, 3 or 4, and so on.
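The enumeration above (K exposures per period allows W from 2 up to K) can be summarized in a small helper; this is purely an illustration of the rule, with the function name chosen here:

```python
def valid_w_values(exposures_per_period):
    # With K exposures supported per acquisition period, W can be any
    # value from 2 up to K (K=2 -> [2], K=3 -> [2, 3], K=4 -> [2, 3, 4]).
    if exposures_per_period < 2:
        return []
    return list(range(2, exposures_per_period + 1))
```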
It should be noted that, in the present embodiment, the method is mainly described by taking Stagger HDR as an example; it can be understood that the method may also be applicable to other technical scenarios, such as DCG, which is not limited in the present application.
For example, as shown in fig. 16, the electronic device acquires images of two exposure parameters, i.e., EV0 and EV-4, in a photographing preview mode before receiving a photographing operation.
Specifically, after the electronic device collects the images of the W exposure parameters, the collected images may be cached in the ZSL buffer for subsequent processing.
S504, the electronic equipment receives a first operation for photographing by a user.
The specific implementation process of S504 may refer to the content of S404, which is not described herein.
S505, the electronic device collects a second collected image.
After receiving the first operation, the electronic device may store the image stored in the ZSL buffer (i.e., the image of the W exposure parameters described in S503) in the memory, and then empty the ZSL buffer, so as to start to collect the second collected image and store the second collected image in the ZSL buffer.
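The flush-and-refill handling of the ZSL buffer described here can be sketched as follows; the class and method names are hypothetical, not from the patent:

```python
# Sketch of the buffer handling in S505: on the first operation, the
# preview-mode frames are moved from the ZSL buffer into memory, the
# buffer is emptied, and newly captured frames refill it.
class ZslBuffer:
    def __init__(self):
        self.frames = []

    def push(self, frame):
        self.frames.append(frame)

    def flush_to_memory(self, memory):
        # Move the buffered preview frames to main memory, then empty
        # the buffer so the second acquired image can be stored in it.
        memory.extend(self.frames)
        self.frames.clear()

# Usage after receiving the first operation:
memory = []
buf = ZslBuffer()
buf.push("EV0_preview")       # frame captured in preview mode
buf.flush_to_memory(memory)   # first operation received
buf.push("EV-2_capture")      # second acquired image refills the buffer
```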
The second acquired image may include images of exposure parameters other than the W exposure parameters among the N exposure parameters, where N represents the number of exposure parameters of the images required to synthesize the HDR image.
Specifically, since the images of the W exposure parameters have already been acquired in S503, only the images of the exposure parameters other than the W exposure parameters among the N exposure parameters need to be acquired here. Thus, the second acquired image may include only images of the exposure parameters other than the W exposure parameters among the N exposure parameters. Of course, in order to achieve a better image effect, the second acquired image may further include images of the W exposure parameters, which is not limited in the embodiment of the present application.
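The set of exposure parameters still to be acquired is simply the difference between the N required parameters and the W parameters already captured in preview. A minimal sketch, assuming EV values are represented as plain integers:

```python
def remaining_exposures(n_exposure_params, w_exposure_params):
    # Only the exposure parameters not already captured in the preview
    # mode need to be acquired for the second acquired image.
    return [ev for ev in n_exposure_params if ev not in w_exposure_params]
```

For the fig. 16 scenario (N parameters EV0, EV-2, EV-4; W parameters EV0 and EV-4 captured in preview), only EV-2 remains.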
In one implementation, S505 may include:
s5051, the electronic equipment determines an exposure sequence.
Wherein the exposure sequence is used for reflecting the number and the sequence of the second acquired images.
S5052, the electronic equipment acquires a second acquired image according to the exposure sequence.
Illustratively, as shown in fig. 16, the electronic device acquires an image of EV-2 in the 4th acquisition period after receiving the first operation (i.e., the photographing operation in the figure). In addition, since images of two exposure parameters can be acquired in one acquisition period in a Stagger HDR scenario, an EV-2 image and an EV-4 image can be acquired simultaneously, so that the second acquired image may include an EV-2 image and an EV-4 image.
In addition, in the example shown in fig. 16, a certain time (3 acquisition periods in the figure) elapses between receiving the first operation and starting to acquire the second acquired image, during which the electronic device determines the exposure sequence, generates the exposure parameter corresponding to each image in the exposure sequence, and adjusts the aperture of the camera according to the exposure parameters.
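Determining the exposure sequence and its timing can be sketched as below. The three-period setup delay and the packing of the missing EVs into acquisition periods follow the fig. 16 example; the function name, default delay, and return shape are assumptions:

```python
def plan_second_acquisition(missing_evs, per_period, setup_periods=3):
    # Pack the missing EVs into acquisition periods, `per_period` EVs
    # per period (Stagger HDR exposures per period). The first capture
    # starts after `setup_periods` of setup (determining the exposure
    # sequence, generating exposure parameters, adjusting the aperture).
    periods = [missing_evs[i:i + per_period]
               for i in range(0, len(missing_evs), per_period)]
    # Map each batch to the 1-indexed acquisition period in which it runs.
    return {setup_periods + 1 + i: evs for i, evs in enumerate(periods)}
```

With two exposures per period and EV-2/EV-4 still needed, both land in the 4th period after the first operation, matching fig. 16.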
S506, the electronic equipment generates a composite image according to the first acquired image and the second acquired image.
The first collected image is an image collected in a photographing preview mode, that is, the first collected image is an image of the W exposure parameters collected in S503. Specifically, the first collected image may be an image recently stored in the ZSL buffer before the first operation of the user is received. For example, in fig. 16, the first captured image includes 4-frame EV0 images and 4-frame EV-4 images captured by the electronic device before receiving the photographing operation. The second acquired image includes a 1-frame EV-2 image and a 1-frame EV-4 image.
Specifically, the electronic device may fuse the first acquired image and the second acquired image according to various HDR technologies to obtain a composite image. In the embodiment of the present application, the technology adopted for fusing the first acquired image and the second acquired image by the electronic device may not be limited.
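Since the document deliberately leaves the fusion technology open, the following is only a toy well-exposedness-weighted fusion over 1-D pixel lists, to show the general shape of S506; real HDR fusion (registration, ghost removal, tone mapping) is far more involved:

```python
def fuse_exposures(images):
    # images: list of equal-length lists of pixel values in [0, 1].
    # Each pixel is weighted by how close it is to mid-gray (0.5),
    # so well-exposed pixels dominate the result.
    width = len(images[0])
    fused = []
    for x in range(width):
        weights = [max(1e-6, 1.0 - abs(img[x] - 0.5) * 2.0)
                   for img in images]
        total = sum(weights)
        fused.append(sum(w * img[x] for w, img in zip(weights, images)) / total)
    return fused
```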
In addition, in one implementation, the step S506 may include:
s5061, the electronic device generates a composite image according to the first acquired image, the second acquired image and the third acquired image.
The third acquired image is an image acquired by the electronic device after the electronic device receives the first operation and before the electronic device acquires the second acquired image. For example, as shown in fig. 17, the third captured image may include a 3-frame EV0 image and a 3-frame EV-2 image captured by the electronic device after receiving the first operation.
The specific implementation process and the achieved beneficial effects of S5061 may refer to the corresponding description of S4061 above, and are not described herein.
In addition, in order to improve the success rate of image synthesis, in one implementation, after receiving the first operation of the user at S504, as shown in fig. 18, the method further includes:
s507, judging whether each image in the first acquired image meets preset conditions.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs.
Further, on the one hand, after determining that each image in the first collected image meets the preset condition through S507, the electronic device executes S506, that is, generates a composite image according to the first collected image and the second collected image.
On the other hand, if it is determined through S507 that the first collected image includes an image that does not satisfy the preset condition, as shown in fig. 18, the method further includes:
s508, the electronic equipment acquires a fourth acquired image.
Wherein the fourth acquired image comprises images of N exposure parameters required for synthesizing the HDR image.
S509, the electronic device generates a composite image according to the fourth acquired image.
The specific implementation process and the achieved beneficial effects of S507-S509 may refer to the corresponding descriptions of S407-S409 above, and will not be repeated here.
In another implementation, after receiving the first operation of the user at S504, as shown in fig. 19, the method further includes:
s510, the electronic equipment determines a first acquisition image meeting preset conditions from the P frame images.
The P frame image is an image acquired in the photographing preview mode, that is, the P frame image consists of P frames of the images of the W exposure parameters acquired in S503. Specifically, the P frame image may be the P frames most recently stored in the ZSL buffer before receiving the first operation of the user.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process for determining whether the image satisfies the preset condition may refer to the corresponding description in S407, which is not repeated herein.
For example, after receiving the first operation, the electronic device first reads the P frame image acquired before receiving the first operation from the ZSL buffer; for example, the electronic device in fig. 20 reads 12 frames of images (including 6 frames of EV0 images and 6 frames of EV-4 images) acquired in the 6 acquisition periods before receiving the first operation (i.e., the photographing operation in the drawing). Then, 4-frame EV0 images and a 1-frame EV-4 image satisfying the preset condition are selected from the 12 frames as the first acquired image, so as to synthesize an HDR image together with the 1-frame EV-2 image included in the second acquired image.
The beneficial effects achieved in S510 may be referred to the corresponding description of S410, and will not be repeated here.
Mode two:
in the second mode, considering that the electronic device supports capturing images of multiple exposure parameters in the photographing preview mode, the electronic device may capture all images required for the composite image in the photographing preview mode, and in this way, it may not be necessary to capture the images again after receiving the first operation.
Specifically, as shown in fig. 21, the method may include the steps of:
s601, the electronic equipment starts a camera application and enters a photographing preview mode.
S602, the electronic device estimates a first exposure parameter of the EV0 image in the case of normal exposure.
The implementation process of S601 and S602 may refer to the corresponding descriptions in S401 and S402, and are not described herein.
S603, the electronic equipment collects images of N exposure parameters according to the first exposure parameters.
The number N may be determined according to the number of exposure parameters required for synthesizing the HDR image and the HDR technology (e.g., Stagger HDR or DCG) employed by the electronic device.
Wherein, similar to the above S503, the electronic device may first determine N exposure parameters according to the first exposure parameter, for example, N exposure parameters including exposure parameters of EV0 image, EV-2 image and EV-4 image. An image of these N exposure parameters is then acquired.
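Deriving the N exposure parameters from the first exposure parameter can be sketched as follows, assuming EV-n corresponds to scaling the EV0 exposure by 2^n (a common photographic convention; the patent does not fix the exact relationship, and real devices also adjust gain and aperture):

```python
def derive_exposure_params(ev0_exposure, ev_steps=(-2, -4)):
    # Start from the EV0 exposure estimated for normal exposure and
    # derive the under-exposed variants: EV-n scales exposure by 2**n.
    params = {"EV0": ev0_exposure}
    for step in ev_steps:
        params[f"EV{step}"] = ev0_exposure * (2.0 ** step)
    return params
```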
For example, taking Stagger HDR as an example, as shown in fig. 22, the electronic device acquires images of the three exposure parameters EV0, EV-2, and EV-4 in each acquisition period before receiving a photographing operation.
For another example, taking DCG as an example, the electronic device acquires images of EV0, EV-2, and EV-4, three exposure parameters, at each acquisition cycle, before receiving a photographing operation, as shown in FIG. 23.
S604, the electronic equipment receives a first operation for photographing by a user.
The specific implementation process of S604 may refer to the content of S404, which is not described herein.
S605, the electronic device fuses the first acquired images to generate a composite image.
The first collected image is an image collected in a photographing preview mode, that is, the first collected image is an image of the N exposure parameters collected in S603. Specifically, the first collected image may be an image recently stored in the ZSL buffer before the first operation of the user is received. For example, in fig. 22, the first captured image includes a 4-frame EV0 image, a 4-frame EV-2 image, and a 4-frame EV-4 image that the electronic device recently stored in the ZSL buffer before receiving the photographing operation. As another example, in fig. 23, the first captured image includes a 4-frame EV0 image, a 4-frame EV-2 image, and a 4-frame EV-4 image that the electronic device recently stored in the ZSL buffer before receiving the photographing operation.
Specifically, the electronic device may fuse the first acquired image according to various HDR technologies to obtain a composite image. In the embodiment of the present application, the technology adopted for fusing the first acquired image by the electronic device may not be limited.
In addition, in one implementation, as shown in fig. 24, the method further includes:
s606, judging whether each image in the first acquired image meets a preset condition.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs.
Further, on the one hand, after determining that each image in the first collected image meets the preset condition through S606, the electronic device further executes S605, that is, fuses the first collected images to generate a composite image.
On the other hand, if it is determined that the first collected image includes an image that does not satisfy the preset condition in S606, as shown in fig. 24, the method further includes:
s607, the electronic equipment acquires a fourth acquired image.
Wherein the fourth acquired image comprises images of N exposure parameters required for synthesizing the HDR image.
And S608, the electronic equipment generates a composite image according to the fourth acquired image.
The specific implementation process and the achieved beneficial effects of S606-S608 may refer to the corresponding descriptions of S407-S409 above, and will not be repeated here.
In another implementation, as shown in fig. 25, the method further includes:
s609, the electronic device determines a first acquired image meeting preset conditions from the P frame images.
The P frame image is an image acquired in the photographing preview mode, that is, the P frame image consists of P frames of the images of the N exposure parameters acquired in S603. Specifically, the P frame image may be the P frames most recently stored in the ZSL buffer before receiving the first operation of the user.
Wherein, the preset condition may include at least one of: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot. The first threshold and the second threshold can be determined according to actual needs. The specific implementation process for determining whether the image satisfies the preset condition may refer to the corresponding description in S407, which is not repeated herein.
For example, after receiving the first operation, the electronic device first reads the P-frame image before receiving the first operation from the ZSL buffer, for example, the electronic device in fig. 26 reads 18 frames (including 6 frames EV0 image, 6 frames EV-2 image, and 6 frames EV-4 image) acquired in 6 acquisition periods before receiving the first operation (i.e., the photographing operation in the drawing). Then, 4-frame EV0 images, 1-frame EV-2 images, and 1-frame EV-4 images satisfying preset conditions are selected from the 18-frame images as first acquisition images, so that the first acquisition images are fused to generate an HDR image.
As a further example, as shown in fig. 27, the electronic device reads the 18 frames of images (including 6 frames of EV0 images, 6 frames of EV-2 images, and 6 frames of EV-4 images) acquired in the 6 acquisition periods before receiving the first operation (i.e., the photographing operation in the figure). Then, 4-frame EV0 images, 1-frame EV-2 images, and 1-frame EV-4 images satisfying the preset conditions are selected from the 18 frames as the first acquired image, so that the first acquired image is fused to generate an HDR image.
In addition, in one implementation, as shown in fig. 28, the method further includes:
s610, displaying an image generated by fusing the images with the Q exposure parameters in a photographing preview mode. Wherein Q is less than N.
In one possible design, the Q exposure parameters may include exposure parameters having the smallest exposure brightness and the largest exposure brightness of the N exposure parameters.
For example, as shown in fig. 26, before receiving a photographing operation, the electronic device may acquire images of 3 exposure parameters (i.e., N is 3) in a photographing preview mode. In this case, if the images obtained by fusing the images of the 3 exposure parameters are displayed as preview images on the interface, it is necessary to occupy a large amount of hardware resources, and there is a possibility that the display is not smooth.
Therefore, in the design, when the preview image is displayed in the photographing preview mode, the preview image is generated by adopting the mode of fusing the images with fewer types of exposure parameters and displaying the preview image, so that the effect of displaying the preview image more efficiently and smoothly in the photographing preview mode is achieved.
For example, continuing the above example, in the case where the electronic device collects images of 3 exposure parameters in the photographing preview mode, with the above design of the present application, the collected images of two of the exposure parameters may be fused to generate a preview image, and the preview image may be displayed. Specifically, the two exposure parameters used for the fusion may be the two with the minimum exposure brightness and the maximum exposure brightness among the 3 exposure parameters (i.e., the exposure parameters of the EV-4 image and the EV0 image).
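For integer EV values, selecting the Q=2 exposure parameters with the smallest and largest exposure brightness reduces to taking the minimum and maximum; a trivial sketch of this design choice:

```python
def pick_preview_exposures(evs):
    # Q=2 design: fuse only the darkest (smallest EV, e.g. EV-4) and
    # brightest (largest EV, e.g. EV0) exposures for the preview image,
    # keeping preview rendering cheap and smooth.
    return [min(evs), max(evs)]
```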
In the process of synthesizing an HDR image, as shown in fig. 29, the electronic device first acquires a first acquired image in the photographing preview mode (i.e. S201 in the figure). Then, after receiving a first operation of the user for photographing, a composite image is generated using the first acquired image (i.e., S202 in the figure). In this way, the time required for capturing images after receiving the photographing operation can be saved, thereby accelerating photographing and improving the user experience.
It will be appreciated that the electronic device includes corresponding hardware structures and/or software modules that perform the functions in order to implement the corresponding functions. According to the embodiment of the application, the functional modules of the electronic equipment are divided according to the method. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. Optionally, the division of the modules in the embodiments of the present application is schematic, which is merely a logic function division, and other division manners may be actually implemented.
Fig. 30 is a schematic diagram of an electronic device according to an embodiment of the present application. The electronic device 70 may be a chip or a system on a chip. The electronic device 70 includes:
an image acquisition unit 701, configured to acquire an image in a photographing preview mode;
the image processing unit 702 is configured to generate a composite image using an M-frame image including a first captured image after receiving a first operation for photographing by a user.
The first collected image is an image collected in a photographing preview mode, the first collected image comprises at least one image with exposure parameters, M frames of images comprise N images with exposure parameters, and M and N are positive integers larger than 1 respectively.
Optionally, the image acquisition unit 701 is further configured to acquire a second acquired image after receiving the first operation.
The image processing unit 702, configured to generate, after receiving a first operation for photographing by a user, a composite image using an M-frame image including a first acquired image, includes: the image processing unit is specifically used for generating a composite image by utilizing the first acquired image and the second acquired image; the first acquired image and the second acquired image comprise images of N exposure parameters.
Optionally, the first acquired image is an image of a first exposure parameter; the second acquired image includes an image of an exposure parameter other than the first exposure parameter among the N exposure parameters.
Optionally, the first exposure parameter is a first exposure parameter under the condition of normal exposure of the image.
Optionally, the first acquired image includes: an image of W exposure parameters; the second acquired image includes images of exposure parameters other than the W exposure parameters among the N exposure parameters.
Optionally, the image capturing unit 701 is configured to capture an image in a photographing preview mode, including: the image acquisition unit is specifically used for acquiring images of N exposure parameters in a photographing preview mode.
The image processing unit 702, configured to generate, after receiving a first operation for photographing by a user, a composite image using an M-frame image including a first acquired image, includes: the image processing unit is specifically used for fusing the first acquired image to generate a composite image; the first acquired image includes images of N exposure parameters.
Optionally, the electronic device 70 further includes: and a display unit 703 for displaying the preview image in the photographing preview mode. The preview image is an image generated by fusing the images of the Q exposure parameters. Wherein Q is less than N.
Optionally, the first acquired image is an image acquired using an interleaved high dynamic range Stagger HDR technique.
Optionally, the first acquired image is an image acquired using a dual conversion gain DCG technique.
Optionally, the image processing unit 702 is further configured to determine whether the first acquired image meets a preset condition; the preset conditions include at least one of the following: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
The image processing unit 702, configured to generate, after receiving a first operation for photographing by a user, a composite image using an M-frame image including a first acquired image, includes: and the image processing unit is used for generating a composite image by using the M frame image containing the first acquired image after determining that the first acquired image meets the preset condition.
Optionally, the first acquired image is an image last stored in the zero-second-delay (ZSL) buffer before receiving a first operation of a user for photographing.
Optionally, the image processing unit 702 is further configured to determine a first acquired image that meets a preset condition from the P-frame images.
The P frame image is an image acquired in a photographing preview mode, and the preset conditions comprise at least one of the following: the moving speed of the object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot.
Optionally, the P frame image is an image most recently stored in the zero-second-delay (ZSL) buffer before receiving the first operation of the user for photographing.

The embodiments of the present application also provide a computer readable storage medium having instructions stored therein that, when executed, perform the methods provided by the embodiments of the present application.
Embodiments of the present application also provide a computer program product comprising instructions. Which when executed on a computer, causes the computer to perform the methods provided by the embodiments of the present application.
The functions or acts or operations or steps and the like in the embodiments described above may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented using a software program, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more servers, data centers, etc. that can be integrated with the medium. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to include such modifications and variations as well.
Claims (8)
1. A photographing method, comprising:
in a photographing preview mode, collecting an image;
determining a first acquired image meeting preset conditions from P frame images; the first acquired image is an image stored in a zero-second-delay (ZSL) buffer before receiving a first operation of photographing by a user, the P frame images are images acquired in the photographing preview mode, and the preset conditions comprise at least one of the following: the motion speed of an object in the image is smaller than a first threshold value, the jitter amplitude of the electronic equipment is smaller than a second threshold value when the image is shot, the automatic focusing function is converged when the image is shot, and the automatic exposure function is converged when the image is shot;
After the first operation of photographing by a user is received, storing the first acquired image stored in the ZSL buffer into a memory, and emptying the ZSL buffer;
collecting a second collected image, and storing the second collected image into the ZSL buffer;
generating a composite image using the first acquired image and the second acquired image; the first acquired image and the second acquired image comprise images with N exposure parameters, and N is a positive integer greater than 1.
2. The method of claim 1, wherein the first acquired image is an image of a first exposure parameter; the second acquired image includes an image of an exposure parameter other than the first exposure parameter of the N exposure parameters.
3. The method of claim 2, wherein the first exposure parameter is a first exposure parameter under normal exposure of the image.
4. The method of claim 1, wherein the first acquired image comprises: an image of W exposure parameters; the second acquired image includes images of exposure parameters other than the W exposure parameters among the N exposure parameters.
5. The method of claim 4, wherein the first acquired image is an image acquired using a staggered high dynamic range (stagger HDR) technique.
6. The method of claim 4, wherein the first acquired image is an image acquired using a dual conversion gain (DCG) technique.
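Claims 1–6 culminate in generating a composite image from images at N exposure parameters, but the patent does not specify the fusion algorithm. As a non-authoritative sketch, one common approach is a well-exposedness-weighted blend (a greatly simplified Mertens-style exposure fusion); the function below operates on flat lists of grayscale pixel values in [0, 1], one list per exposure:

```python
def fuse_exposures(frames, eps=1e-6):
    """Blend equal-length pixel lists, one per exposure parameter.

    Each composite pixel is a weighted average across exposures, where
    a pixel's weight favors mid-tones (values near 0.5) and penalizes
    under- and over-exposed values; eps keeps weights strictly positive.
    """
    length = len(frames[0])
    out = []
    for i in range(length):
        num = den = 0.0
        for img in frames:
            p = img[i]
            w = 1.0 - abs(p - 0.5) * 2.0 + eps  # well-exposedness weight
            num += w * p
            den += w
        out.append(num / den)
    return out
```

With a clipped-to-black pixel in one exposure and a mid-gray pixel in another, the composite follows the well-exposed value, which is the behavior a multi-exposure composite is after.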
7. An electronic device, comprising: one or more processors coupled with one or more memories, the one or more memories storing a computer program;
wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the photographing method according to any one of claims 1-6.
8. A computer-readable storage medium, comprising: computer software instructions;
wherein the computer software instructions, when run on a computer, cause the computer to perform the photographing method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110681582.4A CN113382169B (en) | 2021-06-18 | 2021-06-18 | Photographing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110681582.4A CN113382169B (en) | 2021-06-18 | 2021-06-18 | Photographing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113382169A CN113382169A (en) | 2021-09-10 |
CN113382169B (en) | 2023-05-09
Family
ID=77577968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110681582.4A Active CN113382169B (en) | 2021-06-18 | 2021-06-18 | Photographing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113382169B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114143461B (en) * | 2021-11-30 | 2024-04-26 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN116452475B (en) * | 2022-01-10 | 2024-05-31 | 荣耀终端有限公司 | Image processing method and related device |
CN115526787B (en) * | 2022-02-28 | 2023-10-20 | 荣耀终端有限公司 | Video processing method and device |
CN116723418B (en) * | 2022-02-28 | 2024-04-09 | 荣耀终端有限公司 | Photographing method and related device |
CN115297254B (en) * | 2022-07-04 | 2024-03-29 | 北京航空航天大学 | Portable high dynamic imaging fusion system under high radiation condition |
CN115499579B (en) * | 2022-08-08 | 2023-12-01 | 荣耀终端有限公司 | Zero second delay ZSL-based processing method and device |
CN116055890B (en) * | 2022-08-29 | 2024-08-02 | 荣耀终端有限公司 | Method and electronic device for generating high dynamic range video |
CN115842955A (en) * | 2022-09-20 | 2023-03-24 | Oppo广东移动通信有限公司 | Photographing method, photographing device and electronic device |
CN117135468B (en) * | 2023-02-21 | 2024-06-07 | 荣耀终端有限公司 | Image processing method and electronic equipment |
CN116389885B (en) * | 2023-02-27 | 2024-03-26 | 荣耀终端有限公司 | Shooting method, electronic equipment and storage medium |
CN117156261B (en) * | 2023-03-28 | 2024-07-02 | 荣耀终端有限公司 | Image processing method and related equipment |
CN116996762B (en) * | 2023-03-29 | 2024-04-16 | 荣耀终端有限公司 | Automatic exposure method, electronic equipment and computer readable storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI530911B (en) * | 2014-02-25 | 2016-04-21 | 宏碁股份有限公司 | Dynamic exposure adjusting method and electronic apparatus using the same |
CN105376473A (en) * | 2014-08-25 | 2016-03-02 | 中兴通讯股份有限公司 | Photographing method, device and equipment |
CN106060422B (en) * | 2016-07-06 | 2019-02-22 | 维沃移动通信有限公司 | A kind of image exposure method and mobile terminal |
CN107197169B (en) * | 2017-06-22 | 2019-12-06 | 维沃移动通信有限公司 | high dynamic range image shooting method and mobile terminal |
WO2019071613A1 (en) * | 2017-10-13 | 2019-04-18 | 华为技术有限公司 | Image processing method and device |
CN109951633B (en) * | 2019-02-18 | 2022-01-11 | 华为技术有限公司 | Method for shooting moon and electronic equipment |
- 2021-06-18 CN CN202110681582.4A patent/CN113382169B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113382169A (en) | 2021-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113382169B (en) | Photographing method and electronic equipment | |
EP3893491A1 (en) | Method for photographing the moon and electronic device | |
JP7403551B2 (en) | Recording frame rate control method and related equipment | |
WO2023015981A1 (en) | Image processing method and related device therefor | |
CN113452898B (en) | Photographing method and device | |
CN115526787B (en) | Video processing method and device | |
CN109729274A (en) | Image processing method, device, electronic equipment and storage medium | |
EP4344240A1 (en) | Camera switching method, and electronic device | |
CN116095476B (en) | Camera switching method and device, electronic equipment and storage medium | |
CN113660408B (en) | Anti-shake method and device for video shooting | |
CN116055890B (en) | Method and electronic device for generating high dynamic range video | |
CN111953899B (en) | Image generation method, image generation device, storage medium, and electronic apparatus | |
CN110012227B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN115633262B (en) | Image processing method and electronic device | |
CN116055857A (en) | Photographing method and electronic equipment | |
CN113572948B (en) | Video processing method and video processing device | |
CN114666455A (en) | Shooting control method and device, storage medium and electronic device | |
CN117082340B (en) | High dynamic range mode selection method, electronic equipment and storage medium | |
CN117135257B (en) | Image display method, electronic equipment and computer readable storage medium | |
CN116668836B (en) | Photographing processing method and electronic equipment | |
WO2023160230A9 (en) | Photographing method and related device | |
JP2020184669A (en) | Image processing system, imaging apparatus, image processing method, program | |
CN116668837A (en) | Method for displaying thumbnail images and electronic device | |
CN115460343A (en) | Image processing method, apparatus and storage medium | |
CN106993138B (en) | Time-gradient image shooting device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||