CN113747072A - Shooting processing method and electronic equipment - Google Patents


Info

Publication number
CN113747072A
Authority
CN
China
Prior art keywords
multimedia material
preview
input
shooting
received
Prior art date
Legal status
Granted
Application number
CN202111067133.7A
Other languages
Chinese (zh)
Other versions
CN113747072B (en)
Inventor
邓智桂
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111067133.7A
Publication of CN113747072A
Priority to PCT/CN2022/117547 (published as WO2023036179A1)
Application granted
Publication of CN113747072B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Abstract

The application discloses a shooting processing method and electronic equipment, belonging to the field of electronic equipment. The method includes the following steps: receiving a first input for starting a delayed photography mode; in response to the first input, acquiring image data of N preview images corresponding to the current shooting interface, wherein N is a positive integer greater than 1; and displaying a first preview multimedia material, wherein the first preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images in the delayed photography mode.

Description

Shooting processing method and electronic equipment
Technical Field
The application belongs to the field of electronic equipment, and particularly relates to a shooting processing method and electronic equipment.
Background
With the technical development of mobile phone cameras, mobile phones support more and more shooting functions. Delayed photography is one of the shooting functions commonly implemented in mobile phones. When shooting video of a moving object, the delayed photography mode allows later processing, or the addition of the object's motion trajectory by means of long exposure together with striking special effects on that trajectory, so as to make the video more impressive to viewers.
At present, in the delayed photography mode, the user has to wait until shooting is completed before the shooting effect can be checked, and if the user is not satisfied with the effect, shooting has to be performed again. Because the shooting time of delayed photography is long, this is inconvenient for the user. Therefore, the shooting convenience of the current delayed photography mode of electronic devices is low.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting processing method and an electronic device, which can solve the problem that the shooting convenience of the delayed photography mode of existing electronic devices is low.
In a first aspect, an embodiment of the present application provides a shooting processing method, including:
receiving a first input for starting a delayed shooting mode;
responding to the first input, and acquiring image data of N preview images corresponding to the current shooting interface, wherein N is a positive integer greater than 1;
displaying the first preview multimedia material; and the first preview multimedia material is obtained by synthesizing at least part of image data in the N preview images in the delayed photography mode.
In a second aspect, an embodiment of the present application provides a shooting processing apparatus, including:
the first receiving module is used for receiving a first input for starting a time-delay shooting mode;
the first acquisition module is used for responding to the first input and acquiring image data of N preview images corresponding to the current shooting interface, wherein N is a positive integer greater than 1;
the first display module is used for displaying the first preview multimedia material; and the first preview multimedia material is obtained by synthesizing at least part of image data in the N preview images in the delayed photography mode.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, when receiving a first input for starting the delayed photography mode, the electronic device may acquire image data of N preview images corresponding to the current shooting interface, synthesize at least part of the image data of the N preview images in the delayed photography mode, and generate and display a first preview multimedia material. The user can thus learn the current preview effect of the delayed photography by watching the first preview multimedia material, instead of having to wait until the delayed shooting is completed before the shooting effect can be known, which improves the convenience of the delayed photography mode.
Drawings
Fig. 1 is a first schematic flowchart of a shooting processing method provided in an embodiment of the present application;
Fig. 2 is a first schematic diagram of a preview image provided in an embodiment of the present application;
Fig. 3 is a second schematic diagram of a preview image provided in an embodiment of the present application;
Fig. 4 is a first schematic diagram of a shooting interface provided in an embodiment of the present application;
Fig. 5 is a second schematic diagram of a shooting interface provided in an embodiment of the present application;
Fig. 6 is a third schematic diagram of a shooting interface provided in an embodiment of the present application;
Fig. 7 is a fourth schematic diagram of a shooting interface provided in an embodiment of the present application;
Fig. 8 is a second schematic flowchart of a shooting processing method provided in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a shooting processing apparatus provided in an embodiment of the present application;
Fig. 10 is a first schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 11 is a second schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of the objects they qualify; for example, a first object may be one object or more than one object. Furthermore, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The shooting processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
In delayed photography, a group of photos is usually captured and then strung together and synthesized into a video at a later stage, so that a process lasting minutes, hours or even days is compressed into a short video or presented as a motion-trajectory effect image. Therefore, after the user clicks the start-shooting button, the electronic device usually captures and stores a plurality of images continuously, and when the user clicks the end-shooting button, combines these images to obtain a video or an effect image. Because the shooting time is long, the user usually has to wait a long time after the delayed shooting before being able to check the shooting effect.
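As a rough illustration of the degree of compression involved, here is a minimal Python sketch with assumed, purely illustrative numbers; the capture interval, shooting duration and playback frame rate below are not taken from the embodiments:

    # Illustrative arithmetic only: all parameter values are assumptions.
    capture_interval_s = 2          # one photo every 2 seconds
    shoot_duration_s = 60 * 60      # shoot for one hour
    playback_fps = 30               # play back the synthesized video at 30 fps

    num_frames = shoot_duration_s // capture_interval_s   # 1800 photos
    video_length_s = num_frames / playback_fps            # 60.0 s of video

    print(f"{num_frames} frames -> {video_length_s:.0f} s time-lapse video")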
Based on this, in order to improve convenience of the time-lapse photography, referring to fig. 1, an embodiment of the present application provides a shooting processing method, including:
step 101, a first input to start a delayed photography mode is received.
And 102, responding to the first input, and acquiring image data of N preview images corresponding to the current shooting interface, wherein N is a positive integer greater than 1.
Step 103, displaying the first preview multimedia material; and the first preview multimedia material is obtained by synthesizing at least part of image data in the N preview images in the delayed photography mode.
In step 101, the delayed shooting mode may be a function mode carried by a camera application program in the electronic device, and the user may start the delayed shooting mode by starting the camera application program, clicking a function menu, and the like.
Accordingly, the first input may be an input by which the user instructs the electronic device to start the delayed photography mode. For example, in some embodiments, the first input may be a touch operation such as a single click, a double click, or a long press performed by the user on a function menu of the camera application. In some embodiments, the first input may also be a non-touch input such as a voice input or a gesture input directed at the camera application, so that the user can start the delayed photography mode when a touch operation is inconvenient.
Of course, in other alternative embodiments, the first input may also be a parameter change of the electronic device itself; for example, when the electronic device detects that its position, orientation, or tilt angle meets a preset condition, it determines that the first input has been received and starts the delayed photography mode, which is not further limited herein.
It should be noted that the first input is only an input indicating to start the delayed shooting mode, and at this time, the electronic device may display a preview image captured by the current camera on the shooting interface, so that the user may adjust shooting parameters.
Since the electronic device may display the preview image captured by the current camera on the shooting interface, in step 102, the electronic device may acquire image data of N preview images corresponding to the current shooting interface in response to the first input.
It should be understood that the step of acquiring the N preview images by the electronic device may be understood as acquiring the N preview images within a first time period. The first time period is a time period after the first input. In some embodiments, the first time period may be a preset time period, for example, 3 s, 5 s, or 10 s. That is, the electronic device may acquire image data of N preview images in each cycle of the first time period until the electronic device receives an input indicating to start shooting. In some embodiments, the first time period may also be the time period from the first input until the electronic device receives an input instructing it to start shooting.
The N preview images may be images acquired by the electronic device at intervals or continuously within the first time period. In some embodiments, the electronic device may continuously or intermittently capture the preview image displayed on the current display interface, for example, capture and save the preview image of the current display interface every 0.3 s. Referring to fig. 2 to 3, fig. 2 may be the first preview image acquired in the first time period, and fig. 3 may be the Nth preview image acquired in the first time period.
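A minimal Python sketch of this interval-based sampling, assuming a hypothetical grab_preview_frame() that returns the frame currently shown on the shooting interface and a hypothetical start_shooting_requested() flag (neither name comes from the embodiments):

    import time

    def collect_preview_frames(grab_preview_frame, start_shooting_requested,
                               interval_s=0.3, first_period_s=5.0):
        """Sample the preview every interval_s seconds during the first time period.

        Sampling stops after first_period_s seconds or as soon as the user
        asks to start shooting, whichever comes first.
        """
        frames = []
        deadline = time.monotonic() + first_period_s
        while time.monotonic() < deadline and not start_shooting_requested():
            frames.append(grab_preview_frame())  # image data of one preview image
            time.sleep(interval_s)
        return frames                            # the N preview images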
After the image data of the N preview images are acquired, the electronic device may perform synthesis processing on the image data of the N preview images based on a processing method in a delayed shooting mode, and synthesize the N preview images into a video or an effect graph.
Of course, in some embodiments, the electronic device may filter the N preview images, for example, delete blurred images therein, so as to improve the preview effect of the synthesized preview images.
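A sketch of the screening and synthesis just described, written in Python with OpenCV purely for illustration (the embodiments do not name any library); blurred frames are dropped with a simple Laplacian-variance test before the remaining frames are joined into a short preview clip. The output path, frame rate and blur threshold are assumptions chosen only for the example:

    import cv2

    def synthesize_preview(frames, out_path="preview.mp4",
                           fps=30, blur_threshold=100.0):
        """Drop blurred frames, then join the rest into a preview video."""
        kept = []
        for frame in frames:                     # frames: list of BGR images (numpy arrays)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:      # keep only reasonably sharp frames
                kept.append(frame)

        if not kept:
            return None

        h, w = kept[0].shape[:2]
        writer = cv2.VideoWriter(out_path,
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for frame in kept:
            writer.write(frame)
        writer.release()
        return out_path                          # the first preview multimedia material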
In step 103, the first preview multimedia material may be a video or an effect image obtained by synthesizing the image data of the N preview images in the delayed shooting mode.
Referring to fig. 4, in some embodiments, the first preview multimedia material may be displayed in a target area of a current shooting interface, and the display manner may be a stacked display, and of course, in some embodiments, the electronic device may also generate a new display area for the first preview multimedia material.
It can be understood that, in order to keep the first preview multimedia material up to date, in some embodiments, when the first time period is a preset time period, the electronic device may update the first preview multimedia material once every first time period, that is, synthesize the image data of the N most recently acquired preview images and display the newly synthesized result. The updated display may replace the previously generated first preview multimedia material, or each generated first preview multimedia material may be added to a material list displayed on the current shooting interface, which is not limited herein.
Of course, after each preview image is acquired, the electronic device may perform a synthesizing process on image data of all currently acquired preview images to obtain and display a first preview multimedia material, so that the first preview multimedia material can be updated and displayed in real time.
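One way to realize this per-frame refresh, sketched under the same assumptions as above (the hypothetical synthesize_preview helper from the previous sketch, plus a hypothetical show_in_target_area callback for the shooting interface):

    def on_new_preview_frame(frame, collected_frames, show_in_target_area):
        """Append the latest preview frame and refresh the preview material."""
        collected_frames.append(frame)
        # Re-run the time-lapse synthesis over everything collected so far,
        # then show the result in the target area of the shooting interface.
        material = synthesize_preview(collected_frames)
        if material is not None:
            show_in_target_area(material)

Resynthesizing from scratch keeps the sketch simple; an implementation could equally append to an open video writer incrementally.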
In the embodiments of the present application, when receiving a first input for starting the delayed photography mode, the electronic device may acquire image data of N preview images corresponding to the current shooting interface, synthesize at least part of the image data of the N preview images in the delayed photography mode, and generate and display a first preview multimedia material. The user can thus learn the current preview effect of the delayed photography by watching the first preview multimedia material, instead of having to wait until the delayed shooting is completed before the shooting effect can be known, which improves the convenience of the delayed photography mode.
Optionally, after the step 103, the method further includes:
receiving a second input indicating to start photographing;
acquiring image data of M images shot based on the second input in response to the second input, wherein M is a positive integer greater than 1;
displaying a second preview multimedia material if a third input is received for the first preview multimedia material before the second input is received;
if the third input is not received before the second input is received, displaying a third preview multimedia material;
if a fourth input for the first preview multimedia material is received after the second input is received, displaying the second preview multimedia material;
the second preview multimedia material is obtained by synthesizing at least part of the image data of the M images in the delayed photography mode; and the third preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images and at least part of the image data of the M images in the delayed photography mode.
In the embodiments of the present application, the second input may be an input by which the user instructs the electronic device to start shooting. It can be understood that the second input is similar to the first input; for example, the second input may be a touch operation in which the user clicks a shooting button, a non-touch operation, or a parameter change of the electronic device itself, and details are not repeated here.
After receiving the second input, the electronic device may acquire the image data of the M images based on the second input, that is, the electronic device may acquire the image data of the M images through M shots within a second time period. Similar to the first time period, the second time period may be a preset time period or the time period from the start of shooting to the stop of shooting. The M images may be images captured continuously or at intervals within the second time period, and are not described here again.
It can be understood that, after the electronic device displays the first preview multimedia material, the user may choose to retain or delete the image data of the N preview images corresponding to the first preview multimedia material, and thereby choose whether subsequent shooting should be based on that image data.
The third input may be an input indicating that the user gives up using the first preview multimedia material before the electronic device receives the second input. Referring to fig. 4, in a specific embodiment, a "×" button for deletion may be displayed on the first preview multimedia material, and the user may click the "×" button to delete, for example, the first preview multimedia material and the image data of the N preview images corresponding to it.
If the third input is received before the second input, the electronic device may delete the first preview multimedia material and the image data of the N preview images corresponding to it, and perform the time-lapse shooting when the second input is received. During this process, the electronic device can display in real time a second preview multimedia material obtained by synthesizing the image data of the M images in the delayed photography mode, so that the user can see the shooting effect for the shooting duration so far. For the specific display and update manner, reference may be made to the display and update manner of the first preview multimedia material, which is not repeated here.
Of course, the electronic device may also screen the M images to remove the image with the low shooting quality, so as to improve the final shooting effect.
If the electronic device does not receive the third input before receiving the second input, the electronic device may, upon receiving the second input, perform delayed shooting based on the image data of the N preview images corresponding to the first preview multimedia material. Specifically, at least part of the image data of the N preview images and at least part of the image data of the M images may be combined to obtain the third preview multimedia material.
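A minimal sketch of how the third preview multimedia material could be composed, reusing the hypothetical synthesize_preview helper above; concatenating the retained preview frames ahead of the frames captured after the second input is an assumed ordering, not one fixed by the embodiments:

    def build_third_preview(preview_frames, captured_frames):
        """Combine the N preview frames with the M captured frames."""
        combined = list(preview_frames) + list(captured_frames)
        return synthesize_preview(combined, out_path="third_preview.mp4")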
The fourth input may be understood as an input by which the user gives up using the image data of the N preview images after the electronic device has received the second input. The specific input type of the fourth input is similar to that of the third input and is not described here again.
The electronic device may receive a fourth input of the user for the first preview multimedia material after receiving the second input, so as to abandon the use of the image data of the N preview images corresponding to the first preview multimedia material. At this time, the electronic device may delete the image data of the N preview images and terminate the display of the first preview multimedia material.
It should be understood that the fourth input can trigger the display of the second preview multimedia material when the electronic device has not received the third input. For example, in a specific embodiment, the electronic device may first display the first preview multimedia material, detect whether a third input occurs before the user performs the second input, and, if no third input occurs, display the third preview multimedia material while continuing to display the first preview multimedia material. Thereafter, if a fourth input is received, the display of the first preview multimedia material and the third preview multimedia material may be terminated and the second preview multimedia material displayed, or the second preview multimedia material may be displayed together with the third preview multimedia material.
That is, the electronic device may simultaneously display at least one of the first preview multimedia material, the second preview multimedia material, and the third preview multimedia material during the shooting process.
If the electronic device receives the third input before the user performs the second input, the electronic device may terminate the display of the first preview multimedia material; after the second input, the electronic device may display only the second preview multimedia material, and the user does not need to perform the fourth input.
It should be noted that, when the first time period is a preset time period and the time from starting the delayed photography mode to starting shooting is longer than the first time period, the user may select among the first preview multimedia materials corresponding to the multiple first time periods; the electronic device then determines the N preview images according to the selected first preview multimedia material and combines them with the M images.
The display and update mode of the third preview multimedia material is similar to that of the first preview multimedia material and the second preview multimedia material, and is not repeated herein to avoid repetition.
In the embodiments of the present application, when receiving the second input indicating to start shooting, the electronic device can determine, from whether the third input was received before shooting and whether the fourth input is received after shooting, whether the user wants to shoot based on the first preview multimedia material obtained before shooting started, and can accordingly display the second preview multimedia material or the third preview multimedia material. The user can therefore choose the shooting manner as needed and watch the shooting effect in real time during shooting, which further improves the user's shooting experience.
Optionally, the method further comprises:
saving the first preview multimedia material upon receiving a fifth input for the first preview multimedia material;
saving the second preview multimedia material upon receiving a sixth input for the second preview multimedia material;
saving the third preview multimedia material upon receiving a seventh input by the user for the third preview multimedia material.
According to the content, the electronic equipment can display the shooting effect in the current shooting time length in real time in the shooting process through the second preview multimedia material and the third preview multimedia material. In this embodiment, the user may save the second preview multimedia material through the sixth input, and save the third preview multimedia material through the seventh input.
For example, in a specific embodiment, the sixth input may be a touch operation on a display area of the second preview multimedia material. Similarly, the seventh input may be a touch operation on a display area of the third preview multimedia material. After the second preview multimedia material or the third preview multimedia material is stored, the user can view and acquire the corresponding multimedia material through an application program such as a gallery.
Of course, in some embodiments, the electronic device may also continue to display the first preview multimedia material during the shooting process, and the user may save the first preview multimedia material through a fifth input.
In the embodiment of the application, the user can store the first preview multimedia material, the second preview multimedia material and the third preview multimedia material in real time in the shooting process, so that the user can be prevented from missing a required shooting effect, and the shooting experience is further improved.
Optionally, after the step of acquiring image data of M images corresponding to the current shooting interface in the second time period, the method further includes:
receiving an eighth input indicating termination of photographing;
in response to the eighth input, in the event that a ninth input for the first preview multimedia material is received, outputting a first multimedia material corresponding to the first preview multimedia material;
under the condition that a tenth input aiming at the second preview multimedia material is received, outputting the second multimedia material corresponding to the second preview multimedia material;
and outputting a third multimedia material corresponding to the third preview multimedia material if an eleventh input for the third preview multimedia material is received.
In the embodiments of the present application, similar to the second input, the eighth input may be an input by which the user instructs the electronic device to terminate shooting, for example a touch operation on a terminate-shooting key, a non-touch operation, a parameter change of the electronic device itself, or the like. As described above, the electronic device may simultaneously display at least one of the first preview multimedia material, the second preview multimedia material, and the third preview multimedia material during shooting, and after receiving the eighth input, the electronic device may receive an input performed by the user on the displayed first, second, or third preview multimedia material, so as to determine the multimedia material that is finally output.
It will be appreciated that the first multimedia material that is finally output may be the first preview multimedia material displayed when the user performs the input on it. The electronic device may also display first preview multimedia materials corresponding to multiple shooting time nodes, from which the user selects the first multimedia material to be output. Before outputting the first multimedia material, the electronic device may also process the first preview multimedia material itself or edit it according to received user input, and finally output a first multimedia material with the desired effect. The output modes include, but are not limited to, sending the first multimedia material to another terminal device, storing it in a preset storage path, or displaying it on a preset display interface. The output modes of the second multimedia material and the third multimedia material are similar to those of the first multimedia material and are not repeated here.
In the embodiments of the present application, the user can tap to terminate the shooting operation during shooting and can choose which multimedia material to output by performing an input on the first preview multimedia material, the second preview multimedia material, or the third preview multimedia material. The user can thus see the current shooting effect in real time and decide whether to output it, which reduces the technical difficulty of shooting and improves the user's shooting experience.
Optionally, after the step 102, the method further includes:
displaying a fourth preview multimedia material, wherein the fourth preview multimedia material is generated by predicting, through artificial intelligence (AI), at least part of the image data of the N preview images, and is used to describe the multimedia material that would be obtained by synthesis processing in the delayed photography mode after the current shooting interface has been shot for a third time period;
under the condition that a twelfth input of the user for the fourth preview multimedia material is received, shooting is started, and the residual shooting duration and the residual shooting number in the delayed shooting mode are displayed;
outputting a fourth multimedia material corresponding to the fourth preview multimedia material when the residual shooting duration is 0;
and determining the residual shooting duration according to the third time period and the current shooting duration, and determining the residual shooting number according to the third time period and the current shooting interval.
In the embodiment of the present application, the electronic device may predict the motion trajectory of the object by using an AI technique according to the N preview images acquired in step 102, so as to generate a multimedia material describing the effect of the predicted trajectory.
The electronic device may predict, based on at least part of the image data of the N preview images and using an AI technique, the fourth preview multimedia material, which is used to describe the multimedia material that would be obtained by synthesis processing in the delayed photography mode after the current shooting interface has been shot for a third time period.
For example, if the first time period is 5 s and the third time period is 20 min, the electronic device may predict, based on the preview images acquired within 5 s, the multimedia material that should be generated after 20 min, thereby generating the fourth preview multimedia material. Referring to fig. 5, the image displayed at the lower side of fig. 5 is the fourth preview multimedia material; it can be seen that the fourth preview multimedia material predicts the motion trajectory of the object according to the N preview images, that is, the motion trajectory is added automatically. It should be understood that the fourth preview multimedia material is only a preview effect image generated by AI prediction; in the end, the user can still obtain the multimedia material corresponding to the fourth preview multimedia material through delayed shooting with a shooting duration equal to the third time period.
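The embodiments do not disclose the prediction model used by the AI. As a purely illustrative stand-in, the Python sketch below extrapolates an object's centroid linearly from the positions observed in the preview frames; the linear-motion assumption and all names are illustrative, not the AI described here:

    def predict_trajectory(observed_points, third_period_s, interval_s):
        """Linearly extrapolate future (x, y) positions from observed centroids.

        observed_points are (x, y) centroids taken from the N preview images,
        one per acquired frame; at least two points are required.
        """
        (x0, y0), (x1, y1) = observed_points[0], observed_points[-1]
        steps = len(observed_points) - 1
        vx, vy = (x1 - x0) / steps, (y1 - y0) / steps   # displacement per frame
        future_steps = int(third_period_s / interval_s)
        return [(x1 + vx * k, y1 + vy * k) for k in range(1, future_steps + 1)]

The predicted points could then be drawn onto the latest preview frame to produce an effect image resembling the fourth preview multimedia material.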
It can be understood that there may be one or more fourth preview multimedia materials. The electronic device may display multiple fourth preview multimedia materials, each corresponding to a predicted effect image for a different time period, and may determine the fourth preview multimedia material required by the user according to a twelfth input of the user, where the twelfth input may be a click operation on the fourth preview multimedia material. Referring to fig. 5, the multimedia materials shown in fig. 5 may be predicted effect images corresponding to different time periods, and the user can perform time-lapse shooting with different shooting durations by clicking different multimedia materials.
After the electronic device receives the twelfth input, shooting in the delayed photography mode may be performed based on the fourth preview multimedia material. Since the fourth preview multimedia material predicts the multimedia material after the third time period, the electronic device may calculate the remaining shooting duration as the difference between the third time period and the current shooting duration. Meanwhile, since the intervals at which images are acquired in the delayed photography mode are generally equal, the remaining number of shots can be calculated from the preset shooting interval and the remaining shooting duration. Displaying the remaining shooting duration and the remaining number of shots prompts the user with the current shooting progress, so that the user can carry out delayed shooting more efficiently, which improves the shooting experience.
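The two displayed quantities follow directly from the description; a small Python sketch (variable names are illustrative):

    def remaining_progress(third_period_s, elapsed_s, shooting_interval_s):
        """Remaining shooting duration and remaining number of shots."""
        remaining_duration_s = max(third_period_s - elapsed_s, 0)
        remaining_shots = int(remaining_duration_s // shooting_interval_s)
        return remaining_duration_s, remaining_shots

    # Example: a 20-minute target, 5 minutes already shot, one frame every 2 s.
    print(remaining_progress(20 * 60, 5 * 60, 2))   # (900, 450)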
The various optional implementations described in the embodiments of the present application may be implemented in combination with each other or implemented separately without conflicting with each other, and the embodiments of the present application are not limited to this.
For ease of understanding, examples are illustrated below:
referring to fig. 8, fig. 8 is a schematic diagram of a possible process according to an embodiment of the present application, including the following steps:
step 801, reading the gyroscope data of the mobile phone after the user opens the camera and selects the delayed photography mode. When the spirometer data shows that the mobile phone is stable, image data of the preview image starts to be stored, as shown in fig. 2 to 3, fig. 2 is a first preview image, and fig. 3 is an nth preview image.
Step 802: synthesize the stored image data of the N preview images to form a delayed-photography effect image, and display the effect image on the preview interface in real time, as shown in the lower right corner of the mobile phone preview interface in fig. 4.
Step 803, when the user clicks the delayed photography effect image of the preview interface, the image can be directly obtained.
Step 8041: when the user clicks shooting, the user can choose to close the preview effect (for example, the effect window at the lower right corner of the preview interface in fig. 2), that is, the data stored for previewing is discarded (the preview frame still displays the effect during shooting), and the delayed-shooting effect starts to be synthesized from the shooting itself. During this shooting process, the current preview effect can be saved if the user clicks it.
Step 8042: when the user clicks shooting without closing the preview effect, the data stored in advance for previewing is retained and continues to be used to synthesize the image, and the effect is displayed on the shooting interface until the shooting is finished. During this shooting process, the current preview effect can be saved if the user clicks it.
Further, the above step flow may further include:
step 805, after storing the image data of the preview image, automatically adding the motion trail by using AI, and placing the image with the motion trail on a shooting interface, as shown in fig. 5.
Step 806: the user may select among the different effects provided by the AI; after the selection, the user can be prompted on the shooting interface with how many images remain to be captured to complete the shooting, as shown in fig. 6. In addition, the user can be prompted on the shooting interface with how much shooting time remains, as shown in fig. 7.
The embodiments of the application fill the technical gap that the delayed-shooting effect cannot be viewed in real time during delayed shooting, and reduce the technical difficulty of shooting. They can also prompt the user with the remaining number of shots, reduce the time and effort spent on later adjustment, and encourage the user to create in the delayed shooting mode.
In the shooting processing method provided by the embodiments of the present application, the execution subject may be a shooting processing apparatus, or a control module of the shooting processing apparatus for executing the shooting processing method. In the embodiments of the present application, the shooting processing apparatus is described by taking the case where the shooting processing apparatus executes the shooting processing method as an example.
Referring to fig. 9, the present application provides a photographing processing apparatus 900 including:
a first receiving module 901, configured to receive a first input for starting a delayed shooting mode;
a first obtaining module 902, configured to obtain, in response to the first input, image data of N preview images corresponding to a current shooting interface, where N is a positive integer greater than 1;
a first display module 903, configured to display a first preview multimedia material; and the first preview multimedia material is obtained by synthesizing at least part of image data in the N preview images in the delayed photography mode.
In the embodiments of the present application, when the first receiving module 901 receives a first input for starting the delayed photography mode, the electronic device may acquire, through the first obtaining module 902, image data of N preview images corresponding to the current shooting interface, synthesize at least part of the image data of the N preview images in the delayed photography mode, and generate and display a first preview multimedia material through the first display module 903. The user can thus learn the current preview effect of the delayed photography by watching the first preview multimedia material, instead of having to wait until the delayed shooting is completed before the shooting effect can be known, which improves the convenience of the delayed photography mode.
Optionally, the shooting processing apparatus 900 further includes:
a second receiving module for receiving a second input indicating a start of photographing;
a second acquisition module for acquiring, in response to the second input, image data of M images photographed based on the second input, M being a positive integer greater than 1;
a second display module, configured to display a second preview multimedia material if a third input for the first preview multimedia material is received before the second input is received;
a third display module, configured to display a third preview multimedia material if the third input is not received before the second input is received;
a fourth display module, configured to display the second preview multimedia material if a fourth input for the first preview multimedia material is received after the second input is received;
the second preview multimedia material is obtained by synthesizing at least part of image data of the M images in the delayed photography mode; and the third preview multimedia material is obtained by synthesizing the image data of at least part of the N preview images and the image data of at least part of the M preview images in the delayed shooting mode.
Optionally, the shooting processing apparatus 900 further includes:
the first storage module is used for saving the second preview multimedia material under the condition that a sixth input aiming at the second preview multimedia material is received;
and the second storage module is used for saving the third preview multimedia material under the condition that a seventh input aiming at the third preview multimedia material by a user is received.
Optionally, the shooting processing apparatus 900 further includes:
a third receiving module for receiving an eighth input indicating termination of photographing;
the first output module is used for outputting a first multimedia material corresponding to the first preview multimedia material under the condition that a ninth input aiming at the first preview multimedia material is received;
the second output module is used for outputting a second multimedia material corresponding to the second preview multimedia material under the condition that a tenth input aiming at the second preview multimedia material is received;
and the third output module outputs a third multimedia material corresponding to the third preview multimedia material under the condition that an eleventh input aiming at the third preview multimedia material is received.
Optionally, the shooting processing apparatus 900 further includes:
the fourth display module is used for displaying a fourth preview multimedia material, wherein the fourth preview multimedia material is generated by predicting at least part of image data in the N images through Artificial Intelligence (AI), and is used for describing the multimedia material obtained by synthesizing the current shooting interface in the delayed shooting mode after a third time period;
the shooting display module is used for starting to shoot and displaying the residual shooting duration and the residual shooting number in the delayed shooting mode under the condition of receiving twelfth input of a user for the fourth preview multimedia material;
a fourth output module, configured to output a fourth multimedia material corresponding to the fourth preview multimedia material when the remaining shooting duration is 0;
and determining the residual shooting duration according to the third time period and the current shooting duration, and determining the residual shooting number according to the third time period and the current shooting interval.
The shooting processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The shooting processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The shooting processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 8, and is not described here again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 1000 is further provided in this embodiment of the present application, and includes a processor 1001, a memory 1002, and a program or an instruction stored in the memory 1002 and executable on the processor 1001, where the program or the instruction is executed by the processor 1001 to implement each process of the above-mentioned shooting processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The user input unit 1107 is configured to receive a first input to start the delayed shooting mode.
And the processor 1110 is configured to, in response to the first input, obtain image data of N preview images corresponding to the current shooting interface, where N is a positive integer greater than 1.
A display unit 1106 for displaying the first preview multimedia material; and the first preview multimedia material is obtained by synthesizing at least part of image data in the N preview images in the delayed photography mode.
Optionally, the user input unit 1107 is further configured to receive a second input indicating the start of shooting.
Processor 1110 is further configured to, in response to the second input, obtain image data for M images captured based on the second input, M being a positive integer greater than 1.
A display unit 1106, further configured to display a second preview multimedia material if a third input for the first preview multimedia material is received before the second input is received; if the third input is not received before the second input is received, displaying a third preview multimedia material; displaying the second preview multimedia material if a fourth input for the first preview multimedia material is received after the second input is received.
The second preview multimedia material is obtained by synthesizing at least part of image data of the M images in the delayed photography mode; and the third preview multimedia material is obtained by synthesizing the image data of at least part of the N preview images and the image data of at least part of the M preview images in the delayed shooting mode.
Optionally, the processor 1110 is further configured to save the first preview multimedia material if a fifth input for the first preview multimedia material is received; saving the second preview multimedia material upon receiving a sixth input for the second preview multimedia material; saving the third preview multimedia material upon receiving a seventh input by the user for the third preview multimedia material.
Optionally, a user input unit 1107 for receiving an eighth input indicating termination of shooting;
processor 1110, further responsive to the eighth input, if a ninth input for the first preview multimedia material is received, outputting a first multimedia material corresponding to the first preview multimedia material; under the condition that a tenth input aiming at the second preview multimedia material is received, outputting the second multimedia material corresponding to the second preview multimedia material; and outputting a third multimedia material corresponding to the third preview multimedia material if an eleventh input for the third preview multimedia material is received.
Optionally, the method further comprises:
the display unit 1106 is further configured to display a fourth preview multimedia material, where the fourth preview multimedia material is generated by predicting, through an artificial intelligence AI, image data of at least a part of the N images, and is used to describe a multimedia material obtained by performing synthesis processing in the delayed shooting mode after a third time period corresponding to the current shooting interface.
A display unit 1106, configured to start shooting and display the remaining shooting time length and the remaining number of shots in the delayed shooting mode if a twelfth input by the user for the fourth preview multimedia material is received.
The processor 1110 is further configured to output a fourth multimedia material corresponding to the fourth preview multimedia material when the remaining shooting duration is 0.
And determining the residual shooting duration according to the third time period and the current shooting duration, and determining the residual shooting number according to the third time period and the current shooting interval.
It should be understood that in the embodiment of the present application, the input Unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the Graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes a touch panel 11071 and other input devices 11072. A touch panel 11071, also called a touch screen. The touch panel 11071 may include two portions of a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1109 may be used for storing software programs and various data including, but not limited to, application programs and an operating system. Processor 1110 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A shooting processing method, characterized by comprising:
receiving a first input for starting a time-lapse photography mode;
in response to the first input, acquiring image data of N preview images corresponding to a current shooting interface, wherein N is a positive integer greater than 1; and
displaying a first preview multimedia material, wherein the first preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images in the time-lapse photography mode.
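As a non-limiting illustration (not part of the claimed subject matter), the sketch below shows one simple way the synthesis step of claim 1 could be realized: sampling every k-th of the N preview frames to form the time-lapse preview material. The names used here (`Frame`, `synthesizeTimeLapsePreview`, `interval`) are hypothetical, and the claim does not prescribe any particular sampling or encoding scheme.

```kotlin
// Hypothetical sketch only: frame sub-sampling as one way to "synthesize at
// least part of the image data of the N preview images" into a preview.
data class Frame(val index: Int, val timestampMs: Long)

fun synthesizeTimeLapsePreview(previewFrames: List<Frame>, interval: Int): List<Frame> {
    require(previewFrames.size > 1) { "N must be a positive integer greater than 1" }
    require(interval >= 1) { "sampling interval must be at least 1" }
    // Keep every `interval`-th frame; a real implementation would then encode
    // the kept frames into the first preview multimedia material.
    return previewFrames.filterIndexed { i, _ -> i % interval == 0 }
}

fun main() {
    val frames = (0 until 10).map { Frame(it, it * 33L) } // N = 10 preview frames
    val preview = synthesizeTimeLapsePreview(frames, interval = 3)
    println(preview.map { it.index }) // [0, 3, 6, 9]
}
```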
2. The method of claim 1, wherein after the step of displaying the first preview multimedia material, the method further comprises:
receiving a second input indicating a start of shooting;
in response to the second input, acquiring image data of M images shot based on the second input, wherein M is a positive integer greater than 1;
displaying a second preview multimedia material if a third input for the first preview multimedia material is received before the second input is received;
displaying a third preview multimedia material if the third input is not received before the second input is received; and
displaying the second preview multimedia material if a fourth input for the first preview multimedia material is received after the second input is received;
wherein the second preview multimedia material is obtained by synthesizing at least part of the image data of the M images in the time-lapse photography mode, and the third preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images and at least part of the image data of the M images in the time-lapse photography mode.
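For readability only, the decision logic of claim 2 can be modeled as the sequence of preview materials shown to the user. The sketch below is a hypothetical simplification: the second, third, and fourth inputs are reduced to boolean flags, and the names (`PreviewMaterial`, `previewsToDisplay`) are not taken from the patent.

```kotlin
// Hypothetical model of which preview materials are displayed, in order.
enum class PreviewMaterial { SECOND, THIRD }

fun previewsToDisplay(
    thirdInputBeforeShooting: Boolean, // third input on the first preview received before the second input
    fourthInputAfterShooting: Boolean  // fourth input on the first preview received after the second input
): List<PreviewMaterial> {
    val shown = mutableListOf<PreviewMaterial>()
    // Right after the M images are shot:
    shown += if (thirdInputBeforeShooting) PreviewMaterial.SECOND else PreviewMaterial.THIRD
    // A later fourth input switches the display to the second preview material.
    if (fourthInputAfterShooting) shown += PreviewMaterial.SECOND
    return shown
}

fun main() {
    println(previewsToDisplay(thirdInputBeforeShooting = true, fourthInputAfterShooting = false))  // [SECOND]
    println(previewsToDisplay(thirdInputBeforeShooting = false, fourthInputAfterShooting = false)) // [THIRD]
    println(previewsToDisplay(thirdInputBeforeShooting = false, fourthInputAfterShooting = true))  // [THIRD, SECOND]
}
```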
3. The method of claim 2, further comprising:
saving the first preview multimedia material upon receiving a fifth input for the first preview multimedia material;
saving the second preview multimedia material upon receiving a sixth input for the second preview multimedia material;
saving the third preview multimedia material upon receiving a seventh input by the user for the third preview multimedia material.
4. The method according to claim 2, wherein after the step of acquiring the image data of the M images corresponding to the current shooting interface in the second time period, the method further comprises:
receiving an eighth input indicating termination of shooting;
in response to the eighth input, outputting a first multimedia material corresponding to the first preview multimedia material if a ninth input for the first preview multimedia material is received;
outputting a second multimedia material corresponding to the second preview multimedia material if a tenth input for the second preview multimedia material is received; and
outputting a third multimedia material corresponding to the third preview multimedia material if an eleventh input for the third preview multimedia material is received.
5. The method according to any one of claims 1-4, wherein after the step of acquiring the image data of the N preview images corresponding to the current shooting interface, the method further comprises:
displaying a fourth preview multimedia material, wherein the fourth preview multimedia material is generated by predicting, through Artificial Intelligence (AI), at least part of the image data of the N preview images, and is used for describing the multimedia material that would be obtained by synthesizing the current shooting interface in the time-lapse photography mode after a third time period;
starting shooting and displaying a remaining shooting duration and a remaining number of shots in the time-lapse photography mode if a twelfth input of the user for the fourth preview multimedia material is received; and
outputting a fourth multimedia material corresponding to the fourth preview multimedia material when the remaining shooting duration is 0;
wherein the remaining shooting duration is determined according to the third time period and a current shooting duration, and the remaining number of shots is determined according to the third time period and a current shooting interval.
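Purely as an illustrative reading of the last clause of claim 5 (the claim only states that the quantities are "determined according to" the listed values, without fixing a formula), the arithmetic could look as follows; identifiers such as `thirdPeriodMs` and `shootingIntervalMs` are hypothetical.

```kotlin
// Hypothetical arithmetic: one plausible way to derive the remaining shooting
// duration and the remaining number of shots in the time-lapse photography mode.
fun remainingShootingDurationMs(thirdPeriodMs: Long, currentDurationMs: Long): Long =
    (thirdPeriodMs - currentDurationMs).coerceAtLeast(0L)

fun remainingShotCount(thirdPeriodMs: Long, shootingIntervalMs: Long): Long =
    // The claim leaves the exact formula open; subtracting frames already taken
    // would be an equally valid refinement.
    thirdPeriodMs / shootingIntervalMs

fun main() {
    // A 60 s third time period, 24 s already shot, one frame every 2 s.
    println(remainingShootingDurationMs(60_000, 24_000)) // 36000
    println(remainingShotCount(60_000, 2_000))           // 30
}
```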
6. A shooting processing apparatus, characterized by comprising:
a first receiving module, configured to receive a first input for starting a time-lapse photography mode;
a first acquisition module, configured to acquire, in response to the first input, image data of N preview images corresponding to a current shooting interface, wherein N is a positive integer greater than 1; and
a first display module, configured to display a first preview multimedia material, wherein the first preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images in the time-lapse photography mode.
7. The shooting processing apparatus according to claim 6, characterized by further comprising:
a second receiving module, configured to receive a second input indicating a start of shooting;
a second acquisition module, configured to acquire, in response to the second input, image data of M images shot based on the second input, wherein M is a positive integer greater than 1;
a second display module, configured to display a second preview multimedia material if a third input for the first preview multimedia material is received before the second input is received;
a third display module, configured to display a third preview multimedia material if the third input is not received before the second input is received;
a fourth display module, configured to display the second preview multimedia material if a fourth input for the first preview multimedia material is received after the second input is received;
wherein the second preview multimedia material is obtained by synthesizing at least part of the image data of the M images in the time-lapse photography mode, and the third preview multimedia material is obtained by synthesizing at least part of the image data of the N preview images and at least part of the image data of the M images in the time-lapse photography mode.
8. The apparatus of claim 7, further comprising:
a first storage module, configured to store the first preview multimedia material when a fifth input for the first preview multimedia material is received;
a second storage module, configured to store the second preview multimedia material when a sixth input for the second preview multimedia material is received;
and a third storage module, configured to store the third preview multimedia material when a seventh input by the user for the third preview multimedia material is received.
9. The apparatus of claim 7, further comprising:
a third receiving module, configured to receive an eighth input indicating termination of shooting;
a first output module, configured to output a first multimedia material corresponding to the first preview multimedia material when a ninth input for the first preview multimedia material is received;
a second output module, configured to output a second multimedia material corresponding to the second preview multimedia material when a tenth input for the second preview multimedia material is received; and
a third output module, configured to output a third multimedia material corresponding to the third preview multimedia material when an eleventh input for the third preview multimedia material is received.
10. The apparatus of any one of claims 6-9, further comprising:
a fourth display module, configured to display a fourth preview multimedia material, wherein the fourth preview multimedia material is generated by predicting, through Artificial Intelligence (AI), at least part of the image data of the N preview images, and is used for describing the multimedia material that would be obtained by synthesizing the current shooting interface in the time-lapse photography mode after a third time period;
a shooting display module, configured to start shooting and display a remaining shooting duration and a remaining number of shots in the time-lapse photography mode when a twelfth input of the user for the fourth preview multimedia material is received; and
a fourth output module, configured to output a fourth multimedia material corresponding to the fourth preview multimedia material when the remaining shooting duration is 0;
wherein the remaining shooting duration is determined according to the third time period and a current shooting duration, and the remaining number of shots is determined according to the third time period and a current shooting interval.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the shooting processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the shooting processing method according to any one of claims 1 to 5.
CN202111067133.7A 2021-09-13 2021-09-13 Shooting processing method and electronic equipment Active CN113747072B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111067133.7A CN113747072B (en) 2021-09-13 2021-09-13 Shooting processing method and electronic equipment
PCT/CN2022/117547 WO2023036179A1 (en) 2021-09-13 2022-09-07 Photographing processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111067133.7A CN113747072B (en) 2021-09-13 2021-09-13 Shooting processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113747072A (en) 2021-12-03
CN113747072B CN113747072B (en) 2023-12-12

Family

ID=78738311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111067133.7A Active CN113747072B (en) 2021-09-13 2021-09-13 Shooting processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN113747072B (en)
WO (1) WO2023036179A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023036179A1 (en) * 2021-09-13 2023-03-16 维沃移动通信有限公司 Photographing processing method and electronic device
WO2023151609A1 (en) * 2022-02-10 2023-08-17 维沃移动通信有限公司 Time-lapse photography video recording method and apparatus, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713738A (en) * 2015-11-12 2017-05-24 Lg电子株式会社 Mobile terminal and method for controlling the same
CN109688331A (en) * 2019-01-10 2019-04-26 深圳市阿力为科技有限公司 A kind of time-lapse photography method and device
CN110363293A (en) * 2018-03-26 2019-10-22 腾讯科技(深圳)有限公司 The training of neural network model, time-lapse photography video generation method and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103384310A (en) * 2013-07-09 2013-11-06 华晶科技股份有限公司 Image acquisition device and image acquisition method
JP6957131B2 (en) * 2016-03-01 2021-11-02 オリンパス株式会社 Information terminal device, image pickup device, image information processing system and image information processing method
CN112351207A (en) * 2020-10-30 2021-02-09 维沃移动通信有限公司 Shooting control method and electronic equipment
CN113747072B (en) * 2021-09-13 2023-12-12 维沃移动通信有限公司 Shooting processing method and electronic equipment

Also Published As

Publication number Publication date
WO2023036179A1 (en) 2023-03-16
CN113747072B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
WO2023036179A1 (en) Photographing processing method and electronic device
CN112135046A (en) Video shooting method, video shooting device and electronic equipment
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
CN111756995A (en) Image processing method and device
CN112954199B (en) Video recording method and device
CN113794834B (en) Image processing method and device and electronic equipment
CN113794829B (en) Shooting method and device and electronic equipment
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN112492214A (en) Image shooting method and device, electronic equipment and readable storage medium
CN111953900B (en) Picture shooting method and device and electronic equipment
CN114025092A (en) Shooting control display method and device, electronic equipment and medium
CN114025093A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113709368A (en) Image display method, device and equipment
CN111586305A (en) Anti-shake method, anti-shake device and electronic equipment
CN113596331B (en) Shooting method, shooting device, shooting equipment and storage medium
CN112653841B (en) Shooting method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN113891018A (en) Shooting method and device and electronic equipment
CN114500844A (en) Shooting method and device and electronic equipment
CN112637491A (en) Photographing method and photographing apparatus
CN114245018A (en) Image shooting method and device
CN113037996A (en) Image processing method and device and electronic equipment
CN112291474A (en) Image acquisition method and device and electronic equipment
CN112312024A (en) Photographing processing method and device and storage medium
CN112399092A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant