CN111917979B - Multimedia file output method and device, electronic equipment and readable storage medium

Multimedia file output method and device, electronic equipment and readable storage medium

Info

Publication number
CN111917979B
Authority
CN
China
Prior art keywords
target
input
image
user
multimedia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010733380.5A
Other languages
Chinese (zh)
Other versions
CN111917979A (en)
Inventor
崔晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010733380.5A priority Critical patent/CN111917979B/en
Publication of CN111917979A publication Critical patent/CN111917979A/en
Application granted granted Critical
Publication of CN111917979B publication Critical patent/CN111917979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a multimedia file output method and device, electronic equipment and a readable storage medium, and belongs to the technical field of computers. The multimedia file output method comprises the following steps: receiving a first input to a target preset control under the condition of displaying the multimedia image; wherein the multimedia image comprises a target subject; in response to the first input, determining a target background based on a target background theme associated with the target preset control; receiving a second input of the user; outputting the target multimedia file in response to the second input; the target multimedia file includes a target subject and a target background. By using the multimedia file output method, the multimedia file output device, the electronic equipment and the readable storage medium, the multimedia file meeting the background requirements of the user can be output.

Description

Multimedia file output method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a multimedia file output method and device, electronic equipment and a readable storage medium.
Background
At present, when multimedia files such as images or videos are output, only multimedia images with shooting effects adjusted by methods such as filtering, background blurring and beautifying can be output, but the multimedia images with background images changed cannot be directly output.
If a user wants to obtain a multimedia image under a target background, the user can only go to a corresponding scene to shoot or perform post-processing, so that the operation is complex when the multimedia image with the changed background image is obtained, and the user experience is low.
Disclosure of Invention
An embodiment of the present application provides a multimedia file output method, an apparatus, an electronic device, and a readable storage medium, which are capable of outputting a multimedia file satisfying a user background requirement.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a multimedia file output method, where the method may include:
receiving a first input of a target preset control under the condition of displaying the multimedia image; wherein the multimedia image comprises a target subject;
in response to the first input, determining a target background based on a target background theme associated with the target preset control;
receiving a second input of the user;
outputting the target multimedia file in response to the second input; the target multimedia file includes a target subject and a target background.
In a second aspect, an embodiment of the present application provides a multimedia file output apparatus, including:
the first receiving module is used for receiving first input of a target preset control under the condition of displaying the multimedia image; wherein the multimedia image comprises a target subject;
a first determination module, configured to determine, in response to a first input, a target background based on a target background theme associated with the target preset control;
the second receiving module is used for receiving a second input of the user;
an output module for outputting the target multimedia file in response to the second input; the target multimedia file includes a target subject and a target background.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, while a multimedia image including a target subject is displayed, a first input to a target preset control is received, and the target background can then be determined based on the target background theme associated with that control. When a second input of the user is received and a multimedia file is output, the target multimedia file including the target subject and the target background can be output. A multimedia file meeting the user's background requirements can therefore be output without resorting to complex processing such as shooting in the corresponding scene or post-processing, which simplifies the operation of adjusting the background in a multimedia file.
Drawings
The present application may be better understood from the following description of specific embodiments of the application taken in conjunction with the accompanying drawings, in which like or similar reference numerals identify like or similar features.
Fig. 1 is a schematic flowchart of a multimedia file output method according to an embodiment of the present application;
Fig. 2 is a schematic interface diagram provided in a first embodiment of the present application;
Fig. 3 is a schematic interface diagram provided in a second embodiment of the present application;
Fig. 4 is a schematic interface diagram provided in a third embodiment of the present application;
Fig. 5 is a schematic interface diagram provided in a fourth embodiment of the present application;
Fig. 6 is a schematic interface diagram provided in a fifth embodiment of the present application;
Fig. 7 is a schematic flowchart of a multimedia file output method according to another embodiment of the present application;
Fig. 8 is a schematic interface diagram provided in a seventh embodiment of the present application;
Fig. 9 is a schematic interface diagram provided in an eighth embodiment of the present application;
Fig. 10 is a schematic flowchart of a multimedia file output method according to another embodiment of the present application;
Fig. 11 is a schematic interface diagram provided in a ninth embodiment of the present application;
Fig. 12 is a schematic interface diagram provided in a tenth embodiment of the present application;
Fig. 13 is a schematic structural diagram of a multimedia file output device according to an embodiment of the present application;
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing various embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be implemented in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The method, the apparatus, the electronic device, and the readable storage medium for outputting the multimedia file provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings and application scenarios thereof.
In the related art, a user can beautify a photographed subject only through image optimization such as the filters of shooting software, a virtual background, or added stickers, but cannot directly change the background image of the multimedia image on the shooting preview interface.
Similarly, when a user records a video, only video special effects can be added through the video recording software; the background image cannot be changed directly on the video recording interface.
Therefore, whether a picture is taken or a video is recorded, if the background image needs to be replaced, it can only be replaced through post-processing, for example with Photoshop. However, such software is complex to operate, so the user's operation is cumbersome and the user experience is poor.
Therefore, in order to solve the above problems, embodiments of the present application provide a multimedia file output method, an apparatus, an electronic device, and a readable storage medium, which allow a user to adjust the background image in real time when outputting a target multimedia file such as an image or a video.
Fig. 1 is a flowchart illustrating a multimedia file output method according to an embodiment of the present application.
As shown in fig. 1, the multimedia file output method includes S110 to S140, which are explained in detail below:
S110, receiving a first input to a target preset control under the condition that the multimedia image is displayed.
Wherein the multimedia image comprises a target subject.
In some embodiments of the present application, the multimedia image may be a preview image displayed in a shooting preview interface, a preview video frame displayed in a video recording interface, or a photograph obtained after shooting.
For example, when a girl is photographed, the shooting preview image displayed on the screen of the electronic device may be a multimedia image; similarly, when a video of the girl is recorded, the preview video frame in the video recording interface displayed on the screen during recording may also be a multimedia image.
In some embodiments, the first input may be a click input of the user or a long-press input, which is not limited herein. The target preset control is displayed on a display screen of the electronic equipment.
Taking an electronic device with a waterfall screen as an example, as shown in fig. 2, fig. 2 is a schematic interface diagram provided in the first embodiment of the present application. The electronic device in fig. 2 comprises a main display 21 and a secondary display 22, and a target preset control 23 is displayed on the secondary display 22. The target preset control may be in the form of an arrow as shown in fig. 2, or may be an identifier of another shape, where no limitation is made on the shape of the target preset control.
S120, responding to the first input, and determining a target background based on a target background theme associated with the target preset control.
In some embodiments of the present application, the preset control may be associated with a background theme, for example, a background theme such as weather, a place, and the like, so that the target background may be determined in response to a first input of the target preset control by a user.
S130, receiving a second input of the user.
In some embodiments, the second input may be a click input of the user or a long-press input, which is not limited herein.
S140, responding to the second input, and outputting the target multimedia file.
Wherein the target multimedia file comprises a target subject and a target background.
In some embodiments of the present application, the user may perform a second input on the shooting key, and the electronic device may output the target multimedia file including the target subject and the target background after receiving the second input on the shooting key by the user.
In the embodiment of the application, while a multimedia image including a target subject is displayed, a first input to a target preset control is received, and the target background can then be determined based on the target background theme associated with that control. When a second input of the user is received and a multimedia file is output, the target multimedia file including the target subject and the target background can be output. A multimedia file meeting the user's background requirements can therefore be output without resorting to complex processing such as shooting in the corresponding scene or post-processing, which simplifies the operation of adjusting the background in a multimedia file.
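To make the control flow of S110 to S140 concrete, here is a minimal Kotlin sketch; the types MultimediaImage, Subject, Background and BackgroundTheme, and the callback that resolves a theme to a background, are all hypothetical stand-ins for the device-side implementation rather than the actual method.

```kotlin
// Hypothetical types used only to illustrate the S110 to S140 flow.
data class Subject(val name: String)
data class MultimediaImage(val targetSubject: Subject)
data class Background(val description: String)
enum class BackgroundTheme { WEATHER, LOCATION }

class MultimediaOutputFlow(
    private val controlThemes: Map<String, BackgroundTheme>,       // preset control id -> associated theme
    private val resolveBackground: (BackgroundTheme) -> Background // theme -> concrete target background
) {
    private var targetBackground: Background? = null

    // S110/S120: the first input on a target preset control selects the target background.
    fun onFirstInput(controlId: String) {
        val theme = controlThemes[controlId] ?: return
        targetBackground = resolveBackground(theme)
    }

    // S130/S140: the second input (e.g. the shutter key) outputs target subject + target background.
    fun onSecondInput(image: MultimediaImage): Pair<Subject, Background>? =
        targetBackground?.let { image.targetSubject to it }
}
```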
In some embodiments, before S110, the multimedia file output method may further include a step of determining the target subject in the multimedia image, specifically as follows:
receiving a third input of the first object in the multimedia image by the user;
in response to a third input, calculating first depth information for the first object;
acquiring a second object meeting a first preset condition in the multimedia image;
wherein the first preset condition is associated with the first depth information;
the second is determined to be the target subject.
In this embodiment of the application, as shown in fig. 3, which is a schematic interface diagram provided in the second embodiment of the present application, the multimedia image includes a girl 31 and a ball 32. After the user performs the third input on the first object, namely the head of the girl 31, and the electronic device receives the third input, the depth-of-field information of the head of the girl 31 is calculated automatically and obtained as the first depth information.
Next, after determining the first depth information, the electronic device further obtains a second object satisfying a first preset condition in the multimedia image. The first preset condition may be that the depth of field is the same as the first depth information, or that the difference from the first depth information is smaller than a preset threshold. The preset threshold may be set as required and is not described here again.
For example, with continued reference to fig. 3, after the user makes a third input on the head of the girl 31, the electronic device obtains a second object including the girl 31 and satisfying the first preset condition through the depth of field calculation.
In other embodiments, after determining the second object, the second object may also be identified by a boundary identifier.
In this embodiment of the application, by calculating the depth-of-field information of the first object selected by the user, the electronic device identifies people, animals, or objects based on the user's selection input, and obtains the shooting subject whose background image the user desires to change, together with the corresponding background.
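As an illustration of this depth-based selection, the following is a minimal Kotlin sketch, assuming a hypothetical per-pixel depth map and a hypothetical threshold value; it only models the comparison described above (keep pixels whose depth differs from the first depth information by less than a preset threshold), not the device's actual implementation.

```kotlin
import kotlin.math.abs

// Sketch: depthMap is a hypothetical per-pixel depth array (e.g. in metres);
// (touchX, touchY) is the point of the user's third input on the first object.
fun selectSecondObjectMask(
    depthMap: Array<FloatArray>,
    touchX: Int,
    touchY: Int,
    threshold: Float = 0.2f              // assumed preset threshold
): Array<BooleanArray> {
    val firstDepth = depthMap[touchY][touchX]             // first depth information
    return Array(depthMap.size) { y ->
        BooleanArray(depthMap[y].size) { x ->
            abs(depthMap[y][x] - firstDepth) < threshold  // first preset condition
        }
    }
}
```

The returned mask marks the pixels that would make up the second object; in the example of fig. 3 it would cover the girl 31.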
In other embodiments of the present application, the user may also define a custom shape through a tool such as a brush, and its position and size may likewise be marked by a boundary identifier.
In further embodiments of the present application, before S110, the step of determining the target subject in the multimedia image may be as follows:
acquiring a second object meeting a second preset condition in the multimedia image based on the preset depth of field range;
wherein the second preset condition is associated with a preset depth of field range;
the second object is determined as the target subject.
In this embodiment of the application, the camera component of the electronic device may be a depth camera based on a Time-of-Flight (TOF) scheme, so the electronic device can directly acquire the depth information of each object in the image. Therefore, when identifying the target subject in the multimedia image, the second object satisfying the second preset condition only needs to be obtained within the preset depth-of-field range. The preset depth-of-field range may be set according to the different scenes in the image, and no limitation is placed on its specific values.
Referring to fig. 3, when the user records a video of the girl 31 playing with the ball, the electronic device, whose camera assembly is based on the TOF scheme, can directly acquire the depth-of-field information of the girl 31 and of the ball 32, and may therefore determine the girl 31 in the multimedia image as the second object when her depth-of-field information satisfies the second preset condition. The second preset condition may be that the depth information falls within the preset depth-of-field range.
In some embodiments of the present application, the second object may also be identified by a boundary identifier.
In this embodiment of the application, the depth information of each object in the image can be acquired automatically through the TOF structure of the camera, so the electronic device can automatically identify people, animals, or objects, which improves the user experience.
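A minimal Kotlin sketch of this TOF-based selection is given below; the object list, labels, and depth range are illustrative assumptions rather than actual sensor output.

```kotlin
// Sketch: with a TOF camera, each detected object's depth is available directly,
// so second objects can be picked by a preset depth-of-field range (second preset condition).
data class DetectedObject(val label: String, val depth: Float)

fun pickSecondObjects(
    objects: List<DetectedObject>,
    presetRange: ClosedFloatingPointRange<Float>
): List<DetectedObject> = objects.filter { it.depth in presetRange }

fun main() {
    val objects = listOf(DetectedObject("girl", 1.8f), DetectedObject("ball", 4.5f))
    // With an assumed range of 1.0 to 2.5 m, only the girl is returned as a second object.
    println(pickSecondObjects(objects, 1.0f..2.5f))
}
```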
In other embodiments of the present application, if the image contains multiple objects whose depth information satisfies the second preset condition, a plurality of second objects may be obtained after each object in the image is identified through the TOF structure of the camera. With continued reference to fig. 3, when video recording is performed with a camera based on the TOF scheme, if the depth information of both the girl 31 and the ball 32 satisfies the second preset condition, both the girl 31 and the ball 32 are obtained as second objects. In this case, when determining the target subject, the user is further required to perform a click or press input on the girl 31 or the ball 32, and the electronic device then determines the selected second object (the girl 31 or the ball 32) as the target subject.
In some embodiments of the present application, a multimedia file output method is applied to an electronic device having a first display area and a second display area. The first display area may be located on a front surface of the electronic device, and the second display area may be located on a side surface of the electronic device, for example, an electronic device having a waterfall screen, and the like.
When the target multimedia file is output by using the electronic device with the second display area, before the second object is determined as the target subject, the method further comprises the following steps:
receiving a fourth input that the user drags the second object from the first display area to the second display area;
next, when the target subject is to be determined, the electronic device may determine the second object as the target subject in response to a fourth input.
In an embodiment of the present application, as shown in fig. 4, which is a schematic interface diagram provided in the third embodiment of the present application, the second object 41 is displayed in the electronic device, and to determine the target subject the user drags the second object 41 into the second display area 42. Upon receiving the fourth input of the user dragging the second object 41 to the second display area 42, the electronic device may determine the second object as the target subject. The determined target subject can be marked by the boundary identifier 43, which may take the form of a dashed line or another form; the representation of the boundary identifier is not limited here.
In some embodiments, after identifying each second object in the image through the TOF structure of the camera to obtain a plurality of second objects, the user may further merge the plurality of second objects, and the merging of the second objects may be specifically implemented through the following steps:
receiving a first target input of a user to at least two second objects;
in response to the first target input, merging the second objects associated with the first target input into one second object.
The first target input may be a click input or a press input, a drag input, or the like.
As an example, as shown in fig. 5, fig. 5 is a schematic interface diagram provided in the fourth embodiment of the present application. The user can double-click at least two second objects 51 simultaneously with two or more fingers and drag them to the same position; alternatively, the user can drag one second object to the location of another second object. The electronic device then fuses the second objects dragged to the same position into one second object. A separator exists between the two objects in the fused second object, and the user can release the fused second object, restoring the objects before fusion, by long-pressing the fused second object and dragging it to a preset position of the screen.
In this embodiment of the application, the user can merge second objects independently according to his or her own needs, so as to obtain a second object that meets those needs, which improves the user experience.
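The merge-and-release behaviour can be sketched in a few lines of Kotlin; the SecondObject type and its list of named parts are assumptions used only to show that a fusion keeps its constituents so it can later be released.

```kotlin
// Hypothetical representation of a second object as a list of its constituent parts.
data class SecondObject(val parts: List<String>)

// First target input: fuse second objects dragged to the same position into one second object.
fun merge(vararg objects: SecondObject): SecondObject =
    SecondObject(objects.flatMap { it.parts })

// Long-press and drag to a preset position: release the fusion and restore the originals.
fun release(fused: SecondObject): List<SecondObject> =
    fused.parts.map { SecondObject(listOf(it)) }
```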
In other embodiments of the present application, before the fourth input of dragging the second object to the second display area of the screen is received, the user may not only merge second objects but also split a second object. For example, the splitting of the second object may be achieved by the following steps:
receiving a sixth input of the second object from the user;
in response to a sixth input, dividing the second object into a plurality of sub-objects according to the sliding track of the sixth input;
in this way, when the target subject is determined, a fourth input of the user dragging the target sub-object of the plurality of sub-objects from the first display area to the second display area may be received, and the target sub-object may be determined as the target subject in response to the fourth input.
As an example, as shown in fig. 6, fig. 6 is a schematic interface diagram provided in the fifth embodiment of the present application. The sixth input may be a sliding input. The user may activate the boundary identifier of the second object by clicking or double-clicking the second object and then perform the sixth input, so that the electronic device divides the second object into a plurality of sub-objects according to the sliding trajectory of the sixth input.
In this embodiment of the application, the user can also split the second object according to his or her own needs; the electronic device recognizes the user's sliding input and automatically snaps the split to the acquired depth-information boundary, so the segmentation is more accurate, a second object meeting the user's needs can be obtained, and the user experience is improved.
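Below is a minimal Kotlin sketch of splitting an object mask along the sliding track, assuming the track is approximated by a straight line through two of its points; the snap-to-depth-boundary step described above is omitted.

```kotlin
// Sketch: split a second object's mask into two sub-objects by the sliding track of the
// sixth input, approximated here by the line through (x1, y1) and (x2, y2).
fun splitMaskByLine(
    mask: Array<BooleanArray>,
    x1: Int, y1: Int, x2: Int, y2: Int
): Pair<Array<BooleanArray>, Array<BooleanArray>> {
    // The sign of the cross product tells which side of the track a pixel lies on.
    fun side(x: Int, y: Int) = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0
    val first = Array(mask.size) { y -> BooleanArray(mask[y].size) { x -> mask[y][x] && side(x, y) } }
    val second = Array(mask.size) { y -> BooleanArray(mask[y].size) { x -> mask[y][x] && !side(x, y) } }
    return first to second
}
```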
In some embodiments of the present application, before S110, an association relationship between a preset control and a background theme may also be established, specifically including the following steps:
receiving an eleventh input of a user to a preset control;
in response to an eleventh input, an association between the preset control and the background theme is determined.
The eleventh input may be a click input or a long-press input, and is not limited herein.
As an example, the user may establish the association between a preset control and a background theme by long-pressing the preset control. For example, one preset control can be associated with weather, another with a location, and so forth.
In addition, the user can also customize the association between each preset control and a background theme.
In this embodiment of the application, the user can customize the association between a preset control and a background theme, so the target background of the multimedia file can later be obtained directly through an input to the preset control, which reduces the complexity of the user's operation and improves the user experience.
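A minimal Kotlin sketch of such an association table is shown below; the string identifiers for controls and themes are assumptions made purely for illustration.

```kotlin
// Sketch: the eleventh input (e.g. a long press on a preset control) stores an association
// between that control and a background theme such as "weather" or "location".
class ControlThemeRegistry {
    private val associations = mutableMapOf<String, String>() // control id -> background theme

    fun onEleventhInput(controlId: String, theme: String) {
        associations[controlId] = theme       // user-defined association
    }

    // Looked up later when the first input arrives on a target preset control.
    fun themeOf(controlId: String): String? = associations[controlId]
}
```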
In some embodiments of the present application, in order to better meet the user's requirements for the background image in different scenes, further embodiments of the present application provide a multimedia file output method applicable to an electronic device, as shown in fig. 7.
Fig. 7 is a schematic flowchart of a multimedia file output method according to another embodiment of the present application, where the method includes:
S710, receiving a first input of a target preset control under the condition that a multimedia image is displayed;
the step of S710 is the same as S110 shown in fig. 1, and is not repeated herein.
S720, determining a first image based on the target background theme;
S730, acquiring pose information of the electronic equipment;
in some embodiments of the present application, the pose information may include: information such as the inclination angle of the electronic device, angle information from the light source of the electronic device, and distance. The electronic equipment can acquire the pose information of the electronic equipment through gravity and a gyroscope.
S740, adjusting scene information of the first image based on the corresponding relation between the pose information and the scene information, and taking the adjusted first image as a target background;
in some embodiments of the present application, the target context theme is a context theme associated with the target preset control. For example, if the target preset control is associated with weather, the target background theme is weather. The image under the subject background may be an image of a rainy day, an image of a snowy day, an image of a cloudy day, an image of haze, or the like.
S750, receiving a second input of the user;
and S760, responding to the second input, and outputting the target multimedia file.
Wherein the target multimedia file comprises a target subject and a target background.
In this embodiment of the application, the user can adjust the scene information of the first image by adjusting the pose of the electronic device, so the scene information can be adjusted in real time to obtain a target background that meets the user's needs.
The following embodiments describe in detail how S740 can be implemented.
In some embodiments, when the user moves the electronic device horizontally, light-and-shadow scene information of the target under different lighting angles may be displayed according to the angle information;
In some embodiments, when the user moves the electronic device upward, whether the current shooting scene is indoor or outdoor may be determined through luminance information or artificial-intelligence scene recognition, so as to display or switch different sky or building-top scene information such as day, night, or rainy and snowy weather;
In some embodiments, when the user moves the electronic device downward, ground information may be displayed to present the corresponding scene information;
In some embodiments, the user may also link a custom application or keyword to the horizontal rotation switching. For example, after the keyword "weather" is entered or a weather application is linked, rotating the terminal switches the background among different weather conditions, and linking a location can switch the background to that location's current weather; by continuing to rotate the electronic device, the user can change the weather at different points in time; by rotating the electronic device quickly or slowly, the user can change the weather across different days; or the first image can be switched to a background picture associated with the target background theme obtained from the network.
In other embodiments, as shown in fig. 8, fig. 8 is a schematic interface diagram provided in the seventh embodiment of the present application. Sensing-axis marks 81 and 82 may further be displayed on the multimedia image to prompt the user to adjust the scene information of the first image by rotating the electronic device.
For example, flipping up and down may represent a change over a longer period such as a quarter of the year, while flipping left and right may represent a change in a smaller time unit, so that longer and shorter time changes are controlled by the up-down and left-right flips respectively.
For another example, flipping up and down may correspond to the location and flipping left and right to the climate: the user can flip up and down to switch between geographical positions of different longitudes and latitudes, while flipping from side to side represents a climate change (for example, switching to a different latitude shows the polar regions or the climate difference between land and ocean).
In addition, when the flip is linked to a map, a route of position changes can be quickly defined by flipping the device up and down.
In these embodiments of the application, the user can adjust the scene information of the first image by adjusting the pose of the electronic device, so the scene information can be adjusted in real time to obtain a target background that meets the user's needs.
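As a rough Kotlin sketch of the pose-to-scene mapping, the directions and scene values below are illustrative assumptions and cover only a small part of the behaviour described above.

```kotlin
// Sketch: map a device pose change (tilt direction) to adjusted scene information
// of the first image; the concrete scene strings are placeholders.
enum class TiltDirection { UP, DOWN, LEFT, RIGHT }

data class SceneInfo(val upperScene: String, val lighting: String)

fun adjustScene(current: SceneInfo, tilt: TiltDirection): SceneInfo = when (tilt) {
    TiltDirection.UP    -> current.copy(upperScene = "night sky")   // switch sky / building-top scene
    TiltDirection.DOWN  -> current.copy(upperScene = "ground")      // show ground information
    TiltDirection.LEFT  -> current.copy(lighting = "morning light") // different lighting angle
    TiltDirection.RIGHT -> current.copy(lighting = "evening light")
}
```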
In order to increase user operability, in some embodiments of the present application, the user may also adjust the second object through the adjustment control before determining the second object as the target subject. The method specifically comprises the following steps:
receiving a fifth input of the second object by the user;
in response to the fifth input, displaying an adjustment control associated with the second object; the adjustment control includes a first control, and the first control is used to adjust the size of the second object and/or the position of the second object on the multimedia image.
In some embodiments of the present application, the fifth input may be a long-press input or a click input, which is not limited herein.
As shown in fig. 9, fig. 9 is a schematic interface diagram provided in the eighth embodiment of the present application. After the user long-presses the second object, the adjustment control corresponding to the second object can be displayed. The adjustment control may include the arrow-shaped first control 91 shown in fig. 9. The user can press and drag the first control 91 to adjust a local position; the size of the whole object can be adjusted by double-clicking and then holding and dragging the first control 91; and by holding the boundary identifier at two locations simultaneously with two fingers and dragging, the position of the second object can be adjusted, and so on.
Further, in other embodiments, the adjustment controls may also include a second control 92 and a third control 93 as shown in fig. 9. The second control 92 is used to cancel the user's previous operation, or to reset the second object to its unadjusted initial state. The third control 93 is used to cancel the current selection so that the target subject can be re-determined based on a new third input by the user.
In this embodiment of the application, by displaying the adjustment control, the user can adjust the second object through inputs to the adjustment control and obtain a second object that better meets the user's requirements.
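The first and second controls could be modelled as in the following Kotlin sketch; the Placement type and its fields are assumptions, and only the move, resize, and reset behaviour is shown.

```kotlin
// Hypothetical placement of the second object on the multimedia image.
data class Placement(val x: Float, val y: Float, val scale: Float)

class SecondObjectAdjuster(private val initial: Placement) {
    var placement: Placement = initial
        private set

    fun dragBy(dx: Float, dy: Float) {            // first control: adjust position
        placement = placement.copy(x = placement.x + dx, y = placement.y + dy)
    }

    fun scaleBy(factor: Float) {                  // first control: adjust size
        placement = placement.copy(scale = placement.scale * factor)
    }

    fun reset() {                                 // second control: back to the initial state
        placement = initial
    }
}
```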
In order to enable the user to achieve a video editing effect during video recording, other embodiments of the present application further provide a method for outputting the target multimedia file in which a time adjustment axis is displayed on the video recording interface, as shown in fig. 10.
Fig. 10 is a flowchart illustrating a multimedia file output method according to another embodiment of the present application, where the multimedia file output method includes:
S1010, receiving a first input of a target preset control under the condition that a multimedia image is displayed;
S1010 is the same as S110 in fig. 1 and is not repeated here.
S1020, determining a first image based on the target background theme;
S1030, acquiring pose information of the electronic equipment;
s1040, adjusting the scene information of the first image based on the corresponding relation between the pose information and the scene information;
S1050, receiving a seventh input of the user to the time adjustment axis;
S1060, responding to the seventh input, adjusting an effective time interval of the target scene information associated with the time adjustment axis in the video file;
S1070, determining the adjusted first image as a target background;
S1080, receiving a second input of the user;
S1090, responding to the second input, and outputting the target multimedia file.
Wherein the target multimedia file comprises a target subject and a target background.
In some embodiments, as shown in fig. 11, fig. 11 is a schematic interface diagram provided in the ninth embodiment of the present application. Time adjustment axes 111 and 112 are displayed in the video recording interface, where the time adjustment axis 111 may be associated with the target scene information "weather" and the time adjustment axis 112 with the target scene information "location". By dragging the start and stop points of the different time adjustment axes 111 and 112 and adjusting the overlap between them, the user sets the effective start time and effective interval of the different target scene information in the video file.
In this embodiment of the application, the time adjustment axes displayed in the video recording interface allow the user to independently adjust the effective time of each piece of scene information in the whole video file during recording, so a video editing effect can be achieved while the video is being recorded, which improves the user experience.
In other embodiments of the present application, as shown in fig. 11, a speed adjustment axis 113 may also be displayed in the video recording interface. The user can adjust the playback speed of the whole video file through the speed adjustment axis; the playback speed corresponds to the rate at which frames pass through the camera, usually measured in frames per second. By adjusting the speed adjustment axis, a recording with a fast-cut or slow-motion effect can be obtained, which improves the user experience.
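A minimal Kotlin sketch of the timeline state behind these axes is given below; the interval representation in seconds and the single playback-speed factor are simplifying assumptions, not the actual recording interface.

```kotlin
// Sketch: each time adjustment axis holds the effective interval of one piece of target
// scene information within the video file; the speed axis holds a playback-speed factor.
data class SceneInterval(val scene: String, val startSec: Float, val endSec: Float)

class RecordingTimeline {
    private val intervals = mutableListOf<SceneInterval>()
    var playbackSpeed: Float = 1.0f   // > 1 gives a fast-cut effect, < 1 a slow-motion effect

    // Seventh input on a time adjustment axis: set the effective interval of one scene.
    fun onSeventhInput(scene: String, startSec: Float, endSec: Float) {
        intervals.removeAll { it.scene == scene }
        intervals.add(SceneInterval(scene, startSec, endSec))
    }

    fun activeScenesAt(tSec: Float): List<String> =
        intervals.filter { tSec in it.startSec..it.endSec }.map { it.scene }
}
```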
In other embodiments of the present application, in order to make it more convenient for a user to view an output target multimedia file, the multimedia file output method provided in an embodiment of the present application may further include the following steps:
receiving an eighth input of the user;
in response to the eighth input, determining a first background theme of the target multimedia file and displaying an image associated with the first background theme.
The eighth input may be a click input or a long-press input, which is not limited herein.
As an example, as shown in fig. 12, fig. 12 is a schematic interface diagram provided in the tenth embodiment of the present application. The user may select the first background theme through an eighth input to the control 121. For example, if weather is selected as the first background theme, the images associated with weather are displayed on the screen of the electronic device, grouped by weather.
Next, the user may also switch between the images under the weather theme by sliding on the side 122 of the electronic device.
In some embodiments, after determining the first background theme, the electronic device may automatically play all images under the first background theme to achieve the effect of dynamic play.
In this embodiment of the application, the user can view images under the same background theme, which makes viewing images more convenient and improves the user experience.
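A minimal Kotlin sketch of grouping output images by their background theme follows; the OutputImage type and its fields are assumptions used only for illustration.

```kotlin
// Sketch: output images carry the background theme used when they were produced,
// so they can be grouped and viewed per theme (e.g. all images under "weather").
data class OutputImage(val path: String, val backgroundTheme: String)

fun groupByTheme(images: List<OutputImage>): Map<String, List<OutputImage>> =
    images.groupBy { it.backgroundTheme }

fun imagesUnderTheme(images: List<OutputImage>, theme: String): List<OutputImage> =
    groupByTheme(images)[theme].orEmpty()
```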
In other embodiments of the present application, the user may also modify an image that has already been output. When the output image is modified, re-outputting the image can be achieved through the multimedia file output method introduced in the above embodiments, which is not described here again.
It should be noted that, in the multimedia file output method provided in the embodiments of the present application, the execution subject may be a multimedia file output device, or a control module in the multimedia file output device that executes the multimedia file output method. In the embodiments of the present application, a multimedia file output device executing the multimedia file output method is taken as an example to describe the device provided in the embodiments of the present application.
Fig. 13 is a schematic structural diagram of a multimedia file output device according to an embodiment of the present application. As shown in fig. 13, the multimedia file output apparatus includes:
a first receiving module 1310, configured to receive a first input to a target preset control in a case that a multimedia image is displayed; wherein the multimedia image comprises a target subject;
a first determining module 1320, configured to determine, in response to the first input, a target background based on a target background theme associated with the target preset control;
a second receiving module 1330, configured to receive a second input from the user;
an output module 1340 for outputting the target multimedia file in response to the second input; the target multimedia file includes a target subject and a target background.
Through the multimedia file output device provided in this embodiment of the application, while a multimedia image including a target subject is displayed, a first input to a target preset control is received, and the target background can then be determined based on the target background theme associated with that control. When a second input of the user is received and a multimedia file is output, the target multimedia file including the target subject and the target background can be output. A multimedia file meeting the user's background requirements can therefore be output without resorting to complex processing such as shooting in the corresponding scene or post-processing, which simplifies the operation of adjusting the background in a multimedia file.
In some embodiments of the present application, the multimedia file output apparatus may further include:
the third receiving module is used for receiving third input of a user to a first object in the multimedia image before receiving the first input of the target preset control under the condition of displaying the multimedia image;
the first acquisition module is used for responding to a third input and acquiring first depth information of the first object;
the second acquisition module is used for acquiring a second object meeting the first preset condition in the multimedia image; the first preset condition is associated with the first depth of field information;
and the second determination module is used for determining the second object as the target subject.
In some embodiments of the present application, the multimedia file output apparatus may further include:
the third acquisition module is used for acquiring a second object meeting a second preset condition in the multimedia image based on a preset depth of field range before receiving a first input to a target preset control under the condition of displaying the multimedia image; the second preset condition is associated with a preset depth range;
and the third determination module is used for determining the second object as the target subject.
In some embodiments of the present application, the multimedia file output apparatus may also be applied to an electronic device, a screen of which includes a first display region and a second display region; the second object is displayed in the first display area;
before determining the second object as the target subject, the multimedia file output apparatus may further include:
the fourth receiving module is used for receiving fourth input of dragging the second object from the first display area to the second display area by the user;
the second determination module may be specifically configured to determine, in response to the fourth input, the second object as the target subject.
In some embodiments of the present application, the multimedia file output apparatus may further include:
the fifth receiving module is used for receiving a fifth input of the second object from the user before the second object is determined as the target subject;
a first display module to display, in response to a fifth input, an adjustment control associated with the second object; the adjustment control comprises: a first control;
the first control is used to resize the second object and/or position the second object on the multimedia image.
In some embodiments of the present application, the multimedia file output apparatus may further include:
the sixth receiving module is used for receiving a sixth input of the second object from the user before receiving a fourth input that the user drags the second object from the first display area to the second display area;
the segmentation module is used for responding to a sixth input and segmenting the second object into a plurality of sub-objects according to the sliding track of the sixth input;
a sixth receiving module, specifically configured to receive a fourth input that the user drags a target sub-object in the plurality of sub-objects from the first display area to the second display area;
the second determining module may be specifically configured to determine, in response to the fourth input, the target sub-object as the target subject.
In some embodiments of the present application, the multimedia file output apparatus may be applied to an electronic device;
the first determining module 1320 may specifically include:
a first determining unit, configured to determine a first image based on a target background theme;
the first acquisition unit is used for acquiring pose information of the electronic equipment;
and the first adjusting unit is used for adjusting the scene information of the first image based on the corresponding relation between the pose information and the scene information, and determining the adjusted first image as the target background.
In some embodiments of the present application, the multimedia image is a preview image displayed in a capture preview interface, or a preview video frame displayed in a video recording interface.
In some embodiments of the present application, the multimedia image is a preview video frame displayed in a video recording interface, and the target multimedia file is a video file;
a time adjustment axis is also displayed on the video recording interface;
the multimedia file output apparatus may further include:
a seventh receiving module, configured to receive a seventh input of the user on the time adjustment axis after adjusting the scene information of the first image;
and the first adjusting module is used for responding to a seventh input and adjusting the effective time interval of the target scene information associated with the time adjusting axis in the video file.
In some embodiments of the present application, the multimedia image is a preview image displayed in a shooting preview interface, and the target multimedia file is an image;
the multimedia file output apparatus may further include:
the eighth receiving module is used for receiving an eighth input of the target multimedia file by the user after the target multimedia file is output;
and the third determining module is used for responding to the eighth input, determining a first background theme of the target multimedia file and displaying an image associated with the first background theme.
Each module and each unit of the multimedia file output device provided in the embodiment of the present application have functions of implementing the multimedia file output method/step in the embodiment shown in fig. 1 to 12, and can achieve technical effects corresponding to the embodiments shown in fig. 1 to 12, and for brevity, no further description is given here.
The multimedia file output device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The multimedia file output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The multimedia file output device provided in the embodiment of the present application can implement each process implemented by the multimedia file output device in the method embodiments of fig. 1 to fig. 12, and for avoiding repetition, details are not repeated here.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 1410, a memory 1409, and a program or an instruction stored in the memory 1409 and executable on the processor 1410, where the program or the instruction, when executed by the processor 1410, implements each process of the above-mentioned embodiment of the multimedia file output method and can achieve the same technical effect, which is not described here again to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, and processor 1410.
Those skilled in the art will appreciate that the electronic device 1400 may further comprise a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 1410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation to the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
Wherein the processor 1410 is configured to control the user input unit 1407 to receive a first input to the target preset control in a case where the multimedia image is displayed; wherein the multimedia image comprises a target subject; in response to the first input, determining a target background based on a target background theme associated with the target preset control; controlling the user input unit 1407 to receive a second input by the user; outputting the target multimedia file in response to the second input; the target multimedia file includes a target subject and a target background.
In this embodiment of the application, while a multimedia image including a target subject is displayed, a first input to a target preset control is received, and the target background can then be determined based on the target background theme associated with that control. When a second input of the user is received and a multimedia file is output, the target multimedia file including the target subject and the target background can be output. A multimedia file meeting the user's background requirements can therefore be output without resorting to complex processing such as shooting in the corresponding scene or post-processing, which simplifies the operation of adjusting the background in a multimedia file.
In some embodiments of the application, the user input unit 1407 is further configured to receive a third input from the user to the first object in the multimedia image;
accordingly, the processor 1410 is configured to obtain first depth information of the first object in response to the third input; acquiring a second object meeting a first preset condition in the multimedia image; the first preset condition is associated with the first depth of field information;
the second object is determined as a target subject.
In some embodiments of the present application, the processor 1410 is configured to, before receiving the first input to the target preset control in the case of displaying the multimedia image, obtain, based on the preset depth of field range, a second object that satisfies a second preset condition in the multimedia image; the second preset condition is associated with a preset depth of field range; the second object is determined as the target subject.
In some embodiments of the present application, a multimedia file output method may be applied to an electronic device, a screen of which includes a first display region and a second display region; the second object is displayed in the first display area;
the user input unit 1407 may be further configured to receive a fourth input that the user drags the second object from the first display area to the second display area before the second object is determined to be the target subject;
accordingly, the processor 1410 may be further configured to determine the second object as the target subject in response to a fourth input.
In some embodiments of the present application, the user input unit 1407 may be further configured to receive a fifth input of the second object by the user;
the display unit 1406 may also be for displaying, in response to a fifth input, an adjustment control associated with the second object; the adjustment control comprises: a first control; the first control is used to resize the second object and/or position the second object on the multimedia image.
In some embodiments of the present application, the user input unit 1407 may be further configured to receive a sixth input of the second object by the user;
accordingly, the processor 1410 may be further configured to, in response to the sixth input, divide the second object into a plurality of sub-objects according to the sliding trajectory of the sixth input;
accordingly, the user input unit 1407 may be further configured to receive a fourth input that the user drags the target sub-object of the plurality of sub-objects from the first display area to the second display area;
accordingly, the processor 1410 may be further configured to determine the target sub-object as the target subject in response to a fourth input.
In some embodiments of the present application, the multimedia file output method may also be applied to an electronic device;
accordingly, the processor 1410 may be configured to determine a first image based on the target background theme; acquiring pose information of the electronic equipment;
and adjusting the scene information of the first image based on the corresponding relation between the pose information and the scene information, and determining the adjusted first image as a target background.
In some embodiments of the present application, the multimedia image is a preview image displayed in a capture preview interface, or a preview video frame displayed in a video recording interface.
In some embodiments of the present application, the multimedia image is a preview video frame displayed in a video recording interface, and the target multimedia file is a video file; a time adjustment axis is also displayed on the video recording interface;
in some embodiments of the present application, the user input unit 1407 is configured to receive a seventh input of the user to the time adjustment axis;
accordingly, the processor 1410 may be configured to adjust an effective time interval of the target scene information associated with the time adjustment axis in the video file in response to the seventh input.
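A sketch of applying the seventh input: the target scene information only affects frames whose timestamps fall inside the effective time interval set on the time adjustment axis. The frame and interval types are assumptions.

```kotlin
data class VideoFrame(val timestampMs: Long, val background: String)

// Replace the background only inside the effective interval chosen on the axis.
fun applySceneInInterval(
    frames: List<VideoFrame>,
    targetScene: String,
    effectiveIntervalMs: LongRange
): List<VideoFrame> =
    frames.map { frame ->
        if (frame.timestampMs in effectiveIntervalMs) frame.copy(background = targetScene) else frame
    }
```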
In some embodiments of the present application, the multimedia image is a preview image displayed in a shooting preview interface, and the target multimedia file is an image;
the user input unit 1407 is configured to receive an eighth input of the user to the target multimedia file;
the processor 1410 may be configured to determine a first background theme of the target multimedia file in response to the eighth input, and display an image associated with the first background theme.
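A sketch of the eighth-input behaviour: read the background theme recorded with the output image and collect other gallery images under the same theme for display. The theme tag and gallery structure are assumptions, not the patent's data model.

```kotlin
data class GalleryImage(val path: String, val backgroundTheme: String?)

fun imagesWithSameTheme(selected: GalleryImage, gallery: List<GalleryImage>): List<GalleryImage> =
    selected.backgroundTheme
        ?.let { theme -> gallery.filter { it.backgroundTheme == theme && it.path != selected.path } }
        ?: emptyList()
```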
The embodiment of the present application further provides a readable storage medium, on which a program or an instruction is stored; when the program or the instruction is executed by a processor, each process of the foregoing multimedia file output method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor; the processor is configured to run a program or an instruction to implement each process of the foregoing multimedia file output method embodiment, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A multimedia file output method, characterized in that the method is applied to electronic equipment, wherein a screen of the electronic equipment comprises a first display area and a second display area; the method comprises the following steps:
receiving a first input to a target preset control under the condition of displaying the multimedia image; wherein the multimedia image comprises a target subject;
in response to the first input, determining a target background based on a target background theme associated with the target preset control;
receiving a second input of the user;
outputting a target multimedia file in response to the second input; the target multimedia file comprises the target subject and the target background;
under the condition that the multimedia image is displayed, before the first input to the target preset control is received, the method further comprises the following steps:
acquiring a second object meeting a second preset condition in the multimedia image based on a preset depth-of-field range; the second preset condition is associated with the preset depth of field range;
determining the second object as the target subject; the second object is displayed in the first display area;
before the determining the second object as the target subject, further comprising:
receiving a fourth input that the user drags the second object from the first display area to the second display area;
the determining the second object as the target subject includes:
determining the second object as the target subject in response to the fourth input;
the determining, in response to the first input, a target context based on a target context theme associated with the target preset control includes:
determining a first image based on the target background subject;
acquiring pose information of the electronic equipment;
and adjusting the scene information of the first image based on a correspondence between the pose information and the scene information, and determining the adjusted first image as the target background.
2. The method of claim 1, wherein before receiving the first input to the target preset control while the multimedia image is displayed, further comprising:
receiving a third input of the user to a first object in the multimedia image;
acquiring first depth-of-field information of the first object in response to the third input;
acquiring a second object meeting a first preset condition in the multimedia image; the first preset condition is associated with the first depth-of-field information;
determining the second object as the target subject.
3. The method of claim 2, wherein prior to determining the second object as the target subject, further comprising:
receiving a fifth input to the second object by the user;
in response to the fifth input, displaying an adjustment control associated with the second object; the adjustment control includes: a first control;
the first control is used for adjusting the area size of the second object and/or the position of the second object on the multimedia image.
4. The method of claim 1, wherein prior to receiving a fourth input by the user dragging the second object from the first display area to the second display area, further comprising:
receiving a sixth input to the second object by the user;
in response to the sixth input, segmenting the second object into a plurality of sub-objects according to the sliding track of the sixth input;
the receiving a fourth input that the user drags the second object from the first display area to the second display area includes:
receiving a fourth input of the user for dragging a target sub-object in the plurality of sub-objects from the first display area to the second display area;
the determining the second object as the target subject includes:
in response to the fourth input, determining the target sub-object as the target subject.
5. The method of claim 1, wherein the multimedia image is a preview image displayed in a capture preview interface or a preview video frame displayed in a video recording interface.
6. The method of claim 1, wherein the multimedia image is a preview video frame displayed in a video recording interface, and the target multimedia file is a video file;
a time adjustment axis is further displayed on the video recording interface;
after the adjusting of the scene information of the first image, the method further comprises:
receiving a seventh input to the time adjustment axis by the user;
in response to the seventh input, adjusting an effective time interval of the target scene information associated with the time adjustment axis in the video file.
7. The method of claim 1, wherein the multimedia image is a preview image displayed in a capture preview interface, and the target multimedia file is an image;
after the outputting the target multimedia file, the method further comprises:
receiving an eighth input of the target multimedia file by the user;
in response to the eighth input, a first background theme of the target multimedia file is determined and an image associated with the first background theme is displayed.
8. A multimedia file output device, characterized in that the device is applied to electronic equipment, wherein a screen of the electronic equipment comprises a first display area and a second display area; the device comprises:
a first receiving module, configured to receive a first input to a target preset control in the case of displaying a multimedia image; wherein the multimedia image comprises a target subject;
a first determination module, configured to determine, in response to the first input, a target background based on a target background theme associated with the target preset control;
a second receiving module, configured to receive a second input of the user;
an output module, configured to output a target multimedia file in response to the second input; the target multimedia file comprises the target subject and the target background;
further comprising:
a third acquisition module, configured to acquire, based on a preset depth-of-field range, a second object meeting a second preset condition in the multimedia image; wherein the second preset condition is associated with the preset depth-of-field range;
a third determination module for determining the second object as the target subject; the second object is displayed in the first display area;
further comprising:
a fourth receiving module, configured to receive a fourth input that the user drags the second object from the first display area to the second display area;
a second determination module to determine the second object as the target subject in response to the fourth input;
the first determination module specifically comprises:
a first determining unit, configured to determine a first image based on the target background theme;
a first acquisition unit configured to acquire pose information of the electronic device;
and a first adjustment unit, configured to adjust the scene information of the first image based on a correspondence between the pose information and the scene information, and to determine the adjusted first image as the target background.
9. The apparatus of claim 8, further comprising:
a third receiving module, configured to receive a third input of the user to a first object in the multimedia image;
a first obtaining module, configured to obtain first depth-of-field information of the first object in response to the third input;
a second acquisition module, configured to acquire a second object meeting a first preset condition in the multimedia image; wherein the first preset condition is associated with the first depth-of-field information;
a second determination module to determine the second object as the target subject.
10. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the multimedia file output method according to any one of claims 1 to 7.
11. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the multimedia file output method according to any one of claims 1 to 7.
CN202010733380.5A 2020-07-27 2020-07-27 Multimedia file output method and device, electronic equipment and readable storage medium Active CN111917979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010733380.5A CN111917979B (en) 2020-07-27 2020-07-27 Multimedia file output method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010733380.5A CN111917979B (en) 2020-07-27 2020-07-27 Multimedia file output method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111917979A CN111917979A (en) 2020-11-10
CN111917979B true CN111917979B (en) 2022-09-23

Family

ID=73281648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010733380.5A Active CN111917979B (en) 2020-07-27 2020-07-27 Multimedia file output method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111917979B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113271377B (en) * 2021-04-25 2023-12-22 维沃移动通信有限公司 Image processing method, device, electronic equipment and medium
CN113706723A (en) * 2021-08-23 2021-11-26 维沃移动通信有限公司 Image processing method and device
CN114584704A (en) * 2022-02-08 2022-06-03 维沃移动通信有限公司 Shooting method and device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100476828B1 (en) * 2004-09-24 2005-03-18 엠텍비젼 주식회사 Method and apparatus for creating compound image using a plurality of images
US20170171471A1 (en) * 2015-12-14 2017-06-15 Le Holdings (Beijing) Co., Ltd. Method and device for generating multimedia picture and an electronic device
CN105376496A (en) * 2015-12-14 2016-03-02 广东欧珀移动通信有限公司 Photographing method and device
CN105933532A (en) * 2016-06-06 2016-09-07 广东欧珀移动通信有限公司 Image processing method and device, and mobile terminal
CN111316627B (en) * 2017-09-06 2022-09-16 深圳传音通讯有限公司 Shooting method, user terminal and computer readable storage medium
TWI698117B (en) * 2018-08-07 2020-07-01 宏碁股份有限公司 Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus

Also Published As

Publication number Publication date
CN111917979A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111917979B (en) Multimedia file output method and device, electronic equipment and readable storage medium
CN112492209B (en) Shooting method, shooting device and electronic equipment
KR101557297B1 (en) 3d content aggregation built into devices
CN107580178B (en) Image processing method and device
US9781355B2 (en) Mobile terminal and control method thereof for displaying image cluster differently in an image gallery mode
WO2023151611A1 (en) Video recording method and apparatus, and electronic device
JP2023551264A (en) Photography methods, devices, electronic devices and storage media
CN108513641A (en) Unmanned plane filming control method, unmanned plane image pickup method, control terminal, unmanned aerial vehicle (UAV) control device and unmanned plane
CN112954210A (en) Photographing method and device, electronic equipment and medium
CN113794829B (en) Shooting method and device and electronic equipment
CN112637515B (en) Shooting method and device and electronic equipment
CN116156314A (en) Video shooting method and electronic equipment
CN113709377A (en) Method, device, equipment and medium for controlling aircraft to shoot rotation delay video
CN113905175A (en) Video generation method and device, electronic equipment and readable storage medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN108683847B (en) Photographing method, device, terminal and storage medium
CN104869283A (en) Shooting method and electronic equipment
DE102019133659A1 (en) Electronic device, control method, program and computer readable medium
WO2022161261A1 (en) Image display method and apparatus, and electronic device
WO2022262536A1 (en) Video processing method and electronic device
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN106488128B (en) Automatic photographing method and device
CN114071009B (en) Shooting method and equipment
CN113709376A (en) Method, device, equipment and medium for controlling aircraft to shoot rotating lens video
CN111522990A (en) Group sharing type photographing method, photographing device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant