CN115334242B - Video recording method, device, electronic equipment and medium - Google Patents
- Publication number
- CN115334242B (application CN202211003170.6A)
- Authority
- CN
- China
- Prior art keywords
- target
- input
- video
- preview window
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Television Signal Processing For Recording (AREA)
Abstract
The application discloses a video recording method, a video recording device, electronic equipment and a video recording medium, which belong to the technical field of shooting, wherein the video recording method comprises the following steps: receiving a first input of a user to a first preview window; in response to the first input, displaying a second preview window in the first display area and displaying a target editing window in the second display area; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining target image parameters corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window; generating a target video based on the first video and the second video; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
Description
Technical Field
The application belongs to the technical field of camera shooting, and particularly relates to a video recording method, a video recording device, electronic equipment and a medium.
Background
Generally, in a scenario where a user records a video with an electronic device, if the user wants to process some shooting objects in the recorded video, the user may trigger the electronic device to display an editing interface for the video and then perform multiple operations in that interface, so that the electronic device edits each video frame of the video to obtain the video the user requires.
However, since the user needs to edit the video frame by frame, the user's operations are cumbersome and time-consuming during the editing process.
As a result, the efficiency with which the electronic device edits the video is low.
Disclosure of Invention
The embodiments of the present application aim to provide a video recording method, a video recording device and electronic equipment, which can solve the problem of low efficiency in editing video.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides a video recording method, where the video recording method includes:
Receiving a first input of a user to a first preview window; in response to the first input, displaying a second preview window in the first display area and displaying a target editing window in the second display area; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining target image parameters corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window; generating a target video based on the first video and the second video; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
In a second aspect, an embodiment of the present application provides a video recording apparatus, including: the device comprises a receiving module, a display module and a processing module; the receiving module is used for receiving a first input of a user to the first preview window; the display module is used for responding to the first input received by the receiving module, displaying a second preview window in a first display area and displaying a target editing window in a second display area; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining target image parameters corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window; the processing module is used for generating a target video based on the first video and the second video displayed by the display module; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions for implementing the steps of the method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to carry out the steps of the method as in the first aspect.
In the embodiments of the present application, the electronic device can, in response to a first input of the user to a first preview window, display in a first display area a second preview window whose display content is associated with that of the first preview window, and display in a second display area a target editing window for editing the target object determined based on the first input, so as to generate a target video based on a first video whose content is the same as the display content of the second preview window and a second video in which the image parameter of the target object is the target image parameter. Since the electronic device can continue recording the first video in the second preview window according to the first input and edit the target object in the target editing window to generate the target video, the user only needs a single input on the target object in the first preview window for the electronic device to edit the target object and obtain the target video, without frame-by-frame editing after the video is recorded. This simplifies the user's operations when triggering the electronic device to edit the video, reduces the time consumed, and improves the efficiency with which the electronic device edits the video.
Drawings
Fig. 1 is a schematic diagram of a video recording method according to an embodiment of the present application;
FIG. 2 is a first diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 3 is a second diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 4 is a third diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 5 is a fourth diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 6 is a fifth diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 7 is a sixth diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 8 is a seventh diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 9 is an eighth diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 10 is a ninth diagram of an interface of a mobile phone according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application;
fig. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 13 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that the embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of one type, with the number of such objects not being limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The video recording method, the device, the electronic equipment and the medium provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
In the related art, in a scenario where a user records a video by using an electronic device, if the user wants to process some shooting objects in the recorded video, the user may trigger the electronic device to display an editing interface of the video, and then perform multiple operations in the editing interface, so that the electronic device may perform editing processing on each frame of video frame of the video, so as to obtain a video required by the user. However, since the user needs to perform multiple operations to make the electronic device perform editing processing on each frame of video frame of the video, the operation of the user is complicated and time-consuming in triggering the electronic device to perform editing processing on the video.
According to the embodiments of the present application, the electronic device can, in response to a first input of the user to a first preview window, display in a first display area a second preview window whose display content is associated with that of the first preview window, and display in a second display area a target editing window for editing the target object determined based on the first input, so as to generate a target video based on a first video whose content is the same as the display content of the second preview window and a second video containing the target object whose image parameter is the target image parameter. Since the electronic device can continue recording the video in the second preview window according to the first input to obtain the first video, and edit the target object in the target editing window to generate the target video, the user only needs a single input on the target object in the first preview window for the electronic device to edit the target object and obtain the target video, without the user editing the video frame by frame. This simplifies the user's operations when triggering the electronic device to edit the video, reduces the time consumed, and improves the efficiency with which the electronic device edits the video.
An embodiment of the present application provides a video recording method, and fig. 1 shows a flowchart of the video recording method provided by the embodiment of the present application, where the method may be applied to an electronic device. As shown in fig. 1, the video recording method provided in the embodiment of the present application may include the following steps 201 to 203.
Step 201, the electronic device receives a first input of a user to a first preview window.
Optionally, in the embodiment of the present application, in a case where the electronic device displays an interface of the target application, the electronic device may display a first preview window according to an input of a user, collect a video picture through a camera of the electronic device, and display the video picture in the first preview window. In the embodiment of the application, the electronic device can be a folding screen device, a scroll screen device or other electronic devices with large display screens.
Wherein the target application may comprise any one of the following: a shooting class application, an image processing class application, a chat class application, a web page class application, and the like. The electronic device may display the first preview window in all or part of the screen portion of the display screen of the electronic device, so that the electronic device may display the video frame in the first preview window.
In the embodiment of the application, the first preview window is used for displaying video pictures acquired by a camera of the electronic device in real time.
It can be understood that the video frame includes N objects, where N is a positive integer.
Optionally, in an embodiment of the present application, the N objects may include at least one of the following: characters, animals, scenery, etc.
In an embodiment of the present application, the first input includes: an input performed by the user on a target control in the preview interface where the first preview window is located, and an input performed by the user on at least one of the N objects in the first preview window.
Optionally, in an embodiment of the present application, the first input may include any one of the following: single click input, double click input, long press input, slide input, etc.
The target control may be used to initiate a recording editing function.
It should be noted that the "recording editing function" can be understood as follows: while the electronic device is capturing the video picture through the camera, the video picture can be edited.
Optionally, in the embodiment of the present application, the target control may be a "record editing" control.
Optionally, in the embodiment of the present application, in a case where the electronic device displays an interface of the target application, the interface includes at least one control, and the at least one control includes a target control, so that a user may make a first input to the target control.
For example, as shown in fig. 2, when the first preview window 06 is displayed on the display screen of the electronic device 01, the user triggers the recording editing function to start by performing an input on the recording editing control (i.e., the target control) 07.
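The following is a minimal Kotlin sketch, with hypothetical names (recordEditingControl, enterRecordEditingMode), of how a click-type first input on the "record editing" control could be received to start the recording editing function; the patent does not prescribe this API.

```kotlin
import android.view.View

// Hypothetical wiring of the target control ("record editing" control 07 in fig. 2).
// enterRecordEditingMode() stands for whatever splits the screen and shows the
// third preview window and initial editing window, as described in step 202a below.
fun bindRecordEditingControl(recordEditingControl: View, enterRecordEditingMode: () -> Unit) {
    recordEditingControl.setOnClickListener {
        // A single-click input on the target control triggers the recording editing function.
        enterRecordEditingMode()
    }
}
```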
Step 202, the electronic device responds to the first input, displays a second preview window in a first display area, and displays a target editing window in a second display area.
The display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining a target image parameter corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window.
Specifically, the second preview window is used for displaying a video picture acquired by the electronic device through the camera.
Optionally, in the embodiment of the present application, the second preview window is a preview window obtained after the user selects and inputs the target object in the first preview window.
Optionally, in the embodiment of the present application, the second preview window may either continue to display or cancel the display of the target object selected by the user.
Optionally, in an embodiment of the present application, the target image parameter may specifically be at least one of the following: brightness, contrast, color, saturation, etc. The target image parameter may be determined according to a parameter value input by a user.
Optionally, in an embodiment of the present application, the first display area is a first screen or a second screen. Further optionally, in the embodiment of the present application, if the electronic device has a display screen that folds left and right, then when the first screen is the left display screen, the second screen is the right display screen; and when the first screen is the right display screen, the second screen is the left display screen.
Optionally, in the embodiment of the present application, if the electronic device has a display screen that folds up and down, then when the first screen is the upper half of the display screen, the second screen is the lower half of the display screen; and when the first screen is the lower half of the display screen, the second screen is the upper half of the display screen.
An example of how the electronic device displays the second preview window in the first display area and the target edit window in the second display area according to the first input will be described below.
Optionally, in an embodiment of the present application, the first input includes a first sub-input and a second sub-input. Specifically, the above step 202 may be specifically implemented by the following steps 202a to 202 c.
In step 202a, the electronic device displays a third preview window in the first display area and an initial editing window in the second display area in response to the first sub-input.
The display content of the third preview window is the same as the display content of the first preview window.
Optionally, in the embodiment of the present application, the third preview window may be a preview window in which the first preview window is reduced according to a preset ratio. Wherein, the preset proportion can be 0.5.
Optionally, in the embodiment of the present application, the first sub-input may be an input of a target control in a preview interface where the first preview window is located by a user.
Optionally, in an embodiment of the present application, the first sub-input may include any one of the following: single click input, double click input, long press input, slide input, etc.
Optionally, in an embodiment of the present application, the target control may be used to initiate a recording editing function.
Specifically, the target control may be a "record editing" control.
Optionally, in the embodiment of the present application, the electronic device may divide the display screen into a first display area and a second display area, then display the third preview window in the first display area, and display the initial editing window in the second display area.
The electronic device can divide the display screen into a first display area and a second display area according to a preset proportion. Wherein, the preset proportion can be specifically 1:1, i.e. the size of the first display area and the size of the second display area may be the same.
After the electronic device divides the display screen into the first display area and the second display area, the electronic device may adjust the sizes of the first display area and the second display area according to a drag input of the user to the target edge line. Wherein the first display area and the second display area are both adjacent to the target edge line.
For example, the electronic device may increase the size of the first display area and reduce the size of the second display area in accordance with a seventh input by the user on the target edge line. Wherein the seventh input is a drag input toward the second display area.
Or the electronic device may reduce the size of the first display area and increase the size of the second display area according to the eighth input to the target edge line by the user. Wherein the eighth input is a drag input toward the first display area.
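As an illustration only, the following Kotlin sketch shows one way the display screen could be divided into the two display areas and then resized by a drag input on the shared target edge line; the view names and the LinearLayout-weight approach are assumptions, not the patent's required implementation.

```kotlin
import android.view.MotionEvent
import android.view.View
import android.widget.LinearLayout

// previewArea = first display area, editArea = second display area, dividerLine = target edge line.
// The container is assumed to be a horizontal LinearLayout holding both areas with 1:1 weights.
fun enableDividerDrag(
    previewArea: View,
    editArea: View,
    dividerLine: View,
    container: LinearLayout
) {
    dividerLine.setOnTouchListener { _, event ->
        if (event.action == MotionEvent.ACTION_MOVE) {
            // Fraction of the container width to the left of the touch point, kept within limits.
            val fraction = (event.rawX / container.width).coerceIn(0.2f, 0.8f)
            // Dragging toward the second display area enlarges the first area, and vice versa.
            (previewArea.layoutParams as LinearLayout.LayoutParams).weight = fraction
            (editArea.layoutParams as LinearLayout.LayoutParams).weight = 1f - fraction
            container.requestLayout()
        }
        true
    }
}
```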
In the embodiment of the present application, the initial editing window is specifically an editing window that does not display an editing object.
For example, as shown in fig. 3, the electronic device 01 receives the user's first sub-input on the recording editing control (i.e., the target control) 07 and, in response to the first sub-input, divides the display screen into a left display screen and a right display screen. The left display screen is used for displaying the third preview window 02 and the right display screen is used for displaying the initial editing window 08. If the user needs to edit an object in the video frame, the object can be dragged directly from the third preview window 02 on the left display screen to the initial editing window 08 on the right display screen, so as to edit the object.
Step 202b, the electronic device receives a second sub-input from the user to the target object in the third preview window.
Optionally, in the embodiment of the present application, the second sub-input may be an input performed by the user on the target object in the third preview window that moves the target object into the initial editing window.
Further alternatively, in an embodiment of the present application, the second sub-input may include any one of the following: single click input, double click input, long press input, slide input, drag input, and the like.
It should be noted that the target object may be an object selected by a user.
Specifically, the target object may include at least one object. In the case where the target object is a single object, the second sub-input may be a drag input performed by the user on that object. In the case where the target object is a plurality of objects, the second sub-input may be a continuous input performed by the user on the plurality of objects in sequence, followed by a drag input.
For example, as shown in fig. 4, in the case where the target object is the building 03 (i.e., a single target object), the electronic device moves the building 03 from the third preview window 02 into the initial editing window 08 for editing in response to the user's drag input on the building 03. As shown in fig. 5, in the case where the target objects are the building 03 and the girl 05 (i.e., a plurality of target objects), the electronic device determines them as target objects in response to the user's continuous click inputs on the building 03 and the girl 05, and moves the building 03 and the girl 05 from the third preview window 02 into the initial editing window 08 for editing in response to a drag input on the building 03 or the girl 05 (i.e., one of the target objects) or on any position in the first preview interface.
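The drag of a selected object from the third preview window into the initial editing window could, for example, be handled with the platform drag-and-drop mechanism; the sketch below is illustrative only, and objectView, editingWindow and onDropped are assumed names rather than elements defined by the patent.

```kotlin
import android.content.ClipData
import android.view.DragEvent
import android.view.View

fun attachObjectDrag(objectView: View, editingWindow: View, onDropped: (View) -> Unit) {
    // A long press on the marked object in the third preview window starts the drag.
    objectView.setOnLongClickListener { v ->
        v.startDragAndDrop(ClipData.newPlainText("object", "target"), View.DragShadowBuilder(v), v, 0)
        true
    }
    // The initial editing window accepts the drop and reports the dragged object view,
    // e.g. so that its thumbnail can be shown in the target editing window (step 202c).
    editingWindow.setOnDragListener { _, event ->
        if (event.action == DragEvent.ACTION_DROP) {
            onDropped(event.localState as View)
        }
        true
    }
}
```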
Step 202c, in response to the second sub-input, the electronic device updates the third preview window to the second preview window in the first display area, and updates the initial editing window to the target editing window in the second display area.
Wherein the target editing window includes a thumbnail of the target object.
Optionally, in an embodiment of the present application, the display content of the second preview window is determined based on a user input to a target object in the third preview window.
Optionally, in the embodiment of the present application, the electronic device obtains the second preview window and the target editing window according to the user's input of selecting the target object from the N objects in the third preview window and moving it into the initial editing window.
In this way, the electronic device can display the second preview window in the first display area and the target editing window in the second display area, so that the target object in the video picture captured in real time by the camera can be edited without interrupting the video recording, i.e., without the user exiting the current video recording, thereby improving the efficiency with which the electronic device edits the video.
Step 203, the electronic device generates a target video based on the first video and the second video.
The video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
Optionally, in the embodiment of the present application, the first video may be a video acquired by the electronic device through a camera, and the first video content may or may not include a target object.
It is understood that the electronic device can synthesize the target video based on the first video and the second video.
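A hedged sketch of one possible synthesis is given below: each frame of the first video is overlaid with the corresponding frame of the second video (the edited target object) at a chosen display position. Frame, the position parameters and the one-to-one frame alignment are illustrative assumptions; encoding and decoding are assumed to happen elsewhere.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Hypothetical decoded-frame representation; real code would work with an encoder/decoder.
data class Frame(val bitmap: Bitmap, val timestampMs: Long)

fun composeTargetVideo(
    firstVideo: List<Frame>,     // same content as the second preview window
    secondVideo: List<Frame>,    // target object rendered with the target image parameters
    positionX: Float,            // target display position of the object in the picture
    positionY: Float
): List<Frame> = firstVideo.mapIndexed { i, base ->
    val out = base.bitmap.copy(Bitmap.Config.ARGB_8888, true)
    secondVideo.getOrNull(i)?.let { overlay ->
        // Draw the edited target object over the base frame at the chosen position.
        Canvas(out).drawBitmap(overlay.bitmap, positionX, positionY, null)
    }
    Frame(out, base.timestampMs)
}
```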
According to the video recording method provided by the embodiment of the present application, the electronic device can, in response to the first input of the user to the first preview window, display in the first display area the second preview window associated with the first preview window, and display in the second display area the target editing window for editing the target object determined based on the first input, so as to generate the target video based on the first video whose content is the same as the display content of the second preview window and the second video in which the image parameter of the target object is the target image parameter. Since the electronic device can continue recording the video in the second preview window according to the first input to obtain the first video, and edit the target object in the target editing window to generate the target video, the user only needs a single input on the target object in the first preview window for the electronic device to edit the target object and obtain the target video, without the user editing the video frame by frame. This simplifies the user's operations when triggering the electronic device to edit the video, reduces the time consumed, and improves the efficiency with which the electronic device edits the video.
The display states of the N objects in the third preview window are updated as will be exemplified below.
Optionally, in the embodiment of the present application, after the third preview window is displayed in the first display area in the above step 202a, the video recording method provided in the embodiment of the present application may be further implemented through steps 301 to 302 described below.
Step 301, the electronic device receives a second input from a user to the third preview window.
Optionally, in an embodiment of the present application, the preview screen displayed in the third preview window includes N objects.
Optionally, in an embodiment of the present application, the N objects may include: characters, animals, scenery, etc.
Optionally, in the embodiment of the present application, the second input may be an input of a user to an arbitrary position in the third preview window.
Optionally, in an embodiment of the present application, the second input may include any one of the following: single click input, double click input, long press input, slide input, etc.
Step 302, the electronic device updates the display states of the N objects in response to the second input.
Wherein the N objects are shooting objects in the third preview window, the N objects include the target object, and N is a positive integer.
It should be noted that, according to the second input of the user, the electronic device identifies N objects first, and then updates the display states of the N objects.
Optionally, in the embodiment of the present application, updating the display state may specifically be marking the N objects.
Optionally, the marking manner includes at least one of the following: dotted-line box marking, highlight marking, shading marking, color marking, and the like.
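As one concrete illustration of the dotted-line box marking listed above, the following sketch draws a dashed rectangle around each recognized object on an overlay canvas; the object rectangles are assumed to come from whatever recognition step the device performs, and the styling values are arbitrary.

```kotlin
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.DashPathEffect
import android.graphics.Paint
import android.graphics.RectF

fun markObjects(canvas: Canvas, objectBounds: List<RectF>) {
    val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.YELLOW
        pathEffect = DashPathEffect(floatArrayOf(12f, 8f), 0f) // dotted-line box marking
    }
    // One dashed box per recognized object in the third preview window.
    objectBounds.forEach { canvas.drawRect(it, paint) }
}
```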
The following illustrates how the electronic device displays N identifiers corresponding to the N objects in the third preview window.
Optionally, in the embodiment of the present application, after the step 301, the video recording method in the embodiment of the present application may be further implemented by the following step 401, and before the step 202b, the video recording method in the embodiment of the present application may be further implemented by the following steps 501 to 502.
In step 401, the electronic device displays N object identifiers in corresponding areas of the N objects in response to the second input.
Wherein the N object identifiers have association relations with the N objects.
Optionally, in an embodiment of the present application, the N object identifiers are in one-to-one correspondence with the N objects.
Optionally, in the embodiment of the present application, in the case that the N object identifiers include N names and N thumbnails, the electronic device may perform recognition to determine the N objects, and display the N numbers and N thumbnails of the N objects.
After the electronic device determines N objects, the electronic device may identify the N objects to obtain N names of the N objects, and generate N thumbnails according to the N objects.
Specifically, the electronic device may identify N objects to obtain feature information of the N objects, and then determine N names corresponding to the feature information of the N objects to obtain N names of the N objects.
After the electronic device generates the N names and the N thumbnails, the electronic device may display, in a floating manner, the N names, the N thumbnails and their respective corresponding selection boxes over the screen areas where the N objects are located.
As shown in fig. 5 and 6, when three objects, namely, a building 03, a boy 04 and a girl 05, are displayed on the third preview window 02 of the electronic device 01, the electronic device 01 recognizes the three objects according to a click input performed by a user on any position of the third preview window 02, and displays names, numbers and respective corresponding selection boxes of the three objects, respectively, and no thumbnail is displayed in the drawing. The selection box is used for selecting the object by a user. The user may select at least one object by clicking on a selection box of the at least one object.
Step 501, the electronic device receives a third input of a target identifier from the N object identifiers by a user.
Optionally, in an embodiment of the present application, the target identifier may be at least one object identifier.
Optionally, in the embodiment of the present application, the third input is used for a selection input of a selection box corresponding to at least one object of the N objects.
Further alternatively, in an embodiment of the present application, the third input may include any one of the following: single click input, double click input, long press input, slide input, etc.
Step 502, the electronic device responds to the third input to enable the target object corresponding to the target identifier to be in a selected state.
It is to be understood that, according to the input of the user to the selection box corresponding to the at least one object identifier, the electronic device makes the target object corresponding to the at least one object identifier be in the selected state.
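A minimal sketch of this selection bookkeeping is shown below; RecognizedObject, its id field and the toggle behaviour are hypothetical illustrations of how the third input could place an object in the selected state.

```kotlin
// Hypothetical model of a recognized object and its identifier/selection state.
data class RecognizedObject(
    val id: Int,               // object identifier (number shown next to the object)
    val name: String,          // recognized name, e.g. "building" or "girl"
    var selected: Boolean = false
)

// Third input: tapping the selection box of an identifier toggles the selected state
// of the corresponding object, which then becomes (or stops being) a target object.
fun onSelectionBoxTapped(objects: List<RecognizedObject>, tappedId: Int) {
    objects.find { it.id == tappedId }?.let { it.selected = !it.selected }
}
```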
Therefore, the electronic equipment can determine the target object according to the selection input of the user on the target identifier, so that the target object can be determined according to the user requirement, and the efficiency of editing the video by the electronic equipment is improved.
How the electronic device edits the target object in the editing window to generate a second video is illustrated below.
Optionally, in the embodiment of the present application, after "displaying the target editing window in the second display area" in the above step 202, the video recording method in the embodiment of the present application may be further implemented through the following steps 601 to 603.
Step 601, the electronic device receives a fourth input of a user to the target editing window.
Optionally, in an embodiment of the present application, the target editing window is a preview window for editing a target object. Specifically, the editing may specifically be adjusting the image parameters of the target object.
Further optionally, in an embodiment of the present application, the image parameter may specifically be at least one of the following: brightness, contrast, color, saturation, etc.
Optionally, in an embodiment of the present application, the fourth input is used for editing a target object of the target editing window.
Optionally, in an embodiment of the present application, the fourth input may include any one of the following: single click input, double click input, long press input, slide input, etc.
It can be appreciated that after the target object is moved from the third preview window to the target editing window, the electronic device may display at least one first control in the target editing window, where each first control corresponds to one image-parameter processing manner, so that the user may perform the fourth input on a second control among the at least one first control, and the electronic device can then process the image parameter of the target object accordingly. Optionally, in an embodiment of the present application, the at least one first control includes at least one of the following: a brightness control, a contrast control, a color control, a saturation control, and the like.
In step 602, the electronic device adjusts the image parameter corresponding to the target object to the target image parameter in response to the fourth input.
For example, as shown in fig. 7, in the target editing window 08, the electronic device 01 adjusts the image parameters of the building 03 in response to the user's input on the editing controls; specifically, the image parameters of the building 03 can be adjusted according to at least one target parameter value input by the user for brightness, contrast, color, saturation, etc., so as to obtain the target image parameters corresponding to the building 03.
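One possible way to realize such brightness/contrast/saturation adjustment of the target object's image is a color-matrix filter; the sketch below is illustrative only, and the parameter ranges in the comments are assumptions rather than values taken from the patent.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.ColorMatrix
import android.graphics.ColorMatrixColorFilter
import android.graphics.Paint

fun applyTargetImageParameters(
    objectBitmap: Bitmap,
    brightness: Float,   // added to each RGB channel, e.g. -50f..50f (assumed range)
    contrast: Float,     // scale factor, e.g. 0.5f..2f (assumed range)
    saturation: Float    // 0f = grayscale, 1f = unchanged
): Bitmap {
    val matrix = ColorMatrix().apply {
        setSaturation(saturation)
        // 4x5 matrix: scale RGB by contrast, offset by brightness, keep alpha.
        postConcat(ColorMatrix(floatArrayOf(
            contrast, 0f, 0f, 0f, brightness,
            0f, contrast, 0f, 0f, brightness,
            0f, 0f, contrast, 0f, brightness,
            0f, 0f, 0f, 1f, 0f
        )))
    }
    val out = Bitmap.createBitmap(objectBitmap.width, objectBitmap.height, Bitmap.Config.ARGB_8888)
    val paint = Paint()
    paint.setColorFilter(ColorMatrixColorFilter(matrix))
    Canvas(out).drawBitmap(objectBitmap, 0f, 0f, paint)
    return out
}
```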
Step 603, the electronic device generates the second video based on the target image parameter.
Optionally, in the embodiment of the present application, the second video may be obtained by recording based on the target image parameter, or the image parameter of the image area where the target object is located in all the recorded video frames may be adjusted based on the target image parameter to obtain the second video.
Therefore, the electronic device can adjust the image parameter values of the target object in all the video frames according to the user's input of the image parameters corresponding to the target object, so that the user can adjust the image parameters of the target object in the video picture as needed during video recording without multiple operations, thereby simplifying the process of recording a video that meets the user's needs.
The electronic device will be illustrated below as to how the adjusted target object is displayed in the second preview window.
Optionally, in the embodiment of the present application, specifically, in the case where the second preview window cancels the display of the target object, after "the image parameter corresponding to the target object is adjusted to the target image parameter" in the above step 602, the video recording method in the embodiment of the present application may also be implemented through the following steps 701 to 702.
Step 701, the electronic device receives a fifth input of a user to the target editing window.
Optionally, in an embodiment of the present application, the fifth input is an input for moving the target object from the target editing window to the target display position in the second preview window.
Optionally, in an embodiment of the present application, the fifth input may include any one of the following: single click input, double click input, long press input, slide input, etc. In an embodiment of the present application, the target display position is determined according to a fifth input.
Optionally, in the embodiment of the present application, the target display position may be a position input by the user by clicking in the second preview window, or may be the end position at which the user drags the target object from the target editing window into the second preview window. It is understood that, when the user performs an input at an arbitrary position in the second preview window, the position corresponding to that input is determined as the display position of the target object; or, when the user drags the target object from the target editing window into the second preview window, the end position of that drag input is determined as the display position of the target object.
It should be noted that the target display position may be any position in the second preview window.
In response to the fifth input, the electronic device displays the target object in a second preview window, step 702.
Wherein the display parameters of the target object are determined based on the target image parameters.
Optionally, in the embodiment of the present application, the electronic device moves the target object from the target editing window to the target display position in the second preview window according to the target display position determined by the user input, so as to display the target object in the second preview window.
For example, as shown in fig. 8 and 9, after the user performs an input on the restore control, a one-touch restore mode and a custom restore mode are displayed. If the user selects the one-touch restore mode, the electronic device moves the building 03 back to its original display position in the second preview window. As shown in fig. 9, if the user selects the custom restore mode, the user can drag the building 03 to the right side of the girl 05 in the second preview window 02, so that the user can adjust the display position of the building 03 in the second preview window as needed.
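The two restore modes in this example could be represented as follows; RestoreMode, the PointF positions and the fallback behaviour are hypothetical, and the patent only requires that the target display position be determined from the user's input.

```kotlin
import android.graphics.PointF

enum class RestoreMode { ONE_TOUCH, CUSTOM }

// "One-touch restore" puts the edited object back at its original position in the
// second preview window; the custom mode uses the end position of the user's drag.
fun resolveDisplayPosition(
    mode: RestoreMode,
    originalPosition: PointF,      // where the object was before it was moved out
    draggedPosition: PointF?       // end position of the user's drag, if any
): PointF = when (mode) {
    RestoreMode.ONE_TOUCH -> originalPosition
    RestoreMode.CUSTOM -> draggedPosition ?: originalPosition
}
```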
Specifically, in the case that the display of the target object is not canceled in the second preview window, the electronic device may synchronously adjust the display parameters of the target object in the second preview window according to the target image parameters of the target object in the target editing window.
Therefore, the electronic device displays the target object of the target image parameter in the second preview window according to the fifth input of the user, so that the user can directly adjust the image parameter of the target object in the second preview window in the process of recording the video by the electronic device, and the process of adjusting the image parameter of the target object in the second preview window by the electronic device is simplified.
Optionally, in the embodiment of the present application, after the step 203, the video recording method in the embodiment of the present application may be further implemented through the following steps 801 to 802.
Step 801, the electronic device receives a sixth input from the user to the target editing window.
Step 802, the electronic device displays a time control in response to a sixth input.
The time control is used for determining the starting time and the ending time of a target video segment in a target video, the target video segment is a video segment containing a target object, and the image parameter corresponding to the target object in the target video segment is a target image parameter.
Optionally, in an embodiment of the present application, the target video segment is at least one video segment in the target video.
The target video clip indicates a video clip that updates the image parameter corresponding to the target object to the target image parameter.
Optionally, in an embodiment of the present application, the video content corresponding to the second video includes the video content of the target video segment, and the video content corresponding to the first video does not include the video content of the target video segment.
The start time indicates, for example, a start time of updating the image parameter corresponding to the target object to the target image parameter. The end time indicates a time when updating of the image parameter corresponding to the target object to the target image parameter is stopped.
It should be noted that, the period from the start time to the end time may be a period in which the image parameter corresponding to the target object is updated to the target image parameter. The time period may be automatically controlled by the electronic device according to a preset time period (i.e., an automatic mode), or may be readjusted according to a user's need (i.e., a custom mode).
For example, as shown in fig. 10, if the user selects the custom mode, the electronic device may adjust the display time of the edited target object in the second preview window according to the sliding input of the user to the time control.
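A minimal sketch of how the start time and end time chosen with the time control could bound the target video segment is given below; TimedFrame and the millisecond timestamps are assumptions for illustration, and only frames inside the chosen period would carry the target image parameters.

```kotlin
// Hypothetical per-frame timing record.
data class TimedFrame(val index: Int, val timestampMs: Long)

// Frames whose timestamps fall within [startMs, endMs] belong to the target video segment,
// i.e. the segment in which the target object is shown with the target image parameters.
fun framesInTargetSegment(frames: List<TimedFrame>, startMs: Long, endMs: Long): List<TimedFrame> =
    frames.filter { it.timestampMs in startMs..endMs }
```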
Therefore, the starting time and the ending time can be adjusted through the user input, so that the display time period of the target object in the second preview window can be adjusted, that is, the time period of the target object disappearing from the second preview window can be adjusted according to the user requirement, and therefore the flexibility of displaying the target object on the video picture of the second preview window can be improved.
It should be noted that, in the video recording method provided by the embodiment of the present application, the execution subject may be a video recording device. In the embodiment of the present application, a method for executing video recording by a video recording device is taken as an example, and the video recording device provided by the embodiment of the present application is described.
Fig. 11 shows a schematic diagram of a possible configuration of a video recording apparatus according to an embodiment of the present application. As shown in fig. 11, the video recording apparatus 70 may include: a receiving module 71, a display module 72 and a processing module 73.
The receiving module 71 is configured to receive a first input from a user to the first preview window.
The display module 72 is configured to display a second preview window in the first display area and a target editing window in the second display area in response to the first input received by the receiving module 71; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining a target image parameter corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window.
The processing module 73 is configured to generate a target video based on the first video and the second video displayed by the display module 72; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
In one possible implementation, the first input includes a first sub-input and a second sub-input, and the video recording apparatus further includes: an updating module 74.
The display module 72 is specifically configured to display a third preview window in the first display area and display an initial editing window in the second display area in response to the first sub-input received by the receiving module 71, where a display content of the third preview window is the same as a display content of the first preview window.
The receiving module 71 is specifically configured to receive a second sub-input of the target object in the third preview window displayed by the display module 72 by the user.
The updating module 74 is configured to update the third preview window to the second preview window in the first display area displayed in the display module 72 and update the initial editing window to the target editing window in the second display area in response to the second sub-input, where the target editing window includes a thumbnail of the target object.
In a possible implementation manner, the receiving module 71 is further configured to receive a second input from the user to the third preview window.
The updating module 74 is further configured to update the display states of the N objects in response to the second input.
The N objects are shooting objects in the third preview window, the N objects include the target object, and N is a positive integer.
In a possible implementation manner, the display module 72 is further configured to display N identifiers in corresponding areas of the N objects in response to the second input received by the receiving module, where the N identifiers have an association relationship with the N objects.
The receiving module 71 is further configured to receive a third input of the target identifier from the N identifiers displayed by the display module 72.
The processing module 73 is further configured to, in response to the third input, enable the target object corresponding to the target identifier to be in the selected state.
In one possible implementation, the video recording apparatus further includes: an adjustment module 75.
The receiving module 71 is further configured to receive a fourth input from the user on the target editing window.
The adjustment module 75 is configured to adjust the image parameter corresponding to the target object to the target image parameter in response to the fourth input received by the receiving module 71.
The processing module 73 is further configured to generate a second video based on the target image parameter adjusted by the adjusting module 75.
In one possible implementation, the display of the target object is canceled in the second preview window.
The receiving module 71 is configured to receive a fifth input from a user to the target editing window;
the display module 72 is further configured to display a target object in the second preview window in response to the fifth input received by the receiving module, where a display parameter of the target object is determined based on the target image parameter.
In a possible implementation manner, the receiving module 71 is further configured to receive a sixth input from the user on the target editing window.
The display module 72 is further configured to display, in response to the sixth input received by the receiving module 71, a time control, where the time control is used to determine a start time and an end time of a target video segment in the target video, the target video segment is a video segment including a target object, and an image parameter corresponding to the target object in the target video segment is a target image parameter.
The embodiment of the present application provides a video recording apparatus which can continue recording the video in the second preview window according to the first input of the user to the first preview window to obtain the first video, and edit the target object in the target editing window to generate the target video. Therefore, the user only needs a single input on the target object in the first preview window for the video recording apparatus to edit the target object and obtain the target video, without the user editing the video frame by frame. This simplifies the user's operations when triggering the video recording apparatus to edit the video, reduces the time consumed, and improves the efficiency with which the video recording apparatus edits the video.
The video recording apparatus in the embodiment of the application may be a device, or may be a component, an integrated circuit or a chip in an electronic device. The device may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook or a personal digital assistant (Personal Digital Assistant, PDA), etc., and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (TV), a teller machine, a self-service machine, etc., which are not specifically limited in the embodiments of the present application.
The video recording apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The video recording apparatus provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 10 to achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
Optionally, as shown in fig. 12, the embodiment of the present application further provides an electronic device 80, including a processor 81 and a memory 82, where the memory 82 stores a program or instructions that can be executed on the processor 81, and the program or instructions implement the steps of the embodiment of the video recording method when executed by the processor 81, and achieve the same technical effects, so that repetition is avoided and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein the user input unit 107 is configured to receive a first input of a first preview window from a user.
A display unit 106 for displaying a second preview window in the first display area and displaying a target editing window in the second display area in response to the first input; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining a target image parameter corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window.
The processor 110 is configured to generate a target video based on the first video and the second video; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing a target object, and the image parameter corresponding to the target object in the second video is a target image parameter.
The embodiment of the present application provides an electronic device which can continue recording the first video in the second preview window according to the first input of the user to the first preview window, and edit the target object in the target editing window to generate the target video. Therefore, the user only needs a single input on the target object in the first preview window for the electronic device to edit the target object and obtain the target video, without frame-by-frame editing after the video is recorded. This simplifies the user's operations when triggering the electronic device to edit the video, reduces the time consumed, and improves the efficiency with which the electronic device edits the video.
Optionally, in an embodiment of the present application, the first input includes a first sub-input and a second sub-input.
The display unit 106 is specifically configured to display a third preview window in the first display area and display an initial editing window in the second display area in response to the first sub-input, where the display content of the third preview window is the same as the display content of the first preview window.
The user input unit 107 is specifically configured to receive a second sub-input of the target object in the third preview window by the user.
The processor 110 is further configured to update the third preview window to a second preview window in the first display area and update the initial editing window to a target editing window in the second display area in response to the second sub-input, the target editing window including a thumbnail of the target object.
Optionally, in an embodiment of the present application, the user input unit 107 is further configured to receive a second input from the user on the third preview window.
The processor 110 is further configured to update display states of N objects in response to the second input, where the N objects are shooting objects in the third preview window, the N objects include the target object, and N is a positive integer.
Optionally, in an embodiment of the present application, the display unit 106 is further configured to display, in response to the second input, N identifiers in areas corresponding to the N objects, where the N identifiers have an association relationship with the N objects.
The user input unit 107 is further configured to receive a third input of a target identifier of the N identifiers by a user.
The processor 110 is further configured to, in response to the third input, place the target object corresponding to the target identifier in the selected state.
In this way, the electronic device can determine the target object based on the user's selection of the target identifier, so that the target object is determined according to the user's needs and the efficiency with which the electronic device edits the video is improved.
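For illustration only, the following sketch (hypothetical names) shows one way the identifier-to-object association and the selection by the third input could be modeled.

```kotlin
// Hypothetical sketch: N identifiers mapped to N shooting objects; a third
// input on one identifier places the associated object in the selected state.
data class ShootingObject(val name: String, var selected: Boolean = false)

class ObjectPicker(objects: List<ShootingObject>) {
    // One identifier per object, keyed "id-0", "id-1", ...
    private val byIdentifier: Map<String, ShootingObject> =
        objects.mapIndexed { i, obj -> "id-$i" to obj }.toMap()

    fun identifiers(): Set<String> = byIdentifier.keys

    // Third input: select the object associated with the tapped identifier.
    fun onThirdInput(targetIdentifier: String): ShootingObject? =
        byIdentifier[targetIdentifier]?.also { it.selected = true }
}

fun main() {
    val picker = ObjectPicker(listOf(ShootingObject("person"), ShootingObject("dog")))
    println(picker.identifiers())         // [id-0, id-1]
    println(picker.onThirdInput("id-1"))  // ShootingObject(name=dog, selected=true)
}
```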
Optionally, in an embodiment of the present application, the user input unit 107 is further configured to receive a fourth input from the user on the target editing window.
The processor 110 is configured to adjust an image parameter corresponding to the target object to a target image parameter in response to the fourth input.
The processor 110 is further configured to generate a second video based on the target image parameter.
In this way, the electronic device can adjust the image parameter of the target object in all video frames based on a single user input of the image parameter corresponding to the target object. The user can therefore adjust the image parameter of the target object in the video picture as needed during recording, without multiple operations, which simplifies the process of recording a video that meets the user's needs.
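As a minimal, purely illustrative sketch (hypothetical names), the fourth input can be thought of as fixing the target image parameters once, after which the device renders the target object with those parameters in every frame of the second video.

```kotlin
// Hypothetical sketch: apply the target image parameters to the target object
// in every frame of the second video, rather than editing frame by frame.
data class ImageParams(val brightness: Int, val saturation: Int)

data class ObjectFrame(val index: Int, val objectName: String, val params: ImageParams)

fun renderSecondVideo(
    frameCount: Int,
    targetObject: String,
    targetParams: ImageParams
): List<ObjectFrame> =
    (0 until frameCount).map { ObjectFrame(it, targetObject, targetParams) }

fun main() {
    val secondVideo = renderSecondVideo(3, "person#2", ImageParams(brightness = 20, saturation = -5))
    secondVideo.forEach(::println)
}
```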
Optionally, in an embodiment of the present application, in a case where the display of the target object has been canceled in the second preview window:
The user input unit 107 is further configured to receive a fifth input from the user on the target editing window.
The display unit 106 is further configured to display the target object in the second preview window in response to the fifth input, where the display parameter of the target object is determined based on the target image parameter.
In this way, the electronic device displays the target object with the target image parameter in the second preview window according to the fifth input of the user, so that the user can adjust the image parameter of the target object directly in the second preview window while the electronic device is recording the video, which simplifies the process of adjusting the image parameter of the target object in the second preview window.
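A brief illustrative sketch (hypothetical names) of this fifth-input behavior: when the target object is hidden in the second preview window, the input brings it back with display parameters derived from the target image parameters.

```kotlin
// Hypothetical sketch: the fifth input re-displays a hidden target object
// using parameters derived from the target image parameters.
data class PreviewState(val showsTargetObject: Boolean, val displayParams: String?)

fun onFifthInput(state: PreviewState, targetImageParams: String): PreviewState =
    if (state.showsTargetObject) state
    else PreviewState(showsTargetObject = true, displayParams = targetImageParams)

fun main() {
    val hidden = PreviewState(showsTargetObject = false, displayParams = null)
    println(onFifthInput(hidden, "brightness=+20, scale=120%"))
}
```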
Optionally, in an embodiment of the present application, the user input unit 107 is further configured to receive a sixth input from the user on the target editing window.
The display unit 106 is further configured to display, in response to the sixth input, a time control, where the time control is used to determine a start time and an end time of a target video segment in the target video, the target video segment is a video segment including a target object, and an image parameter corresponding to the target object in the target video segment is a target image parameter.
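By way of illustration only (hypothetical names), the time control can be modeled as a start/end pair, and the target image parameters are applied to the target object only for frames whose timestamps fall inside that segment of the target video.

```kotlin
// Hypothetical sketch: a time control selecting the segment in which the
// edited target object (with the target image parameters) is used.
data class TimeControl(val startMs: Long, val endMs: Long) {
    init { require(startMs in 0..endMs) { "start must not exceed end" } }
    fun contains(timestampMs: Long): Boolean = timestampMs in startMs..endMs
}

// Decide, frame by frame, whether the edited target object should be used.
fun useEditedObject(frameTimestampsMs: List<Long>, control: TimeControl): List<Boolean> =
    frameTimestampsMs.map(control::contains)

fun main() {
    val control = TimeControl(startMs = 2_000, endMs = 5_000)
    println(useEditedObject(listOf(0L, 2_500L, 6_000L), control))  // [false, true, false]
}
```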
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the foregoing method embodiment; to avoid repetition, details are not repeated here.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The embodiment of the application further provides a readable storage medium, where the readable storage medium stores a program or an instruction which, when executed by a processor, implements each process of the above method embodiment and can achieve the same technical effects; to avoid repetition, details are not described herein again.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above method embodiment and achieve the same technical effects; to avoid repetition, details are not described herein again.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-a-chip.
Embodiments of the present application further provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement each process of the above video recording method embodiment and achieve the same technical effects; to avoid repetition, details are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or alternatively by hardware, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application may be embodied, in essence or in part, in the form of a computer software product stored on a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described specific embodiments, which are merely illustrative rather than restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.
Claims (16)
1. A method of video recording, the method comprising:
receiving a first input of a user to a first preview window;
In response to the first input, displaying a second preview window in a first display area and displaying a target editing window in a second display area; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining a target image parameter corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window;
Generating a target video based on the first video and the second video; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing the target object, and the image parameter corresponding to the target object in the second video is the target image parameter.
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input; the displaying, in response to the first input, a second preview window in a first display area and a target editing window in a second display area, includes:
In response to the first sub-input, displaying a third preview window in the first display area and displaying an initial editing window in the second display area, wherein the display content of the third preview window is the same as the display content of the first preview window;
receiving a second sub-input of a user to the target object in the third preview window;
and in response to the second sub-input, updating the third preview window to the second preview window in the first display area, and updating the initial editing window to the target editing window in the second display area, wherein the target editing window comprises a thumbnail of the target object.
3. The method of claim 2, wherein after the third preview window is displayed in the first display area, the method further comprises:
receiving a second input of a user to the third preview window;
updating display states of the N objects in response to the second input;
the N objects are shooting objects in the third preview window, the N objects include the target object, and N is a positive integer.
4. A method according to claim 3, wherein after the receiving the second input by the user to the third preview window, the method further comprises:
responding to the second input, displaying N identifications in corresponding areas of the N objects, wherein the N identifications and the N objects have an association relation;
before the receiving the second sub-input of the target object in the third preview window by the user, the method further includes:
Receiving a third input of a user to a target identifier in the N identifiers;
And responding to the third input, and enabling the target object corresponding to the target identifier to be in a selected state.
5. The method of claim 1, wherein after the displaying the target edit window in the second display area, the method further comprises:
receiving a fourth input of a user to the target editing window;
Responsive to the fourth input, adjusting an image parameter corresponding to the target object to the target image parameter;
the second video is generated based on the target image parameters.
6. The method according to claim 5, wherein in a case where the second preview window cancels the display of the target object, after the adjusting the image parameter corresponding to the target object to the target image parameter, the method further comprises:
receiving a fifth input of a user to the target editing window;
In response to the fifth input, the target object is displayed in the second preview window, display parameters of the target object being determined based on the target image parameters.
7. The method of claim 1, wherein prior to generating the target video based on the first video and the second video, the method further comprises:
Receiving a sixth input of a user to the target editing window;
And responding to the sixth input, displaying a time control, wherein the time control is used for determining the starting time and the ending time of a target video segment in a target video, the target video segment is a video segment containing the target object, and the image parameter corresponding to the target object in the target video segment is the target image parameter.
8. A video recording apparatus, the video recording apparatus comprising: the device comprises a receiving module, a display module and a processing module;
The receiving module is used for receiving a first input of a user to the first preview window;
The display module is used for responding to the first input received by the receiving module, displaying a second preview window in a first display area and displaying a target editing window in a second display area; the display content of the second preview window has an association relationship with the display content of the first preview window, the target editing window is used for determining a target image parameter corresponding to a target object, the target object is determined based on the first input, and the target object is a shooting object in the first preview window;
The processing module is used for generating a target video based on the first video and the second video displayed by the display module; the video content of the first video is the same as the display content of the second preview window, the second video is a video containing the target object, and the image parameter corresponding to the target object in the second video is the target image parameter.
9. The apparatus of claim 8, wherein the first input comprises a first sub-input and a second sub-input;
The video recording apparatus further includes: updating a module;
The display module is specifically configured to display a third preview window in the first display area and display an initial editing window in the second display area in response to the first sub-input received by the receiving module, where a display content of the third preview window is the same as a display content of the first preview window;
The receiving module is specifically configured to receive a second sub-input of the target object in the third preview window displayed by the display module by the user;
The updating module is configured to respond to the second sub-input, update the third preview window to the second preview window in the first display area displayed by the display module, and update the initial editing window to the target editing window in the second display area, where the target editing window includes a thumbnail of the target object.
10. The apparatus of claim 9, wherein
The receiving module is further configured to receive a second input from a user to the third preview window;
The updating module is further used for responding to the second input and updating the display states of the N objects;
The N objects are objects in a preview screen displayed in the third preview window, the N objects include the target object, and N is a positive integer.
11. The apparatus of claim 10, wherein
The display module is further configured to display N identifiers in corresponding areas of the N objects in response to the second input received by the receiving module, where the N identifiers have an association relationship with the N objects;
The receiving module is further used for receiving a third input of a user to a target identifier in the N identifiers displayed by the display module;
And the processing module is further used for responding to the third input to enable the target object corresponding to the target identifier to be in a selected state.
12. The apparatus of claim 8, wherein the video recording apparatus further comprises: an adjustment module;
The receiving module is further used for receiving a fourth input of a user to the target editing window;
The adjusting module is used for responding to the fourth input received by the receiving module and adjusting the image parameters corresponding to the target object into the target image parameters;
the processing module is further configured to generate the second video based on the target image parameter adjusted by the adjustment module.
13. The apparatus of claim 12, wherein, in a case where the display of the target object is canceled in the second preview window,
the receiving module is used for receiving a fifth input of a user to the target editing window;
the display module is further configured to display the target object in the second preview window in response to the fifth input received by the receiving module, where a display parameter of the target object is determined based on the target image parameter.
14. The apparatus of claim 8, wherein
The receiving module is further used for receiving a sixth input of a user to the target editing window;
The display module is further configured to display, in response to the sixth input received by the receiving module, a time control, where the time control is used to determine a start time and an end time of a target video segment in a target video, the target video segment is a video segment including the target object, and an image parameter corresponding to the target object in the target video segment is the target image parameter.
15. An electronic device comprising a processor, a memory, and a computer executable program stored on the memory and executable on the processor, wherein the computer executable program, when executed by the processor, implements the steps of the video recording method according to any one of claims 1 to 7.
16. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer executable program, which when executed by a processor, implements the steps of the video recording method according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211003170.6A CN115334242B (en) | 2022-08-19 | 2022-08-19 | Video recording method, device, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115334242A CN115334242A (en) | 2022-11-11 |
CN115334242B true CN115334242B (en) | 2024-06-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |