CN110636382A - Method and device for adding visual object in video, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110636382A
CN110636382A (application CN201910878160.9A)
Authority
CN
China
Prior art keywords
visual object
video
user
area
configuration
Prior art date
Legal status
Pending
Application number
CN201910878160.9A
Other languages
Chinese (zh)
Inventor
任家锐
李鑫
王暖
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority: CN201910878160.9A
Publication: CN110636382A
Legal status: Pending

Classifications

    • H Electricity
    • H04 Electric communication technique
    • H04N Pictorial communication, e.g. television
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44222 Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Databases & Information Systems
  • Human Computer Interaction
  • Health & Medical Sciences
  • General Health & Medical Sciences
  • Social Psychology
  • Computer Networks & Wireless Communication
  • User Interface Of Digital Computer
  • Television Signal Processing For Recording

Abstract

The present disclosure relates to a method, an apparatus, an electronic device, and a storage medium for adding a visual object in a video, wherein the method comprises: receiving an instruction to add a visual object in a video; displaying a video editing interface; acquiring at least one target visual object selected by a user from the visual object selection area; receiving configuration information of the at least one target visual object input by the user in the visual object configuration area; adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information; and displaying, in the video preview area, the current frame image to which the at least one target visual object is added. Because the video editing interface includes the video preview area, the visual object selection area, and the visual object configuration area, the user can complete both the selection and the configuration of the target visual object within a single interface, without repeatedly switching among different interfaces; this simplifies the process of adding a visual object and makes the operation easier for the user.

Description

Method and device for adding visual object in video, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a method and an apparatus for adding a visual object in a video, an electronic device, and a storage medium.
Background
In recent years, with the popularization of mobile terminals, various short-video apps have risen rapidly. Short videos are generally video works lasting from tens of seconds to several minutes; because they are simple to produce, more and more users are willing to share their own video works on short-video apps.
In order to make videos more engaging, some short-video apps provide a function for adding visual objects. During the production of a short video, a visual object selected by the user in a video editing interface, such as a sticker, can be added to the video. Specifically, after it is detected that the user has clicked the "sticker" option on the main video editing interface, a cover layer displaying the selectable stickers pops up over the main interface; after it is detected that the user has selected a sticker, that sticker is added to every video frame and the app returns to the main video editing interface. At this point, if the user wants to configure the effective time range of the sticker, the user must click an option on the main video editing interface for adjusting the effective time range, enter a separate configuration interface, and return to the main interface after the adjustment is completed.
Therefore, in the related art, when adding a visual object to a video, the user has to switch back and forth between different interfaces to perform the separate functions of selecting the visual object and configuring its effective time range, which makes the operation cumbersome.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for adding a visual object in a video, so as to at least solve the problem in the related art that the process of adding a visual object to a video is complicated. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for adding a visual object in a video, including:
receiving an instruction of adding a visual object in a video; the instruction comprises a current frame image of a video to be edited;
displaying a video editing interface; the video editing interface includes a video preview area, a visual object selection area, and a visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; and the visual object configuration area is used for receiving configuration information input by a user;
acquiring at least one target visual object selected from the visual object selection area by a user;
receiving configuration information of the at least one target visual object input by a user in the visual object configuration area;
adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
and displaying the current frame image added with the at least one target visual object in the video preview area.
Optionally, the visual object configuration area includes an effective time configuration sub-area; a video time axis comprising a first slider and a second slider is displayed in the effective time configuration subarea;
the step of receiving configuration information of the at least one target visual object input by a user in the visual object configuration area comprises:
detecting whether the user drags the first slider on the video timeline for the selected target visual object; if so, when the user stops dragging the first slider, taking the time corresponding to the first slider as the effective starting time of the target visual object;
detecting whether the user drags the second slider on the video timeline; if so, when the user stops dragging the second slider, taking the time corresponding to the second slider as the effective ending time of the target visual object; the time corresponding to the second slider is later than the time corresponding to the first slider;
determining the time period between the effective starting time and the effective ending time as the effective time range of the target visual object;
the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information includes:
and determining at least one frame image corresponding to the effective time range of the target visual object in the video to be edited, and adding the target visual object to the determined at least one frame image.
Optionally, the selectable visual objects include: a dynamic visual object; the dynamic visual objects are: dynamic pictures that can be added to the video to be edited;
before the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information, the method further comprises:
if the target visual object selected by the user is a dynamic visual object, reducing the video preview area to a preset size;
in the visual object configuration area, further displaying a dynamic visual object configuration subarea; the dynamic visual object configuration subarea is used for receiving configuration information input by a user aiming at the selected dynamic visual object;
receiving configuration information input by a user aiming at a selected dynamic visual object in a dynamic visual object configuration subarea;
the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information includes:
and adding the at least one target visual object to at least one frame image of the video to be edited based on the configuration information input by the user in the visual object configuration area and the configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration sub-area.
Optionally, the dynamic visual object configuration sub-area includes: a forward/reverse play button and a play-speed adjustment slider;
the step of receiving the configuration information input by the user aiming at the selected dynamic visual object in the dynamic visual object configuration subarea comprises the following steps:
detecting whether the user clicks the forward/reverse play button in the dynamic visual object configuration sub-area; if so, playing the target visual object in the order opposite to its current playing order;
detecting whether the user drags the play-speed adjustment slider in the dynamic visual object configuration sub-area; if so, speeding up or slowing down the playing speed of the target visual object according to the position of the play-speed adjustment slider.
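The effect of the two controls can be sketched in a few lines; the function name and data structures below are hypothetical illustrations, since the disclosure does not specify an implementation:

```python
def configure_playback(frames, frame_interval_s, reverse=False, speed=1.0):
    """Apply the forward/reverse button and the play-speed slider to a
    dynamic visual object: `reverse` flips the frame order, and a speed
    factor of 2.0 halves the per-frame delay (twice as fast)."""
    ordered = list(reversed(frames)) if reverse else list(frames)
    return ordered, frame_interval_s / speed

# Reverse playback at double speed for a three-frame animated sticker:
ordered, interval = configure_playback(["f0", "f1", "f2"], 0.1,
                                       reverse=True, speed=2.0)
# ordered == ["f2", "f1", "f0"]; interval is halved to 0.05 s
```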
Optionally, the visual object configuration area includes a transparency configuration sub-area; a transparency adjustment slider is displayed in the transparency configuration sub-area;
the step of receiving the configuration information input by the user in the visual object configuration area further comprises:
detecting whether the user drags the transparency adjustment slider in the transparency configuration sub-area; if so, adjusting the transparency of the target visual object according to the value corresponding to the transparency adjustment slider.
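One way a transparency value could be applied is plain per-pixel alpha blending; the disclosure does not specify the compositing method, so the function and tuple layout below are assumptions for illustration only:

```python
def blend_pixel(sticker_rgb, frame_rgb, transparency):
    """Alpha-blend one sticker pixel over one frame pixel.

    `transparency` is the slider value in [0, 1], where 1 means the
    sticker fully covers the frame pixel (hypothetical convention)."""
    a = max(0.0, min(1.0, transparency))  # clamp the slider value
    return tuple(round(a * s + (1 - a) * f)
                 for s, f in zip(sticker_rgb, frame_rgb))

# A half-transparent white sticker pixel over a black frame pixel:
blend_pixel((255, 255, 255), (0, 0, 0), 0.5)  # -> (128, 128, 128)
```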
Optionally, the method for adding a visual object in a video further includes:
detecting whether a user drags the target visual object in the video preview area; if so, moving the position of the target visual object along with the detected dragging track; and/or
Detecting whether a user selects the target visual object in the video preview area; if so, generating a visual object editing frame, and correspondingly zooming and/or rotating the target visual object according to the zooming and/or rotating operation performed by the user in the visual object editing frame.
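The zoom and rotate operations on the editing frame amount to a 2D transform of the object's corners; the helper below is a sketch under the assumption that scaling and rotation happen around the object's centre, which the disclosure does not state:

```python
import math

def transform_point(x, y, cx, cy, scale=1.0, angle_deg=0.0):
    """Scale and rotate one corner (x, y) of a visual object's editing
    frame around the object's centre (cx, cy)."""
    a = math.radians(angle_deg)
    dx, dy = (x - cx) * scale, (y - cy) * scale  # scale about the centre
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# Doubling the size of a corner at (110, 100) around centre (100, 100):
transform_point(110, 100, 100, 100, scale=2.0)  # -> (120.0, 100.0)
```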
Optionally, if the target visual object consists of a text part and an image part, the method for adding a visual object in a video further includes:
detecting whether the user selects the text part of the target visual object; if so, generating a text editing box and changing the text part of the target visual object according to the user's input in the text editing box; and/or detecting whether the user selects the image part of the target visual object; if so, generating an image editing frame and zooming and/or rotating the image part of the target visual object according to the zooming and/or rotating operation performed by the user in the image editing frame.
Optionally, the video editing interface further includes: confirming a return function area;
after the step of displaying the current frame image with the at least one target visual object added in the video preview area, the method further comprises:
detecting whether a user clicks a confirmation button or a cancel button in the confirmation return functional area; if the confirmation button is clicked, saving the video to be edited to which the at least one target visual object is added;
and if a cancel button is clicked, canceling the operation of adding the at least one target visual object to at least one frame image of the video to be edited.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for adding a visual object in a video, including:
an instruction receiving unit configured to execute receiving an instruction to add a visual object in a video; the instruction comprises a current frame image of a video to be edited;
a first display unit configured to perform displaying a video editing interface; the video editing interface comprises: the video preview area, the visual object selection area and the visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by a user;
an acquisition unit configured to perform acquisition of at least one target visual object selected by a user from the visual object selection area;
a first receiving unit configured to perform receiving configuration information of the at least one target visual object input by a user in the visual object configuration area;
an adding unit configured to perform adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
a second display unit configured to perform displaying the current frame image to which the at least one target visual object is added in the video preview area.
Optionally, the visual object configuration area includes an effective time configuration sub-area; a video time axis comprising a first slider and a second slider is displayed in the effective time configuration subarea;
the first receiving unit includes:
a first detection module configured to perform detection of whether a user drags a first slider on the video timeline for a selected target visual object; if so, when the user stops dragging the first slider, taking the time corresponding to the first slider as the effective starting time of the target visual object;
a second detection module configured to perform detecting whether a user drags a second slider on the video timeline; if so, when the user stops dragging the second slider, taking the time corresponding to the second slider as the effective end time of the target visual object; the time corresponding to the second sliding block is positioned after the time corresponding to the first sliding block;
an effective time determining module configured to determine a time period between the effective starting time and the effective ending time as an effective time range of the target visual object;
the adding unit is specifically configured to determine at least one frame image corresponding to the effective time range of the target visual object in the video to be edited, and add the target visual object to the determined at least one frame image.
Optionally, the selectable visual objects include: a dynamic visual object; the dynamic visual objects are: dynamic pictures that can be added to the video to be edited;
the device for adding the visual object in the video further comprises:
a video preview area reducing unit configured to perform reducing the video preview area to a preset size before adding the at least one target visual object to the at least one frame image of the video to be edited based on the received configuration information if the target visual object selected by the user is a dynamic visual object;
a third display unit configured to execute further displaying a dynamic visual object configuration sub-area in the visual object configuration area; the dynamic visual object configuration subarea is used for receiving configuration information input by a user aiming at the selected dynamic visual object;
a second receiving unit configured to perform receiving of configuration information input by a user for the selected dynamic visual object in the dynamic visual object configuration sub-area;
the adding unit is specifically configured to add the at least one target visual object to the at least one frame image of the video to be edited based on the configuration information input by the user in the visual object configuration area and the configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration sub-area.
Optionally, the dynamic visual object configuration sub-area includes: a forward/reverse play button and a play-speed adjustment slider;
the second receiving unit includes:
a third detection module configured to detect whether the user clicks the forward/reverse play button in the dynamic visual object configuration sub-area; if so, play the target visual object in the order opposite to its current playing order;
a fourth detection module configured to detect whether the user drags the play-speed adjustment slider in the dynamic visual object configuration sub-area; if so, speed up or slow down the playing speed of the target visual object according to the position of the play-speed adjustment slider.
Optionally, the visual object configuration area includes a transparency configuration sub-area; a transparency adjustment slider is displayed in the transparency configuration sub-area;
the first receiving unit further includes:
a transparency adjustment module configured to detect whether the user drags the transparency adjustment slider in the transparency configuration sub-area; if so, adjust the transparency of the target visual object according to the value corresponding to the transparency adjustment slider.
Optionally, the apparatus for adding a visual object in a video further includes:
the mobile unit is configured to execute the operation of detecting whether a user drags the target visual object in the video preview area; if so, moving the position of the target visual object along with the detected dragging track; and/or
A generating unit configured to perform detecting whether the target visual object in the video preview area is selected by a user; if so, generating a visual object editing frame, and correspondingly zooming and/or rotating the target visual object according to the zooming and/or rotating operation performed by the user in the visual object editing frame.
Optionally, if the target visual object consists of a text part and an image part, the apparatus for adding a visual object in a video further includes:
a text edit box generation unit configured to detect whether the user selects the text portion of the target visual object; if so, generate a text editing box and change the text portion of the target visual object according to the user's input in the text editing box; and/or
an image edit box generation unit configured to perform detection of whether or not a user selects an image portion of a target visual object; if so, generating an image editing frame, and correspondingly zooming and/or rotating the image part of the target visual object according to the zooming and/or rotating operation performed by the user in the image editing frame.
Optionally, the video editing interface further includes: confirming a return function area;
the device for adding the visual object in the video further comprises:
a saving unit configured to perform detecting whether a user clicks a confirmation button or a cancel button in the confirmation return functional area; if the confirmation button is clicked, saving the video to be edited to which the at least one target visual object is added;
an undoing unit configured to perform an operation of undoing the addition of the at least one target visual object to the at least one frame image of the video to be edited if a cancel button is clicked.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for adding a visual object to a video according to any one of the above first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method for adding a visual object in a video according to any one of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, which, when executed by a processor of an electronic device, enables the electronic device to perform the method for adding a visual object in a video according to any one of the first aspect.
In the above solution, an instruction to add a visual object in a video is received; a video editing interface is displayed; at least one target visual object selected by the user from the visual object selection area is acquired; configuration information of the at least one target visual object input by the user in the visual object configuration area is received; the at least one target visual object is added to at least one frame image of the video to be edited based on the received configuration information; and the current frame image to which the at least one target visual object is added is displayed in the video preview area. Because the video editing interface includes the video preview area, the visual object selection area, and the visual object configuration area, the user can complete both the selection and the configuration of the target visual object within a single interface without repeatedly switching among different interfaces, which simplifies the process of adding a visual object and makes the operation easier for the user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram illustrating a method for adding visual objects in a video according to an example embodiment.
Fig. 2 is another flow diagram illustrating a method for adding visual objects in a video according to an example embodiment.
Fig. 3 is a diagram of an example of a video editing interface in the embodiment shown in fig. 2.
Fig. 4 is a diagram of another example of a video editing interface in the embodiment shown in fig. 2.
Fig. 5 is a block diagram illustrating an apparatus for adding a visual object in a video according to an example embodiment.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Fig. 7 is a block diagram illustrating an apparatus for adding visual objects in a video according to an example embodiment.
Fig. 8 is a block diagram illustrating an apparatus for adding visual objects in a video according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow diagram illustrating a method for adding visual objects in a video according to an example embodiment. As shown in fig. 1, the method for adding a visual object in a video includes the following steps:
in step S101, an instruction to add a visual object in a video is received; the instruction contains a current frame image of a video to be edited.
In this embodiment, the visual objects may be static stickers, dynamic stickers, artistic words, etc.; the video to be edited can be imported from a local album, and can also be shot by a user after the authority of opening the camera is acquired.
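For illustration, the visual-object types mentioned here (static stickers, dynamic stickers, art text) and their configurable parameters could be modeled as follows; every class and field name is a hypothetical sketch, not part of the disclosed method:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class VisualObject:
    """A visual object that can be added to video frames."""
    name: str
    transparency: float = 1.0  # 1.0 = fully opaque
    # (start_s, end_s) effective time range; None = whole video
    effective_range: Optional[Tuple[float, float]] = None

@dataclass
class DynamicVisualObject(VisualObject):
    """A dynamic picture, e.g. an animated sticker."""
    frames: List[str] = field(default_factory=list)  # frame identifiers
    reverse: bool = False                            # play in reverse order
    speed: float = 1.0                               # playback speed factor

sticker = DynamicVisualObject(name="confetti", frames=["f0", "f1"], speed=2.0)
```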
In step S102, a video editing interface is displayed; the video editing interface comprises a video preview area, a visual object selection area and a visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by a user.
In a possible implementation manner, in order to facilitate a user to see an editing effect of a video to be edited at any time, a video preview area, a visual object selection area and a visual object configuration area are sequentially arranged in a video editing interface from top to bottom. For example, after the user imports the video to be edited from the local album, the first video frame of the video to be edited may be displayed in the video preview area as the current frame image.
In step S103, at least one target visual object selected by the user from the visual object selection area is acquired.
In this step, the user may slide left and right in the visual object selection area to browse the selectable visual objects, and then select the target visual object by clicking.
In step S104, configuration information of at least one target visual object input by a user in the visual object configuration area is received.
For example, if the visual object is a static sticker or a dynamic sticker, the user may configure parameters of the target visual object, such as its transparency and effective time range, in the visual object configuration area.
In step S105, at least one target visual object is added to at least one frame image of the video to be edited based on the received configuration information.
Specifically, when the configuration information includes an effective time range, at least one frame image corresponding to the effective time range of the target visual object is determined in the video to be edited, and the target visual object is then added to each determined frame image. For example, if the effective time range set by the user is 00:12:00 to 00:12:15, the frame images to which the target visual object needs to be added are all frame images of the video to be edited within the period 00:12:00 to 00:12:15. When the configuration information does not include an effective time range, the target visual object may be added to all frame images of the video to be edited.
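The mapping from an effective time range to frame images can be sketched as follows; the helper name and the constant-frame-rate assumption are illustrative additions, not from the disclosure:

```python
import math

def frames_in_effective_range(start_s, end_s, fps, total_frames):
    """Return the indices of frames whose timestamps fall inside
    [start_s, end_s], assuming frame i is shown at time i / fps."""
    first = max(0, math.ceil(start_s * fps))
    last = min(total_frames - 1, math.floor(end_s * fps))
    return range(first, last + 1)

# The example above: 00:12:00 to 00:12:15 is 720 s to 735 s; at an
# assumed 30 fps this covers frames 21600 through 22050.
idx = frames_in_effective_range(720, 735, fps=30, total_frames=30 * 900)
```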
In step S106, in the video preview area, the current frame image to which at least one target visual object is added is displayed.
Therefore, because the video editing interface comprises the video preview area, the visual object selection area and the visual object configuration area, the user can complete both the selection and the configuration of the target visual object within this single interface, without repeatedly switching among different interfaces; this simplifies the process of adding a visual object and facilitates user operation.
Fig. 2 is another flow diagram illustrating a method for adding visual objects in a video according to an example embodiment. As shown in fig. 2, the method includes:
in step S201, after receiving an instruction to add a visual object to a video, a video editing interface is displayed.
Specifically, as in step S101 in the embodiment shown in fig. 1, the video editing interface may include a video preview area, a visual object selection area, and a visual object configuration area. The video preview area is used for displaying the current frame image, the visual object selection area is used for displaying the selectable visual object, and the visual object configuration area is used for receiving configuration information input by a user.
In step S202, at least one target visual object selected by the user from the visual object selection area is acquired.
In step S203, it is determined whether the selected target visual object is a dynamic visual object; if yes, executing step S204; if not, steps S207, S208, S209 and S210 are performed.
Specifically, the dynamic visual object may be a dynamic picture that can be added to the video to be edited.
In step S204, the video preview area is reduced to a preset size, and the dynamic visual object configuration sub-area is displayed in the visual object configuration area of the video editing interface.
Specifically, in the visual object configuration area of the video editing interface, a dynamic visual object configuration sub-area is further displayed, which is used for receiving configuration information input by a user for a selected dynamic visual object.
In step S205, when it is detected that the user clicks the forward/reverse play button in the dynamic visual object configuration sub-area, the target visual object is played in an order opposite to the current playing order of the target visual object.
In step S206, when it is detected that the user drags the playing speed adjustment slider in the dynamic visual object configuration sub-area, the playing speed of the target visual object is increased or decreased according to the position of the playing speed adjustment slider.
In one possible implementation, when the target visual object selected by the user is a dynamic sticker, the video editing interface is as shown in FIG. 3. The interface comprises a video preview area 1, a visual object selection area 2 and a visual object configuration area 3. The visual object configuration area 3 includes a transparency configuration sub-area 3-1, a dynamic visual object configuration sub-area 3-2 and an effective time configuration sub-area 3-3, in which the user can configure four parameters of the target visual object: transparency, forward/reverse play, playing speed and effective time range.
For example, referring to FIG. 3, the forward/reverse play button may be a triangular button in the dynamic visual object configuration sub-area 3-2; the user can change the playing order of the target visual object by clicking this triangular button. The circular slider to the right of the forward/reverse play button is used to adjust the playing speed of the target visual object: when the user drags the circular slider to the right, the playing speed of the target visual object increases; when the user drags it to the left, the playing speed decreases.
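A minimal sketch of how these two controls might act on a dynamic sticker's frame sequence follows; the speed mapping (slider centre as normal speed, right end doubling it, left end halving it) and all names are assumptions, not details of the disclosure.

```python
# Illustrative sketch only: the forward/reverse play button flips the order of
# the sticker's frames, and the speed slider rescales the per-frame duration.

def apply_play_direction(frames: list, reverse: bool) -> list:
    """Return the sticker frames in forward or reverse playing order."""
    return list(reversed(frames)) if reverse else list(frames)

def apply_play_speed(frame_duration_ms: float, slider_pos: float) -> float:
    """Map a slider position in [0.0, 1.0] to a per-frame duration.

    Assumed mapping: 0.5 is normal speed; dragging right (toward 1.0)
    doubles the speed, dragging left (toward 0.0) halves it.
    """
    speed = 2.0 ** (2.0 * slider_pos - 1.0)
    return frame_duration_ms / speed

reversed_frames = apply_play_direction(["f0", "f1", "f2"], reverse=True)
half_duration = apply_play_speed(40.0, slider_pos=1.0)  # 40 ms becomes 20 ms
```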
In step S207, when it is detected that the user drags the transparency adjustment slider in the transparency configuration sub-area 3-1, the transparency of the target visual object is adjusted according to the value corresponding to the transparency adjustment slider.
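One plausible reading of the transparency control is that the slider value scales the sticker's own alpha channel before the sticker is composited over the frame; the per-pixel sketch below illustrates that reading. The function name and the convention that 1.0 means fully opaque are assumptions.

```python
# Illustrative sketch only: blend one sticker pixel over one frame pixel,
# with the transparency slider value scaling the sticker's alpha channel.

def composite_pixel(frame_rgb, sticker_rgb, sticker_alpha, transparency):
    """Alpha-blend a sticker pixel over a frame pixel.

    transparency is the slider value: 1.0 fully opaque, 0.0 invisible.
    """
    alpha = sticker_alpha * transparency
    return tuple(
        round(s * alpha + f * (1.0 - alpha))
        for s, f in zip(sticker_rgb, frame_rgb)
    )

# A half-transparent white sticker pixel over a black frame pixel.
blended = composite_pixel((0, 0, 0), (255, 255, 255),
                          sticker_alpha=1.0, transparency=0.5)
```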
In step S208, when it is detected that the user has performed a drag operation on the target visual object in the video preview area 1, the position of the target visual object is moved along with the detected drag trajectory.
In step S209, when it is detected that the target visual object in the video preview area 1 is selected by the user, a visual object edit box is generated, and the target visual object is correspondingly zoomed and/or rotated according to the zoom and/or rotation operation performed by the user in the visual object edit box.
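The zoom and rotation performed in the visual object edit box can be modelled as a similarity transform about the sticker's centre; the sketch below shows one corner point being transformed. `edit_box_transform` is a hypothetical name, not part of the disclosure.

```python
# Illustrative sketch only: scale and rotate a point of the target visual
# object about the object's centre, as the edit-box gestures would.
import math

def edit_box_transform(point, centre, scale, angle_deg):
    """Apply scaling then rotation (counter-clockwise) about the centre."""
    x, y = point[0] - centre[0], point[1] - centre[1]
    theta = math.radians(angle_deg)
    new_x = scale * (x * math.cos(theta) - y * math.sin(theta))
    new_y = scale * (x * math.sin(theta) + y * math.cos(theta))
    return (new_x + centre[0], new_y + centre[1])

# Doubling the size and rotating 90 degrees moves (2, 0) to roughly (0, 4).
corner = edit_box_transform((2.0, 0.0), centre=(0.0, 0.0),
                            scale=2.0, angle_deg=90.0)
```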
In step S210, when it is detected that the user stops dragging the first slider on the video time axis in the effective time configuration sub-area 3-3, the time corresponding to the first slider is taken as the effective starting time of the target visual object, when it is detected that the user stops dragging the second slider on the video time axis in the effective time configuration sub-area 3-3, the time corresponding to the second slider is taken as the effective ending time of the target visual object, and the time period between the effective starting time and the effective ending time is determined as the effective time range of the target visual object.
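The two timeline sliders can be read as a linear mapping from their positions on the video time axis to timestamps in the video; the sketch below assumes a pixel-width parameterisation, and all names are illustrative rather than taken from the disclosure.

```python
# Illustrative sketch only: map the first and second sliders' x offsets on the
# video time axis to the effective start and end times of the target object.

def slider_to_time(slider_x: float, timeline_width: float,
                   video_duration: float) -> float:
    """Linearly map a slider's x offset on the timeline to seconds."""
    return (slider_x / timeline_width) * video_duration

def effective_time_range(first_x: float, second_x: float,
                         timeline_width: float, video_duration: float):
    """Return (start, end) in seconds; the second slider must follow the first."""
    start = slider_to_time(first_x, timeline_width, video_duration)
    end = slider_to_time(second_x, timeline_width, video_duration)
    if end <= start:
        raise ValueError("the second slider must lie after the first slider")
    return start, end

# A 300-pixel timeline for a 60-second video: sliders at 0 and 150 pixels
# give an effective range from 0 s to 30 s.
time_range = effective_time_range(0.0, 150.0,
                                  timeline_width=300.0, video_duration=60.0)
```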
In addition, referring to fig. 3, the user may also preview the effective time by clicking the triangular button on the left side of the video time axis in the effective time configuration sub-area 3-3, so as to determine whether the effective time range set for the target visual object in the video to be edited is reasonable.
In another possible embodiment, if the target visual object selected by the user is static, the video editing interface is as shown in fig. 4. The interface likewise includes a video preview area 1, a visual object selection area 2 and a visual object configuration area 3, but the visual object configuration area 3 only comprises a transparency configuration sub-area 3-1 and an effective time configuration sub-area 3-3, in which the user can configure two parameters of the target visual object: transparency and effective time range.
In another possible implementation, if the target visual object is composed of a text part and an image part, it can be detected, when the visual object is added to the video, whether the user selects the text part of the target visual object; if so, a text editing box is generated, and the text part of the target visual object is changed correspondingly according to the input operation performed by the user in the text editing box. Additionally or alternatively, it can be detected whether the user selects the image part of the target visual object; if so, an image editing box is generated, and the image part of the target visual object is zoomed and/or rotated correspondingly according to the zoom and/or rotation operation performed by the user in the image editing box. For example, referring to fig. 3, when the user selects the image part of the target visual object, an image editing box is generated around the "tongue out" expression, and the user can adjust the image part individually using this image editing box.
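Deciding whether the user tapped the text part or the image part of such a composite object amounts to a hit test against each part's bounding rectangle; a minimal sketch under that assumption follows, with hypothetical names throughout.

```python
# Illustrative sketch only: hit-test a composite sticker whose text part and
# image part occupy separate axis-aligned rectangles (x, y, width, height).

def hit_part(tap, text_rect, image_rect):
    """Return 'text', 'image' or None depending on which part the tap hits."""
    def inside(rect):
        x, y, width, height = rect
        return x <= tap[0] <= x + width and y <= tap[1] <= y + height

    if inside(text_rect):
        return "text"   # open the text editing box
    if inside(image_rect):
        return "image"  # open the image editing box
    return None

part = hit_part((5, 5), text_rect=(0, 0, 10, 10), image_rect=(20, 0, 10, 10))
```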
Clearly, adjusting the text part and the image part of the target visual object separately greatly enhances the user's operability and the flexibility of the video editing process.
In step S211, it is detected whether the user clicks a confirmation button or a cancel button in the confirmation return function area; if so, step S212 is performed.
In step S212, if it is detected that the user clicks the confirmation button, saving the video to be edited to which the target visual object is added; and if the user is detected to click the cancel button, cancelling the operation of adding the at least one target visual object into the at least one frame image of the video to be edited.
As can be seen from the embodiment shown in fig. 2, the video editing interface provides the video preview area, the visual object selection area and the visual object configuration area at the same time, so the user can not only conveniently add a visual object to the video to be edited, but also preview the editing effect in real time while editing the video. Moreover, when the target visual object is dynamic and the user therefore has more parameters to configure, the video preview area can be appropriately reduced, which prevents an operation area that is too small from degrading the user experience during parameter configuration.
Fig. 5 is a block diagram illustrating an apparatus for adding a visual object in a video according to an example embodiment. Referring to fig. 5, the apparatus includes an instruction receiving unit 510, a first display unit 520, an acquisition unit 530, a first receiving unit 540, an adding unit 550, and a second display unit 560.
The instruction receiving unit 510 is configured to perform receiving an instruction to add a visual object to a video; the instruction comprises a current frame image of a video to be edited;
a first display unit 520 configured to perform displaying a video editing interface; the video editing interface comprises: the video preview area, the visual object selection area and the visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by a user;
the obtaining unit 530 is configured to perform obtaining at least one target visual object selected by the user from the visual object selection area;
the first receiving unit 540 is configured to perform receiving configuration information of the at least one target visual object input by a user in the visual object configuration area;
the adding unit 550 is configured to perform adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
the second display unit 560 is configured to perform displaying the current frame image to which the at least one target visual object is added in the video preview area.
In a possible implementation manner, the visual object configuration area comprises an effective time configuration subarea; a video time axis comprising a first slider and a second slider is displayed in the effective time configuration subarea;
the first receiving unit 540 includes:
a first detection module configured to perform detection of whether a user drags a first slider on the video timeline for a selected target visual object; if so, when the user stops dragging the first slider, taking the time corresponding to the first slider as the effective starting time of the target visual object;
a second detection module configured to perform detecting whether a user drags a second slider on the video timeline; if so, when the user stops dragging the second slider, taking the time corresponding to the second slider as the effective end time of the target visual object; the time corresponding to the second sliding block is positioned after the time corresponding to the first sliding block;
an effective time determining module configured to determine a time period between the effective starting time and the effective ending time as an effective time range of the target visual object;
the adding unit is specifically configured to determine at least one frame image corresponding to the effective time range of the target visual object in the video to be edited, and add the target visual object to the determined at least one frame image.
In one possible embodiment, the selectable visual objects include: a dynamic visual object; the dynamic visual objects are: dynamic pictures that can be added to the video to be edited;
the device for adding the visual object in the video further comprises:
a video preview area reducing unit configured to perform reducing the video preview area to a preset size before adding the at least one target visual object to the at least one frame image of the video to be edited based on the received configuration information if the target visual object selected by the user is a dynamic visual object;
a third display unit configured to execute further displaying a dynamic visual object configuration sub-area in the visual object configuration area; the dynamic visual object configuration subarea is used for receiving configuration information input by a user aiming at the selected dynamic visual object;
a second receiving unit configured to perform receiving of configuration information input by a user for the selected dynamic visual object in the dynamic visual object configuration sub-area;
the adding unit 550 is specifically configured to perform adding the at least one target visual object to the at least one frame image of the video to be edited based on the configuration information input by the user in the visual object configuration area and the configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration area.
In one possible embodiment, the dynamic visual object configuration sub-area comprises: a positive sequence/reverse sequence play button and a play speed adjusting slider;
the second receiving unit includes:
a third detection module configured to perform detection on whether a user clicks a forward/reverse play button in the dynamic visual object configuration sub-area; if so, playing the target visual object according to the sequence opposite to the current playing sequence of the target visual object;
a fourth detection module configured to perform detection of whether a user drags the play speed adjustment slider in the dynamic visual object configuration sub-area; if so, speeding up or slowing down the playing speed of the target visual object according to the position of the play speed adjustment slider.
In a possible implementation, the visual object configuration area comprises a transparency configuration sub-area; a transparency adjusting sliding block is displayed in the transparency configuration subarea;
the first receiving unit further includes:
a transparency adjustment module configured to perform detecting whether a user drags a transparency adjustment slider in the transparency configuration sub-area; and if so, adjusting the transparency of the target visual object according to the numerical value corresponding to the transparency adjusting slider.
In a possible implementation, the apparatus for adding a visual object to a video further includes:
the mobile unit is configured to execute the operation of detecting whether a user drags the target visual object in the video preview area; if so, moving the position of the target visual object along with the detected dragging track; and/or
A generating unit configured to perform detecting whether the target visual object in the video preview area is selected by a user; if so, generating a visual object editing frame, and correspondingly zooming and/or rotating the target visual object according to the zooming and/or rotating operation performed by the user in the visual object editing frame.
In a possible implementation manner, if the target visual object is composed of a text part and an image part, the apparatus for adding a visual object in a video further includes:
a text edit box generation unit configured to perform detection of whether a user selects a text portion of the target visual object; if so, generating a text editing box, and correspondingly changing the text portion of the target visual object according to the input operation of the user in the text editing box; and/or,
an image edit box generation unit configured to perform detection of whether or not a user selects an image portion of a target visual object; if so, generating an image editing frame, and correspondingly zooming and/or rotating the image part of the target visual object according to the zooming and/or rotating operation performed by the user in the image editing frame.
In one possible implementation, the video editing interface further includes: confirming a return function area;
the device for adding the visual object in the video further comprises:
a saving unit configured to perform detecting whether a user clicks a confirmation button or a cancel button in the confirmation return functional area; if the confirmation button is clicked, saving the video to be edited to which the at least one target visual object is added;
an undoing unit configured to perform an operation of undoing the addition of the at least one target visual object to the at least one frame image of the video to be edited if a cancel button is clicked.
The technical solution provided by the embodiments of the present disclosure at least brings the following beneficial effects: an instruction to add a visual object to a video is received; a video editing interface is displayed; at least one target visual object selected by the user from the visual object selection area is acquired; configuration information of the at least one target visual object input by the user in the visual object configuration area is received; the at least one target visual object is added to at least one frame image of the video to be edited based on the received configuration information; and the current frame image to which the at least one target visual object has been added is displayed in the video preview area. Because the video editing interface comprises the video preview area, the visual object selection area and the visual object configuration area, the user can complete the selection and configuration of the target visual object within this single interface without repeatedly switching among different interfaces, which simplifies the process of adding a visual object and facilitates user operation.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides an electronic device, as shown in fig. 6, comprising a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 communicate with one another through the communication bus 604.
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
receiving an instruction of adding a visual object in a video; the instruction comprises a current frame image of a video to be edited;
displaying a video editing interface; the video editing interface comprises: the video preview area, the visual object selection area and the visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by a user;
acquiring at least one target visual object selected from the visual object selection area by a user;
receiving configuration information of the at least one target visual object input by a user in the visual object configuration area;
adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
and displaying the current frame image added with the at least one target visual object in the video preview area.
Fig. 7 is a block diagram illustrating an apparatus 700 for adding visual objects in a video according to an example embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessments of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect the open/closed state of the apparatus 700 and the relative positioning of components, such as the display and keypad of the apparatus 700. The sensor assembly 714 may also detect a change in position of the apparatus 700 or a component of the apparatus 700, the presence or absence of user contact with the apparatus 700, the orientation or acceleration/deceleration of the apparatus 700, and a change in temperature of the apparatus 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the apparatus 700 to perform the method described above is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a block diagram illustrating an apparatus 800 for adding visual objects in a video according to an example embodiment. For example, the apparatus 800 may be provided as a server. Referring to FIG. 8, the apparatus 800 includes a processing component 822, which further includes one or more processors, and memory resources, represented by memory 832, for storing instructions, such as applications, that are executable by the processing component 822. The application programs stored in memory 832 may include one or more modules that each correspond to a set of instructions. Further, the processing component 822 is configured to execute instructions to perform any of the above-described methods for adding visual objects to a video.
The device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input/output (I/O) interface 858. The apparatus 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for adding visual objects to a video, comprising:
receiving an instruction of adding a visual object in a video; the instruction comprises a current frame image of a video to be edited;
displaying a video editing interface; the video editing interface comprises: the video preview area, the visual object selection area and the visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by a user;
acquiring at least one target visual object selected from the visual object selection area by a user;
receiving configuration information of the at least one target visual object input by a user in the visual object configuration area;
adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
and displaying the current frame image added with the at least one target visual object in the video preview area.
2. The method of claim 1, wherein the visual object configuration area comprises an effective time configuration sub-area; a video time axis comprising a first slider and a second slider is displayed in the effective time configuration subarea;
the step of receiving configuration information of the at least one target visual object input by a user in the visual object configuration area comprises:
detecting whether a user drags a first slider on the video timeline aiming at the selected target visual object; if so, when the user stops dragging the first slider, taking the time corresponding to the first slider as the effective starting time of the target visual object;
detecting whether a user drags a second slider on the video time axis; if so, when the user stops dragging the second slider, taking the time corresponding to the second slider as the effective end time of the target visual object; the time corresponding to the second sliding block is positioned after the time corresponding to the first sliding block;
determining the time period between the effective starting time and the effective ending time as the effective time range of the target visual object;
the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information includes:
and determining at least one frame image corresponding to the effective time range of the target visual object in the video to be edited, and adding the target visual object to the determined at least one frame image.
3. The method of claim 1, wherein the selectable visual objects comprise: a dynamic visual object; the dynamic visual objects are: dynamic pictures that can be added to the video to be edited;
before the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information, the method further comprises:
if the target visual object selected by the user is a dynamic visual object, reducing the video preview area to a preset size;
in the visual object configuration area, further displaying a dynamic visual object configuration sub-area; the dynamic visual object configuration sub-area is used for receiving configuration information input by the user for the selected dynamic visual object;
receiving configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration sub-area;
the step of adding the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information includes:
adding the at least one target visual object to at least one frame image of the video to be edited based on the configuration information input by the user in the visual object configuration area and the configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration sub-area.
4. The method of claim 3, wherein the dynamic visual object configuration sub-area comprises: a forward/reverse play button and a play speed adjusting slider;
the step of receiving the configuration information input by the user for the selected dynamic visual object in the dynamic visual object configuration sub-area comprises:
detecting whether the user clicks the forward/reverse play button in the dynamic visual object configuration sub-area; if so, playing the target visual object in the order opposite to its current play order;
detecting whether the user drags the play speed adjusting slider in the dynamic visual object configuration sub-area; if so, speeding up or slowing down the play speed of the target visual object according to the position of the play speed adjusting slider.
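Claim 4's two controls — reverse playback and a speed slider — amount to reordering and resampling the dynamic object's frame sequence. A minimal sketch under assumed conventions (a discrete frame list and a positive speed factor; the names are illustrative, not from the patent):

```python
def playback_sequence(frames, reverse=False, speed=1.0):
    """Return frames in play order: reverse flips the current order
    (the forward/reverse button); speed > 1 skips frames to play
    faster, speed < 1 repeats frames to play slower (the speed slider)."""
    if speed <= 0:
        raise ValueError("play speed must be positive")
    n = len(frames)
    count = max(1, round(n / speed))
    # Sample the source sequence at a stride proportional to the speed.
    order = [frames[min(n - 1, int(i * speed))] for i in range(count)]
    return order[::-1] if reverse else order
```

For example, doubling the speed of a four-frame object plays every other frame, while halving it shows each frame twice.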
5. The method of claim 1, wherein the visual object configuration area comprises a transparency configuration sub-area; a transparency adjusting slider is displayed in the transparency configuration sub-area;
the step of receiving the configuration information input by the user in the visual object configuration area further comprises:
detecting whether the user drags the transparency adjusting slider in the transparency configuration sub-area; if so, adjusting the transparency of the target visual object according to the value corresponding to the transparency adjusting slider.
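Claim 5's transparency slider can be read as standard alpha compositing of the visual object over the frame. A hypothetical per-pixel sketch; the convention that 0.0 means fully opaque and 1.0 fully transparent is an assumption, not stated in the claims:

```python
def blend_pixel(obj_rgb, frame_rgb, transparency):
    """Composite one pixel of the visual object over the frame pixel.
    transparency 0.0 keeps the object opaque; 1.0 makes it invisible."""
    if not 0.0 <= transparency <= 1.0:
        raise ValueError("transparency must be in [0, 1]")
    alpha = 1.0 - transparency
    # Linear interpolation between object and frame, per channel.
    return tuple(round(alpha * o + (1.0 - alpha) * f)
                 for o, f in zip(obj_rgb, frame_rgb))
```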
6. The method of claim 1, wherein the method further comprises:
detecting whether the user drags the target visual object in the video preview area; if so, moving the target visual object along the detected dragging track; and/or
detecting whether the user selects the target visual object in the video preview area; if so, generating a visual object editing frame, and correspondingly zooming and/or rotating the target visual object according to the zooming and/or rotating operation performed by the user in the visual object editing frame.
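The zoom/rotate operation in claim 6 is, in effect, a 2-D similarity transform of the editing frame about its centre. An illustrative point-by-point sketch (a real implementation would more likely apply a single transform matrix to the whole object):

```python
import math

def transform_point(x, y, cx, cy, scale, angle_deg):
    """Scale a point of the visual object's editing frame about its
    centre (cx, cy), then rotate it by angle_deg degrees about the
    same centre."""
    rad = math.radians(angle_deg)
    dx, dy = (x - cx) * scale, (y - cy) * scale
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))
```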
7. The method of claim 1, wherein if the target visual object is composed of a text portion and an image portion, the method further comprises:
detecting whether the user selects the text portion of the target visual object; if so, generating a text editing box, and correspondingly changing the text portion of the target visual object according to the input operation of the user in the text editing box; and/or
detecting whether the user selects the image portion of the target visual object; if so, generating an image editing frame, and correspondingly zooming and/or rotating the image portion of the target visual object according to the zooming and/or rotating operation performed by the user in the image editing frame.
8. An apparatus for adding visual objects to a video, comprising:
an instruction receiving unit configured to receive an instruction to add a visual object to a video, the instruction comprising a current frame image of a video to be edited;
a first display unit configured to display a video editing interface, the video editing interface comprising: a video preview area, a visual object selection area and a visual object configuration area; the video preview area is used for displaying the current frame image; the visual object selection area is used for displaying selectable visual objects; the visual object configuration area is used for receiving configuration information input by the user;
an acquisition unit configured to acquire at least one target visual object selected by the user from the visual object selection area;
a first receiving unit configured to receive configuration information of the at least one target visual object input by the user in the visual object configuration area;
an adding unit configured to add the at least one target visual object to at least one frame image of the video to be edited based on the received configuration information;
a second display unit configured to display, in the video preview area, the current frame image to which the at least one target visual object is added.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of adding a visual object in a video according to any one of claims 1 to 7.
10. A storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the method of adding visual objects to a video of any one of claims 1 to 7.
CN201910878160.9A 2019-09-17 2019-09-17 Method and device for adding visual object in video, electronic equipment and storage medium Pending CN110636382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910878160.9A CN110636382A (en) 2019-09-17 2019-09-17 Method and device for adding visual object in video, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110636382A true CN110636382A (en) 2019-12-31

Family

ID=68971084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910878160.9A Pending CN110636382A (en) 2019-09-17 2019-09-17 Method and device for adding visual object in video, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110636382A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811629A (en) * 2015-04-21 2015-07-29 上海极食信息科技有限公司 Method and system for acquiring video materials on same interface and conducting production on video materials
CN105657574A (en) * 2014-11-12 2016-06-08 阿里巴巴集团控股有限公司 Video file making method and device
US20170289643A1 (en) * 2016-03-31 2017-10-05 Valeria Kachkova Method of displaying advertising during a video pause
CN108363534A (en) * 2018-01-30 2018-08-03 优视科技新加坡有限公司 Global method for previewing, device and the electronic equipment of editable object
CN108628924A (en) * 2017-11-30 2018-10-09 昆山乌班图信息技术有限公司 A method of the html5 pages are generated based on JavaScript
CN109495791A (en) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 A kind of adding method, device, electronic equipment and the readable medium of video paster
CN109495790A (en) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 Paster adding method, device, electronic equipment and readable medium based on editing machine

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405344A (en) * 2020-03-18 2020-07-10 腾讯科技(深圳)有限公司 Bullet screen processing method and device
CN111629252A (en) * 2020-06-10 2020-09-04 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
KR102575848B1 (en) 2020-06-10 2023-09-06 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Video processing method and device, electronic device, and computer readable storage medium
KR20230016049A (en) * 2020-06-10 2023-01-31 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Video processing method and device, electronic device, and computer readable storage medium
CN111629252B (en) * 2020-06-10 2022-03-25 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
WO2021249168A1 (en) * 2020-06-10 2021-12-16 北京字节跳动网络技术有限公司 Video processing method and apparatus, electronic device, and computer readable storage medium
WO2022017450A1 (en) * 2020-07-23 2022-01-27 北京字节跳动网络技术有限公司 Previewing method and apparatus for effect application, and device and storage medium
CN111756952A (en) * 2020-07-23 2020-10-09 北京字节跳动网络技术有限公司 Preview method, device, equipment and storage medium of effect application
US11941728B2 (en) 2020-07-23 2024-03-26 Beijing Bytedance Network Technology Co., Ltd. Previewing method and apparatus for effect application, and device, and storage medium
CN113438532A (en) * 2021-05-31 2021-09-24 北京达佳互联信息技术有限公司 Video processing method, video playing method, video processing device, video playing device, electronic equipment and storage medium
CN113365133B (en) * 2021-06-02 2022-10-18 北京字跳网络技术有限公司 Video sharing method, device, equipment and medium
CN113365133A (en) * 2021-06-02 2021-09-07 北京字跳网络技术有限公司 Video sharing method, device, equipment and medium
CN113873294A (en) * 2021-10-19 2021-12-31 深圳追一科技有限公司 Video processing method and device, computer storage medium and electronic equipment
CN113873329A (en) * 2021-10-19 2021-12-31 深圳追一科技有限公司 Video processing method and device, computer storage medium and electronic equipment
CN114125555A (en) * 2021-11-12 2022-03-01 深圳麦风科技有限公司 Method, terminal and storage medium for previewing edited data
CN114125555B (en) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 Editing data preview method, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN109120981B (en) Information list display method and device and storage medium
CN110636382A (en) Method and device for adding visual object in video, electronic equipment and storage medium
CN109600659B (en) Operation method, device and equipment for playing video and storage medium
US20170344192A1 (en) Method and device for playing live videos
US20170068380A1 (en) Mobile terminal and method for controlling the same
CN111381739B (en) Application icon display method and device, electronic equipment and storage medium
CN110602394A (en) Video shooting method and device and electronic equipment
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
WO2022142871A1 (en) Video recording method and apparatus
EP3945490A1 (en) Method and device for processing video, and storage medium
EP3239827B1 (en) Method and apparatus for adjusting playing progress of media file
CN109451341B (en) Video playing method, video playing device, electronic equipment and storage medium
CN113207027B (en) Video playing speed adjusting method and device
WO2022205930A1 (en) Preview method for image effect, and preview apparatus for image effect
KR20180037235A (en) Information processing method and apparatus
CN112929561A (en) Multimedia data processing method and device, electronic equipment and storage medium
US20160124620A1 (en) Method for image deletion and device thereof
CN111736746A (en) Multimedia resource processing method and device, electronic equipment and storage medium
CN108984098B (en) Information display control method and device based on social software
WO2022160699A1 (en) Video processing method and video processing apparatus
CN114282022A (en) Multimedia editing method and device, electronic equipment and storage medium
WO2022262211A1 (en) Content processing method and apparatus
CN110809184A (en) Video processing method, device and storage medium
CN110737373B (en) Application interface control method, device, terminal and storage medium
CN113919311A (en) Data display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191231