CN110572717A - Video editing method and device - Google Patents

Video editing method and device

Info

Publication number
CN110572717A
CN110572717A
Authority
CN
China
Prior art keywords
editing, preset, layer, video, video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910941050.2A
Other languages
Chinese (zh)
Inventor
谷保震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201910941050.2A
Publication of CN110572717A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video editing method and a video editing device. The method comprises the following steps: acquiring a video editing request, wherein the editing request comprises first video data to be edited; playing the first video data in a first layer of a preset canvas; acquiring a video editing instruction, wherein the editing instruction comprises a target editing mode; generating an editing picture in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located above the first layer and is a transparent layer; capturing the picture displayed in the preset canvas at a first preset time interval; and synthesizing the captured pictures in capture order to generate edited second video data, wherein the playing time of each image in the second video data equals the first preset time interval. The efficiency of video editing is thereby improved, and the cost of video editing is reduced.

Description

video editing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular to a video editing method and apparatus.
Background
As an information recording medium, video is widely used in users' daily work and life, and with its wide application, the demand for video editing is becoming increasingly diversified.
In the related art, video content is edited by integrating a third-party open-source library. However, editing video this way requires the user to study the third-party library to master its editing methods, which results in a high learning cost.
Disclosure of Invention
The application provides a video editing method and device, aiming to solve the technical problems in the prior art that the learning cost of video editing is high and the editing efficiency is low.
An embodiment of the present application provides a video editing method, which comprises the following steps: acquiring a video editing request, wherein the editing request comprises first video data to be edited; playing the first video data in a first layer of a preset canvas; acquiring a video editing instruction, wherein the editing instruction comprises a target editing mode; generating an editing picture in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located above the first layer and is a transparent layer; capturing the picture displayed in the preset canvas at a first preset time interval; and synthesizing the captured pictures in capture order to generate edited second video data, wherein the playing time of each image in the second video data equals the first preset time interval.
In addition, the video editing method of the embodiment of the application further includes the following additional technical features:
In a possible implementation manner of the present application, before capturing the picture displayed in the preset canvas at the first preset time interval, the method further includes: controlling the editing picture and the first video data to be played at a preset rate, wherein the preset rate is less than the original playing rate of the first video data. The capturing of the picture displayed in the preset canvas then comprises: capturing the picture displayed in the preset canvas at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
In a possible implementation manner of the present application, the original playing rate of the first video data is V1, the preset rate is V2, and the first preset time interval is t1; the second preset time interval t2 is then: t2 = (V1 / V2) × t1.
In a possible implementation manner of the present application, the preset canvas includes an editing tool, and the acquiring of the video editing instruction comprises: generating an editing frame corresponding to the selected target tool in the second layer according to an obtained editing tool selection instruction; and determining the target editing mode according to an obtained editing frame operation instruction.
In a possible implementation manner of the present application, the preset canvas includes N editing tools, and the N editing frames of the N editing tools correspond to M layers, where M is a positive integer less than or equal to N, and the M layers have different transparencies.
In a possible implementation manner of the present application, capturing the picture displayed in the preset canvas at the first preset time interval includes: sending a screenshot instruction to a graphics processor at the first preset time interval, so that the graphics processor captures a picture from the preset canvas at the first preset time interval; and acquiring the picture returned by the graphics processor.
In a possible implementation manner of the present application, the preset canvas includes a first area and a second area, the first area being a transparent area and the second area a non-transparent area, wherein the display priority of the second area is higher than that of the first area. Before the picture displayed in the preset canvas is captured, the method further comprises: if a cropping instruction is obtained, adjusting the distribution of the first area and the second area in the preset canvas according to the cropping area in the cropping instruction. The capturing of the picture displayed in the preset canvas then comprises: capturing the picture displayed in the first area.
Another embodiment of the present application provides a video editing apparatus, including: an acquisition module, configured to acquire a video editing request, wherein the editing request comprises first video data to be edited; a playing module, configured to play the first video data in a first layer of a preset canvas, the acquisition module being further configured to acquire a video editing instruction, wherein the editing instruction comprises a target editing mode; a generation module, configured to generate an editing picture in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located above the first layer and is a transparent layer; a screenshot module, configured to capture the picture displayed in the preset canvas at a first preset time interval; and a synthesis module, configured to synthesize the captured pictures in capture order to generate edited second video data, wherein the playing time of each image in the second video data equals the first preset time interval.
In addition, the video editing apparatus according to the embodiment of the present application further includes the following additional technical features:
In a possible implementation manner of the present application, the playing module is further configured to control the editing picture and the first video data to be played at a preset rate, wherein the preset rate is less than the original playing rate of the first video data; the screenshot module is specifically configured to capture the picture displayed in the preset canvas at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
In a possible implementation manner of the present application, the preset canvas includes an editing tool, and the acquisition module is specifically configured to: generate an editing frame corresponding to the selected target tool in the second layer according to an obtained editing tool selection instruction; and determine the target editing mode according to an obtained editing frame operation instruction.
In a possible implementation manner of the present application, the screenshot module is specifically configured to: send a screenshot instruction to a graphics processor at the first preset time interval, so that the graphics processor captures a picture from the preset canvas at the first preset time interval; and acquire the picture returned by the graphics processor.
In a possible implementation manner of the present application, the preset canvas includes a first area and a second area, the first area being a transparent area and the second area a non-transparent area, wherein the display priority of the second area is higher than that of the first area. The apparatus further comprises an adjusting module, configured to, if a cropping instruction is obtained before the picture displayed in the preset canvas is captured, adjust the distribution of the first area and the second area in the preset canvas according to the cropping area in the cropping instruction; the screenshot module is then specifically configured to capture the picture displayed in the first area.
Yet another embodiment of the present application provides an electronic device, which includes a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the video editing method according to the foregoing embodiments of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the video editing method according to the above embodiments of the present application.
The technical scheme provided by the embodiments of the present application can have the following beneficial effects:
A video editing request is obtained and the first video data is played in a first layer of a preset canvas; a video editing instruction is obtained, wherein the editing instruction includes a target editing mode; an editing picture is generated in a second layer of the preset canvas according to the target editing mode, the second layer being a transparent layer located above the first layer; the pictures displayed in the preset canvas are then captured at a first preset time interval; finally, the pictures are synthesized in capture order to generate edited second video data, wherein the playing time of each image in the second video data equals the first preset time interval. Thus, the efficiency of video editing is improved and the cost of video editing is reduced.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a video editing method according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a video editing request transmission scenario according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of video editing according to an embodiment of the present application;
FIG. 4 is a schematic view of a first region and a second region distribution according to one embodiment of the present application;
FIG. 5 is a schematic view of a first region and a second region distribution according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of a video editing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a video editing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a video editing apparatus according to another embodiment of the present application; and
FIG. 9 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A video editing method and apparatus according to embodiments of the present application are described below with reference to the drawings.
The execution subject of the video editing method in the embodiments of the present application may be a device with a graphics processor, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device; the wearable device may be a smart bracelet, a smart watch, smart glasses, or the like.
Fig. 1 is a flowchart of a video editing method according to an embodiment of the present application, as shown in fig. 1, the method including:
Step 101: a video editing request is obtained, wherein the editing request includes the first video data to be edited.
The first video data to be edited may be shot by the user in real time, selected by the user from previously shot videos, or downloaded from a video platform.
It should be noted that the manner of obtaining the video editing request differs across application scenarios, as illustrated by the following examples:
First example:
In this example, the video editing request is sent by voice.
Specifically, after the user selects the first video data, the user's voice data is collected through a sound pickup device; when a keyword such as "edit video" is recognized in the collected voice data, a video editing request for the selected first video data is obtained.
Second example:
In this example, the user sends the video editing request in the form of an action.
Specifically, after the user selects the first video data, a gesture or facial expression of the user is collected through a camera, a touch screen, or the like; the collected action is matched against a preset action, and if the match succeeds, an editing request of the user for the first video data is obtained.
Of course, in this example, the manner in which the user selects the first video data may also be determined from an action, which is not described again here.
Third example:
In this example, as shown in fig. 2, a selection control, an editing menu, and the like are provided on the video interface. The user may trigger the selection control to select the first video data and then send the video editing request by triggering the editing menu.
Step 102: the first video data is played in a first layer of the preset canvas.
It should be appreciated that this embodiment provides a visual editing interface for video editing, which provides a preset canvas on which the visual editing of the video takes place.
Specifically, to make it convenient for the user to edit the first video data, the first video data is played in the first layer of the preset canvas, so that the user can perform corresponding personalized editing according to the content of the first video data.
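The layer stack described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the `Layer` and `Canvas` classes and their fields are hypothetical names invented here, not taken from the patent; the only property carried over is that the bottom layer plays the first video data and higher layers render on top of it.

```python
class Layer:
    """One drawable layer of the preset canvas; a higher z renders on top."""
    def __init__(self, z, transparent=False, alpha=1.0):
        self.z = z                    # stacking order
        self.transparent = transparent
        self.alpha = alpha            # 1.0 = fully opaque
        self.content = None           # video frames or an editing picture

class Canvas:
    """Preset canvas holding its layers sorted by stacking order."""
    def __init__(self):
        self.layers = []

    def add_layer(self, layer):
        self.layers.append(layer)
        self.layers.sort(key=lambda l: l.z)
        return layer

    def play_video(self, frames):
        # Step 102: the first (bottom) layer plays the first video data.
        self.layers[0].content = frames

canvas = Canvas()
first_layer = canvas.add_layer(Layer(z=0))                     # video layer
second_layer = canvas.add_layer(Layer(z=1, transparent=True))  # editing layer
canvas.play_video(["frame0", "frame1"])
```

The sorted insertion keeps the transparent editing layer above the video layer regardless of the order the layers are registered in.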
Step 103: a video editing instruction is obtained, wherein the editing instruction includes a target editing mode.
Step 104: an editing picture is generated in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located above the first layer and is a transparent layer.
Specifically, while the first video data is playing, a video editing instruction is obtained. The video editing instruction includes a target editing mode, which corresponds to the specific editing content applied to the video, such as adding text (including text formatting), adding animation, adding special effects (such as particle effects like fireworks), changing colors, adding filters, and the like. Different target editing modes may be combined; for example, combining the text-addition and animation-addition modes can achieve an effect of text continuously flipping left and right.
Furthermore, an editing picture is generated in the second layer of the preset canvas according to the target editing mode. The second layer is located above the first layer and is a transparent layer whose transparency may be set according to the user's display requirements: if the user wants the picture content of the first layer to appear relatively blurred, the transparency may be set relatively low; otherwise, it may be set relatively high. In this way, the video picture in the first layer and the editing picture in the second layer are displayed together in the preset canvas; for example, as shown in fig. 3, when the editing picture in the second layer is text, the text and the video picture are displayed together.
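Displaying a partially transparent editing layer over a video picture is an instance of "over" compositing, which can be illustrated per pixel. This is a sketch under assumptions the patent does not state: pixels are 8-bit RGB tuples and the editing layer's opacity is a single alpha value.

```python
def blend_pixel(edit_px, video_px, alpha):
    """Composite one edit-layer pixel over one video-layer pixel.

    alpha is the opacity of the second (editing) layer: 0.0 shows only
    the video picture, 1.0 shows only the editing picture.
    """
    return tuple(round(alpha * e + (1.0 - alpha) * v)
                 for e, v in zip(edit_px, video_px))

# A white text pixel over a black video pixel at 25% opacity
# yields a dark grey pixel, letting the video show through:
blend_pixel((255, 255, 255), (0, 0, 0), 0.25)
```

Lower alpha leaves the underlying video picture more visible, matching the "relatively high transparency" setting described above.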
It should be noted that the manner of triggering the video editing instruction differs across application scenarios, as illustrated by the following examples:
First example:
In this example, the video editing instruction is triggered by voice, and the corresponding video editing instruction is determined from the user's voice data through keyword or semantic recognition.
Second example:
In this example, editing tools such as a text editing tool and a special-effect editing tool are displayed in the preset canvas. According to the user's selection instruction for an editing tool, an editing frame corresponding to the selected target tool is generated in the second layer; for example, when the user selects the text editing tool, the corresponding text editing frame is displayed in the second layer. A specific editing operation instruction manually input by the user may be received in the editing frame; the editing frame may also contain a number of editing operation controls, and the specific editing operation instruction may be determined from the user's selection of those controls. The target editing mode is then determined according to the obtained editing frame operation instruction.
In actual execution, the user may issue several editing operation instructions, for example instructions to add multiple animated special effects. To prevent the editing effects corresponding to different editing operation instructions from conflicting with each other and becoming hard to display together, the target editing modes corresponding to different editing operation instructions may be rendered in different layers, and the layers are distinguished by transparency so that the editing effects do not conflict.
Specifically, the preset canvas includes N editing tools, and the N editing frames of the N editing tools correspond to M layers, where M is a positive integer less than or equal to N, and the M layers are given different transparencies. The transparency assigned to each layer may be determined by the stacking order of the layers, for example a higher layer having a higher transparency, or by the priority of the editing operation instruction carried in the layer, for example a layer corresponding to a higher-priority editing operation instruction having a lower transparency.
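The priority-based variant of this rule can be sketched as follows. The helper below is a hypothetical illustration, not the patent's implementation: it assigns each editing instruction's layer a distinct opacity, with higher-priority instructions made more opaque (i.e. less transparent), as the paragraph above describes.

```python
def assign_layer_opacities(priorities):
    """Map each of the M editing layers to a distinct opacity (alpha).

    Higher-priority editing instructions get lower transparency,
    i.e. higher alpha; all layers end up with different values.
    """
    step = 1.0 / (len(priorities) + 1)
    # Rank layer indices from highest to lowest priority.
    ranked = sorted(range(len(priorities)),
                    key=lambda i: priorities[i], reverse=True)
    return {idx: 1.0 - rank * step for rank, idx in enumerate(ranked)}

# Three editing instructions with priorities 3 > 2 > 1:
opacities = assign_layer_opacities([3, 1, 2])
```

With the input above, layer 0 (priority 3) is the most opaque and layer 1 (priority 1) the most transparent, so overlapping effects remain distinguishable.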
Step 105: the picture displayed in the preset canvas is captured at a first preset time interval.
It can be understood that in this embodiment the picture displayed in the preset canvas is captured periodically at the first preset time interval, so that the edited picture is obtained in time.
As a possible implementation, a screenshot instruction may be sent to the graphics processor at the first preset time interval, triggered for example by voice or by a screenshot control. After receiving the screenshot instruction, the graphics processor captures a picture from the preset canvas at the first preset time interval and returns the captured picture. It should be noted that, because the editing picture is rendered by the graphics processor in this embodiment, the problem of dropped frames in the played first video data can be avoided.
The first preset time interval may be fixed based on experimental data. To ensure the integrity of the captured images, it may also be set according to the playing speed of the first video data; for example, if 24 frames are displayed per second, the corresponding first preset time interval is 1/24 s. When the playing speed of the first video data changes in real time, the first preset time interval may change in real time as well.
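Deriving the first preset time interval from the playing speed, as in the 24-frames-per-second example above, is a one-line computation; the function name here is an assumption for illustration:

```python
def first_interval(frames_per_second):
    """First preset time interval: one screenshot per displayed frame."""
    return 1.0 / frames_per_second

first_interval(24)   # 1/24 s, matching the example above
```

If the playing speed changes in real time, the interval is simply recomputed from the current frame rate.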
However, when the screenshots are exported, the speed of storing a video frame is usually slower than the screenshot speed; if the screenshot speed is not reduced, more and more screenshot frames accumulate in the cache, and the memory may eventually overflow.
Therefore, in an embodiment of the present application, the editing picture and the first video data may be controlled to play at a preset rate that is lower than the original playing rate of the first video data; that is, when the edited picture is exported, the playing rates of the video and the corresponding editing picture are slowed down synchronously, for example to one third of the original playing rate, which facilitates the screenshot operation. The pictures displayed in the preset canvas are then captured at a second preset time interval, which is longer than the first preset time interval and may be determined according to the storage speed; each picture is deleted from the cache once stored, so that memory overflow is avoided.
As a possible implementation, with the original playing rate of the first video data being V1, the preset rate being V2, and the first preset time interval being t1, the second preset time interval t2 may be calculated according to the following formula (1): t2 = (V1 / V2) × t1 (1). This ensures that the second preset time interval is longer than the first preset time interval.
Of course, in an embodiment of the present application, the moments at which the user edits the picture may also be determined, and the screenshot operation may be performed only within the playing time corresponding to those edited pictures. This avoids capturing video frames in which the user made no edits, and further meets the user's personalized requirements.
Step 106: the pictures are synthesized in sequence according to their capture order to generate edited second video data, wherein the playing time of each image in the second video data equals the first preset time interval.
Specifically, the pictures are synthesized, for example spliced, in their capture order to generate the edited second video data, with the playing time of each image equal to the first preset time interval. Thus, regardless of whether the playing rate of the first video data and the edited picture was slowed down while exporting the pictures, the second video data still plays at the original rate, and the playing effect is unaffected.
For example, suppose the first preset time interval is x and equals the original frame-switching interval of the video. When the video is exported, the playing rate of the first video data (and of the edited picture) may be reduced to one third of the original, so that each frame is displayed for 3x and screenshots are taken at intervals of 3x; however, the second video data is finally synthesized with a time interval of x per image. Although playback and synthesis during export are slow, the synthesized video plays at the normal rate, so the finally exported video plays at normal speed.
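The synthesis in step 106 can be sketched as timestamping each captured picture one first-preset-interval apart, independent of any slow-down during capture. The helper below is a hypothetical illustration, not the patent's implementation:

```python
def compose_second_video(screenshots, t1):
    """Synthesize the edited second video data.

    Pictures are kept in capture order and each is assigned a display
    time of t1 (the first preset time interval), so playback runs at
    the original rate even if capture was slowed down during export.
    """
    return [{"t": round(i * t1, 9), "picture": p}
            for i, p in enumerate(screenshots)]

timeline = compose_second_video(["s0", "s1", "s2"], 1 / 24)
```

Because the timestamps depend only on t1 and the capture order, the slowed export rate never leaks into the finished video.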
Considering that in some application scenarios the user wishes to crop the picture while editing the first video data, for example to keep only part of a person in the picture, an embodiment of the present application responds to the user's cropping instruction in order to meet such personalized requirements.
In this embodiment, the preset canvas includes a first area and a second area, whose numbers can be set arbitrarily. The first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area; that is, when a picture in the canvas moves from the first area into the second area, the non-transparent picture of the second area is displayed instead of the corresponding picture. Before the picture displayed in the preset canvas is captured, if a cropping instruction is obtained, the distribution of the first area and the second area in the preset canvas is adjusted according to the cropping area in the instruction, i.e. the part of the first area outside the cropping area is covered by the second area. In subsequent screenshots, the picture in the second area is no longer displayed, which realizes the cropping of the relevant picture.
The cropping instruction may be triggered through a touch-screen gesture track or through a selection operation with a cropping-area selection tool. The cropping area may be of any shape, such as a circle or a square, which is not enumerated here.
In addition, the way the distribution of the first area and the second area in the preset canvas is adjusted according to the cropping area differs across application scenarios. As one possible implementation, when the number and distribution of the second areas and the first area are as shown in fig. 4, a user's move operation on the picture in the first area can be received; as the user moves the display picture left and right, the portions of the display picture occluded by the second areas are effectively cropped.
As another possible implementation, when the number and distribution of the second areas and the first area are as shown in fig. 5, a second area can be dragged to cover the corresponding part of the picture in the first area, and all portions of the display picture occluded by the second areas are effectively cropped.
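The net effect of covering part of the canvas with the opaque second area is that only the uncovered region survives the screenshot. A minimal sketch, assuming a frame represented as a 2-D list of pixels and a rectangular cropping area (the patent allows arbitrary shapes; the rectangle and the function name are illustrative assumptions):

```python
def crop_to_first_area(frame, crop_rect):
    """Keep only the pixels left visible in the transparent first area.

    frame: 2-D list of pixels (rows of columns).
    crop_rect: (x0, y0, x1, y1), the rectangle NOT covered by the
    opaque second area; everything outside it is occluded and so is
    absent from the screenshot of the first area.
    """
    x0, y0, x1, y1 = crop_rect
    return [row[x0:x1] for row in frame[y0:y1]]

frame = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
crop_to_first_area(frame, (1, 1, 3, 3))   # [[5, 6], [8, 9]]
```

A full-frame rectangle leaves the picture unchanged, matching the case where no cropping instruction is given.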
Therefore, according to the embodiment, the video is edited according to the visualization operation in the preset canvas, the synthesized second video data can be used for recording life fragments, dynamically publishing in social software, applying dynamic wallpaper and the like, and the video editing operation does not depend on the learning and integration of codes any more.
To sum up, the video editing method according to the embodiment of the present application obtains a video editing request and plays first video data in a first layer of a preset canvas; obtains a video editing instruction, where the editing instruction includes a target editing mode; generates an editing picture in a second layer of the preset canvas according to the target editing mode, where the second layer is located above the first layer and is a transparent layer; captures the pictures displayed in the preset canvas at a first preset time interval; and finally synthesizes the captured pictures in capture order to generate edited second video data, where the playing time of each image in the second video data is the first preset time interval. The efficiency of video editing is thereby improved, and the cost of video editing is reduced.
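The overall flow summarized above (play in the first layer, overlay edits from the transparent second layer, capture at the first preset interval, then compose) can be sketched as follows; the data model and all names are illustrative assumptions, not the claimed implementation:

```python
# Minimal sketch of the capture-and-compose pipeline: each canvas state is the
# first-layer video frame with the transparent second-layer edits overlaid;
# each capture becomes one image of the second video, played for t1 seconds.
# Frames are modeled as dicts of named pixels purely for illustration.

T1 = 0.04  # first preset time interval in seconds (assumed value)

def composite(video_frame, edit_layer):
    """Overlay the transparent edit layer on the video frame: wherever the
    edit layer has content, it wins; elsewhere the video shows through."""
    return {k: edit_layer.get(k, v) for k, v in video_frame.items()}

def capture_and_compose(video_frames, edit_layers, interval=T1):
    """Screenshot each composited canvas state and build the second video
    as (frame, play_time) pairs, ordered by capture time."""
    second_video = []
    for video_frame, edit_layer in zip(video_frames, edit_layers):
        shot = composite(video_frame, edit_layer)
        second_video.append((shot, interval))  # each image plays for t1
    return second_video

video = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
edits = [{}, {"b": 99}]            # second frame carries an edit stroke
out = capture_and_compose(video, edits)
```

Because the second layer is transparent where no edit exists, unedited pixels of the first-layer video pass through unchanged, which is the effect the layered canvas is designed to achieve.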
In order to implement the above embodiments, the present application further provides a video editing apparatus.
Fig. 6 is a schematic structural diagram of a video editing apparatus according to an embodiment of the present application. As shown in fig. 6, the apparatus includes: an obtaining module 100, a playing module 200, a generating module 300, a screenshot module 400, and a synthesizing module 500, wherein:
The obtaining module 100 is configured to obtain a video editing request, where the editing request includes first video data to be edited.
The playing module 200 is configured to play the first video data in the first layer of the preset canvas.
The obtaining module 100 is further configured to obtain a video editing instruction, where the editing instruction includes a target editing mode.
The generating module 300 is configured to generate an editing picture in a second layer of the preset canvas according to the target editing mode, where the second layer is located on an upper layer of the first layer and is a transparent layer.
The screenshot module 400 is configured to capture the picture displayed in the preset canvas at a first preset time interval.
The synthesizing module 500 is configured to synthesize the captured pictures in sequence according to their capture order to generate edited second video data, where the playing time of each image in the second video data is the first preset time interval.
In one embodiment of the present application, as shown in fig. 7, on the basis of fig. 6, the apparatus further includes a playing module 600, wherein:
The playing module 600 is configured to control the editing picture and the first video data to be played at a preset rate, where the preset rate is less than the original playing rate of the first video data.
In this embodiment, the screenshot module 400 is specifically configured to:
capture the picture displayed in the preset canvas at a second preset time interval, where the second preset time interval is longer than the first preset time interval.
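A short numeric illustration of why the second preset interval must exceed the first when playback is slowed. The relation t2 = t1 * V1 / V2 is an assumption introduced here for illustration only, reusing the variable names V1, V2, t1, t2 from claim 3; the text itself states only that t2 is longer than t1:

```python
# Illustrative timing check: if the first video plays at rate V2 instead of
# its original rate V1 (V2 < V1), wall-clock playback stretches by V1 / V2.
# Capturing every t2 = t1 * V1 / V2 seconds then yields the same number of
# frames as capturing every t1 at normal speed, so the composed second video
# (each frame shown for t1) keeps the original content duration.
# This relation is an assumption for illustration; the source text does not
# state the formula, only that t2 > t1.

V1, V2 = 1.0, 0.5          # original and slowed playback rates (assumed)
t1 = 0.04                  # first preset time interval (assumed)
t2 = t1 * V1 / V2          # longer second preset time interval

duration = 10.0                         # content duration at normal speed
wall_time = duration * V1 / V2          # slowed playback takes longer
frames = wall_time / t2                 # captures taken during playback
output_duration = frames * t1           # each capture plays for t1
```

Under this assumed relation, slowing playback does not change the duration of the composed output; it only gives the user more time to draw edits between captures.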
In an embodiment of the present application, the preset canvas includes an editing tool, and the obtaining module 100 is specifically configured to:
generate an edit frame corresponding to the selected target tool in the second layer according to the obtained edit-tool selection instruction; and
determine the target editing mode according to the obtained edit-frame operation instruction.
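The two-step derivation of the target editing mode (a tool selection creates an edit frame in the second layer; an operation on that frame fixes the mode) can be sketched as follows; the tool names, mode strings, and handler names are illustrative assumptions:

```python
# Sketch of deriving the target editing mode from user actions. Selecting a
# tool creates an edit frame in the transparent second layer; operating on
# that frame determines the target editing mode. All names are illustrative.

EDIT_TOOLS = {"text": "add_text", "sticker": "add_sticker", "doodle": "draw"}

def on_tool_selected(second_layer, tool):
    """Handle an edit-tool selection instruction: create an edit frame for
    the selected target tool in the second layer."""
    frame = {"tool": tool, "content": None}
    second_layer.append(frame)
    return frame

def on_frame_operated(frame, content):
    """Handle an edit-frame operation instruction: the operation on the edit
    frame determines the target editing mode."""
    frame["content"] = content
    return EDIT_TOOLS[frame["tool"]]

layer2 = []                                   # the transparent second layer
frame = on_tool_selected(layer2, "text")      # user picks the text tool
mode = on_frame_operated(frame, "Hello")      # user types into the frame
```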
In an embodiment of the present application, the screenshot module 400 is specifically configured to: send a screenshot instruction to a graphics processor at the first preset time interval, so that the graphics processor captures a picture from the preset canvas at the first preset time interval; and
obtain the frames returned by the graphics processor.
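The interaction between the screenshot module and the graphics processor can be sketched with a stub; GpuStub and the loop below are hypothetical stand-ins, since the text does not specify the GPU interface (a real implementation would perform a framebuffer readback of the rendered canvas):

```python
# Sketch of the screenshot module: it issues a capture instruction to the
# graphics processor once per interval and collects the returned frames in
# capture order. GpuStub is a hypothetical stand-in for the real GPU.

class GpuStub:
    """Pretend GPU that returns the current canvas content per capture request."""
    def __init__(self, canvas_states):
        self._states = iter(canvas_states)

    def capture(self):
        return next(self._states)

def run_screenshot_loop(gpu, n_captures, interval):
    """Collect n_captures frames, one per interval (the real wait between
    requests, e.g. time.sleep(interval), is elided for testability)."""
    frames = []
    for _ in range(n_captures):
        frames.append(gpu.capture())
    return frames

gpu = GpuStub(["frame0", "frame1", "frame2"])
frames = run_screenshot_loop(gpu, 3, 0.04)
```

Keeping the frames in request order is what lets the synthesizing module assemble them into the second video without reordering.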
In one embodiment of the present application, as shown in fig. 8, on the basis of fig. 6, the apparatus further includes an adjusting module 600, where the preset canvas includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area.
The adjusting module 600 is configured to, if a cropping instruction is obtained before the picture displayed in the preset canvas is captured, adjust the distribution of the first area and the second area in the preset canvas according to the cropping area in the cropping instruction.
In this embodiment, the screenshot module 400 is specifically configured to capture the picture displayed in the first area.
It should be noted that the foregoing explanation of the video editing method embodiments also applies to the video editing apparatus of this embodiment and is not repeated here.
To sum up, the video editing apparatus according to the embodiment of the present application obtains a video editing request and plays first video data in a first layer of a preset canvas; obtains a video editing instruction, where the editing instruction includes a target editing mode; generates an editing picture in a second layer of the preset canvas according to the target editing mode, where the second layer is located above the first layer and is a transparent layer; captures the pictures displayed in the preset canvas at a first preset time interval; and finally synthesizes the captured pictures in capture order to generate edited second video data, where the playing time of each image in the second video data is the first preset time interval. The efficiency of video editing is thereby improved, and the cost of video editing is reduced.
In order to implement the foregoing embodiments, an embodiment of the present application further provides an electronic device, including a processor and a memory,
wherein the processor, by reading executable program code stored in the memory, runs a program corresponding to the executable program code to implement the video editing method described in the above embodiments.
FIG. 9 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device 12 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive"). Although not shown in FIG. 9, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the video editing method described in the above embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art can make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present invention.

Claims (10)

1. A video editing method, comprising:
acquiring a video editing request, wherein the editing request comprises first video data to be edited;
playing the first video data in a first layer of a preset canvas;
acquiring a video editing instruction, wherein the editing instruction comprises a target editing mode;
generating an editing picture in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located on an upper layer of the first layer and is a transparent layer;
capturing a picture displayed in the preset canvas at a first preset time interval; and
synthesizing the pictures in sequence according to the capture order of the pictures to generate edited second video data, wherein the playing time of each image in the second video data is the first preset time interval.
2. The method of claim 1, wherein before capturing the picture displayed in the preset canvas at the first preset time interval, the method further comprises:
controlling the editing picture and the first video data to be played at a preset rate, wherein the preset rate is less than the original playing rate of the first video data;
wherein capturing the picture displayed in the preset canvas comprises:
capturing the picture displayed in the preset canvas at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
3. The method of claim 2, wherein the original playing rate of the first video data is V1, the preset rate is V2, the first preset time interval is t1, and the second preset time interval is t2:
4. The method of claim 1, wherein the preset canvas includes an editing tool;
wherein the acquiring of the video editing instruction comprises:
generating an edit frame corresponding to the selected target tool in the second layer according to the obtained edit-tool selection instruction; and
determining the target editing mode according to the obtained edit-frame operation instruction.
5. The method according to claim 4, wherein the preset canvas comprises N editing tools, and the N edit frames of the N editing tools correspond to M layers, wherein M is an integer less than or equal to N; and
the transparencies of the M layers are different.
6. The method according to any one of claims 1 to 5, wherein capturing the picture displayed in the preset canvas at the first preset time interval comprises:
sending a screenshot instruction to a graphics processor at the first preset time interval, so that the graphics processor captures a picture from the preset canvas at the first preset time interval; and
acquiring the frames returned by the graphics processor.
7. The method according to any one of claims 1 to 5, wherein the preset canvas comprises a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area;
wherein before the picture displayed in the preset canvas is captured, the method further comprises:
if a cropping instruction is obtained, adjusting the distribution of the first area and the second area in the preset canvas according to the cropping area in the cropping instruction;
and wherein capturing the picture displayed in the preset canvas comprises:
capturing the picture displayed in the first area.
8. A video editing apparatus, comprising:
an obtaining module, configured to obtain a video editing request, wherein the editing request comprises first video data to be edited;
a playing module, configured to play the first video data in a first layer of a preset canvas;
wherein the obtaining module is further configured to obtain a video editing instruction, wherein the editing instruction comprises a target editing mode;
a generating module, configured to generate an editing picture in a second layer of the preset canvas according to the target editing mode, wherein the second layer is located on an upper layer of the first layer and is a transparent layer;
a screenshot module, configured to capture a picture displayed in the preset canvas at a first preset time interval; and
a synthesizing module, configured to synthesize the pictures in sequence according to the capture order of the pictures to generate edited second video data, wherein the playing time of each image in the second video data is the first preset time interval.
9. The apparatus of claim 8, further comprising:
a playing module, configured to control the editing picture and the first video data to be played at a preset rate, wherein the preset rate is less than the original playing rate of the first video data;
wherein the screenshot module is specifically configured to:
capture the picture displayed in the preset canvas at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
10. The apparatus of claim 8, wherein the preset canvas includes an editing tool, and the obtaining module is specifically configured to:
generate an edit frame corresponding to the selected target tool in the second layer according to the obtained edit-tool selection instruction; and
determine the target editing mode according to the obtained edit-frame operation instruction.
CN201910941050.2A 2019-09-30 2019-09-30 Video editing method and device Pending CN110572717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910941050.2A CN110572717A (en) 2019-09-30 2019-09-30 Video editing method and device


Publications (1)

Publication Number Publication Date
CN110572717A 2019-12-13

Family

ID=68783602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910941050.2A Pending CN110572717A (en) 2019-09-30 2019-09-30 Video editing method and device

Country Status (1)

Country Link
CN (1) CN110572717A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3525403A1 (en) * 2013-01-29 2019-08-14 Huawei Technologies Co., Ltd. Video sms message sending and receiving methods and apparatuses thereof, and handheld electronic device
CN104023272A (en) * 2014-06-25 2014-09-03 北京奇艺世纪科技有限公司 Video screen editing method and device
CN104703042A (en) * 2015-03-18 2015-06-10 天脉聚源(北京)传媒科技有限公司 Method and device for editing videos
CN105744182A (en) * 2016-04-22 2016-07-06 广东小天才科技有限公司 Video production method and device
WO2018149175A1 (en) * 2017-02-20 2018-08-23 北京金山安全软件有限公司 Video-recording method and apparatus, and electronic device
CN109996109A (en) * 2019-03-19 2019-07-09 北京奇艺世纪科技有限公司 A kind of image processing method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634404A (en) * 2020-06-28 2021-04-09 西安诺瓦星云科技股份有限公司 Layer fusion method, device and system
CN111935505A (en) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 Video cover generation method, device, equipment and storage medium
CN111935505B (en) * 2020-07-29 2023-04-14 广州华多网络科技有限公司 Video cover generation method, device, equipment and storage medium
CN112862927A (en) * 2021-01-07 2021-05-28 北京字跳网络技术有限公司 Method, apparatus, device and medium for publishing video
CN112862927B (en) * 2021-01-07 2023-07-25 北京字跳网络技术有限公司 Method, apparatus, device and medium for publishing video
CN113138765A (en) * 2021-05-19 2021-07-20 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
WO2022242380A1 (en) * 2021-05-19 2022-11-24 上海商汤智能科技有限公司 Method and apparatus for interaction, device, and storage medium

Similar Documents

Publication Publication Date Title
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
US11645804B2 (en) Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN110572717A (en) Video editing method and device
US7084875B2 (en) Processing scene objects
US9852764B2 (en) System and method for providing and interacting with coordinated presentations
KR20210082232A (en) Real-time video special effects systems and methods
CN107005458B (en) Unscripted digital media message generation method, device, electronic equipment and readable medium
CN108845741B (en) AR expression generation method, client, terminal and storage medium
US20140193138A1 (en) System and a method for constructing and for exchanging multimedia content
WO2014182508A1 (en) Audio-video compositing and effects
CN112053449A (en) Augmented reality-based display method, device and storage medium
CN112053370A (en) Augmented reality-based display method, device and storage medium
WO2023151611A1 (en) Video recording method and apparatus, and electronic device
US20230326110A1 (en) Method, apparatus, device and media for publishing video
JP2023551670A (en) Page switching display method, device, storage medium and electronic equipment
CN113302622A (en) System and method for providing personalized video
CN113660528A (en) Video synthesis method and device, electronic equipment and storage medium
US20140282000A1 (en) Animated character conversation generator
CN114466232A (en) Video processing method, video processing device, electronic equipment and medium
CN110703973B (en) Image cropping method and device
CN113301356A (en) Method and device for controlling video display
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN111757177B (en) Video clipping method and device
CN114374872A (en) Video generation method and device, electronic equipment and storage medium
CN114025237A (en) Video generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213
