CN110636365B - Video character adding method and device, electronic equipment and storage medium - Google Patents

Video character adding method and device, electronic equipment and storage medium

Info

Publication number
CN110636365B
Authority
CN
China
Prior art keywords: area, video, preset, playing, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910941059.3A
Other languages
Chinese (zh)
Other versions
CN110636365A (en)
Inventor
谷保震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201910941059.3A priority Critical patent/CN110636365B/en
Publication of CN110636365A publication Critical patent/CN110636365A/en
Application granted granted Critical
Publication of CN110636365B publication Critical patent/CN110636365B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/4312 — … involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 — … involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 — … involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440227 — … by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N21/47 — End-user applications
    • H04N21/488 — Data services, e.g. news ticker
    • H04N21/4888 — … for displaying teletext characters
    • H04N21/80 — Generation or processing of content or additional data by content creator independently of the distribution process; content per se
    • H04N21/81 — Monomedia components thereof
    • H04N21/8166 — … involving executable data, e.g. software

Abstract

The application discloses a video character adding method and apparatus. The method includes the following steps: acquiring a video character adding request, wherein the adding request includes a first video to be processed; playing the first video in a first layer of a preset canvas playing area; when a character input instruction is obtained, displaying a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer lies above the first layer and is a transparent layer; intercepting the picture displayed in the preset canvas playing area at a first preset time interval; and synthesizing the intercepted pictures in interception order to generate a second video with characters added, wherein the playing time of each picture in the second video is the first preset time. A video editing mode for adding characters to a video with low learning cost and high efficiency is thereby realized.

Description

Video character adding method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for adding video characters.
Background
Video is widely used as an information recording medium in users' daily production and life, and with its wide application, demands for video-based editing, such as adding text to a video, are becoming more diverse.
In the related art, video content is edited by accessing a third-party open source library. However, this approach requires the user to study the third-party library in order to master the editing method, which results in a high learning cost.
Disclosure of Invention
The application provides a video character adding method and apparatus, aiming to solve the technical problems in the prior art that adding characters to a video has a high learning cost and low efficiency.
The embodiment of the application provides a video character adding method, which comprises the following steps: acquiring a video character adding request, wherein the adding request comprises a first video to be processed; playing the first video in a first layer of a preset canvas playing area; when a character input instruction is obtained, displaying a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer is located on the upper layer of the first layer and is a transparent layer; intercepting the picture displayed in the preset canvas playing area according to a first preset time interval; and sequentially synthesizing the pictures according to the intercepting sequence of the pictures to generate a second video added with characters, wherein the playing time of each picture in the second video is the first preset time.
In addition, the video character adding method of the embodiment of the present application further comprises the following additional technical features:
in a possible implementation manner of the present application, after the playing the first video in the first layer of the preset canvas playing area, the method further includes: displaying a character frame in a second layer of the preset canvas playing area, and moving a focus of a display interface into the character frame; or displaying a character control in a non-playing area of the preset canvas.
In a possible implementation manner of the present application, after displaying a target character corresponding to the input instruction in the second layer of the preset canvas play area, the method further includes: and when a character adjusting instruction is obtained, adjusting the display style of the target character according to the adjusting instruction.
In a possible implementation manner of the present application, before intercepting, according to a first preset time interval, a picture displayed in the preset canvas play area, the method further includes: controlling the editing picture and the first video to play at a preset speed, wherein the preset speed is less than the original playing speed of the first video; the capturing the picture displayed in the preset canvas playing area comprises the following steps: and intercepting the picture displayed in the preset canvas playing area according to a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
In a possible implementation manner of the present application, the original playing rate of the first video is V1, the preset rate is V2, the first preset time interval is t1, and the second preset time interval t2 satisfies:

$$t_2 = \frac{V_1}{V_2} \times t_1$$
In a possible implementation manner of the present application, the capturing the picture displayed in the preset canvas play area according to a first preset time interval includes: sending a screenshot instruction to a graphics processor at the first preset time interval so that the graphics processor can intercept a picture from the preset canvas playing area according to the first preset time interval; and acquiring a frame returned by the graphics processor.
In a possible implementation manner of the present application, the preset canvas play area includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than the display priority of the first area; after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area, the method includes: if the cutting instruction is obtained, adjusting the distribution mode of the first area and the second area in the playing area according to the cutting area in the cutting instruction; the capturing the picture displayed in the preset canvas playing area comprises the following steps: and intercepting the picture displayed in the first area.
Another embodiment of the present application provides a video character adding apparatus, including: the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a video character adding request, and the adding request comprises a first video to be processed; the playing module is used for playing the first video in a first layer of a preset canvas playing area; the display module is used for displaying a target character corresponding to the input instruction in a second layer of the preset canvas playing area when the character input instruction is obtained, wherein the second layer is located on the upper layer of the first layer and is a transparent layer; the screen capture module is used for capturing the picture displayed in the preset canvas playing area according to a first preset time interval; and the synthesis module is used for sequentially synthesizing the pictures according to the interception sequence of the pictures to generate a second video added with characters, wherein the playing time of each picture in the second video is the first preset time.
In addition, the video character adding device of the embodiment of the application further comprises the following additional technical features:
in a possible implementation manner of the present application, the display module is further configured to: displaying a character frame in a second layer of the preset canvas playing area, and moving a focus of a display interface into the character frame; or displaying a character control in a non-playing area of the preset canvas.
In one possible implementation manner of the present application, the method further includes: and the first adjusting module is used for adjusting the display style of the target character according to the adjusting instruction when the character adjusting instruction is obtained after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area.
In a possible implementation manner of the present application, the playing module is further configured to: before the picture displayed in the preset canvas playing area is intercepted according to the first preset time interval, controlling the editing picture and the first video to be played at a preset speed, wherein the preset speed is smaller than the original playing speed of the first video; the screenshot module is specifically configured to: and intercepting the picture displayed in the preset canvas playing area according to a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
In a possible implementation manner of the present application, the screenshot module is specifically configured to: sending a screenshot instruction to a graphics processor at the first preset time interval so that the graphics processor can intercept a picture from the preset canvas playing area according to the first preset time interval; and acquiring a frame returned by the graphics processor.
In a possible implementation manner of the present application, the preset canvas play area includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than the display priority of the first area; the device further comprises: a second adjusting module, configured to, after a target character corresponding to the input instruction is displayed in a second layer of the preset canvas playing area, if a clipping instruction is obtained, adjust a distribution manner of the first area and the second area in the playing area according to the clipping area in the clipping instruction; the screenshot module is specifically configured to: and intercepting the picture displayed in the first area.
Yet another embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the video character adding method according to the above embodiment of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video character adding method according to the above embodiment of the present application.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of obtaining a video character adding request, wherein the adding request comprises a first video to be processed, playing the first video in a first layer of a preset canvas playing area, when a character input instruction is obtained, displaying a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer is located on the upper layer of the first layer, the second layer is a transparent layer, intercepting pictures displayed in the preset canvas playing area according to a first preset time interval, further, sequentially synthesizing the pictures according to the intercepting sequence of the pictures, and generating a second video after the character is added, wherein the playing time of each picture in the second video is first preset time. Therefore, the video editing mode for adding the characters in the video with low learning cost and high efficiency is realized.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a video character adding method according to one embodiment of the present application;
FIG. 2 is a diagram illustrating a scenario in which a video character addition request is sent according to an embodiment of the present application;
FIG. 3-1 is a schematic diagram of an application scenario of video character addition according to an embodiment of the present application;
FIG. 3-2 is a schematic diagram of an application scenario of video character addition according to another embodiment of the present application;
FIG. 4 is a schematic view of a first region and a second region distribution according to one embodiment of the present application;
FIG. 5 is a schematic view of a first region and a second region distribution according to another embodiment of the present application;
FIG. 6 is a schematic diagram of a video character adding apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a video character adding apparatus according to another embodiment of the present application;
FIG. 8 is a schematic diagram of a video character adding apparatus according to another embodiment of the present application; and
FIG. 9 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The video character adding method and apparatus of the embodiments of the present application are described below with reference to the accompanying drawings.
The video character adding method in the embodiments of the present application may be executed by a device with a graphics processor, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device; the wearable device may be a smart bracelet, a smart watch, or smart glasses.
Fig. 1 is a flowchart of a video character adding method according to an embodiment of the present application. As shown in fig. 1, the method includes:
step 101, a video character adding request is obtained, wherein the adding request comprises a first video to be processed.
The first video to be processed may be shot by the user in real time, selected by the user from videos that have already been shot, or downloaded from a video platform.
It should be noted that the manner of obtaining the video character adding request differs in different application scenarios, as the following examples illustrate:
the first example:
In this example, the user sends the video character adding request by voice.
Specifically, after the user selects the first video, the user's voice data is collected through a sound pickup device; when a keyword such as "video character addition" is recognized in the collected voice data, a video character adding request for the selected first video is acquired.
The second example is:
in this example, the user sends a video character addition request in the form of an action.
Specifically, after a user selects a first video, a gesture action or a facial expression action of the user is collected through a camera or a touch screen, the collected action is matched with a preset action, and if the matching is successful, a video character adding request of the user for the first video is obtained.
Of course, in this example, the manner in which the user selects the first video may also be determined according to the motion, and is not described here again.
In a third example, as shown in fig. 2, a selection control, a video character adding menu, and the like are provided on the video interface; the user may trigger the selection control to select the first video, and then trigger the video character adding menu to send the video character adding request.
Step 102, playing the first video in a first layer of a preset canvas playing area.
It should be appreciated that this embodiment provides a visual editing interface for video editing, which supplies a preset canvas on which the visual editing of the video takes place.
Specifically, in order to facilitate the user to edit the first video, the first video is played in the first layer of the preset canvas, so that the user can perform corresponding personalized character addition editing according to the content of the first video.
Step 103, when the character input instruction is obtained, displaying a target character corresponding to the input instruction in a second layer of a preset canvas playing area, wherein the second layer is located on the upper layer of the first layer, and the second layer is a transparent layer.
Specifically, while the first video is playing, a character input instruction is obtained, wherein the character input instruction includes the character itself and attributes such as its size, color, and dynamic effect. The target character corresponding to the input instruction is then displayed in the second layer of the preset canvas playing area, wherein the second layer lies above the first layer and is a transparent layer. The transparency of the second layer may be set according to the user's display requirements: if the user wants the picture content of the first layer to appear relatively blurred, the transparency may be set relatively low; otherwise, it may be set relatively high. In the visual effect of the preset canvas, as shown in fig. 3-1, the video picture in the first layer and the character in the second layer are thus displayed together.
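To make the layering concrete, the following is a minimal web-style sketch of steps 102 and 103, not the patent's own code: the first video plays in a lower layer, and the target character is drawn on a transparent layer stacked above it. The element id, video source, and fixed 640×360 size are illustrative assumptions.

```typescript
// Sketch of steps 102-103: first video in a lower layer, target characters on a
// transparent second layer above it. Ids, paths, and sizes are assumptions.
const area = document.getElementById("preset-canvas-area") as HTMLDivElement;
area.style.position = "relative";

const firstLayer = document.createElement("video");       // first layer: plays the first video
firstLayer.src = "first-video.mp4";                        // hypothetical source file
firstLayer.width = 640;
firstLayer.height = 360;
void firstLayer.play();

const secondLayer = document.createElement("canvas");      // second layer: transparent overlay
secondLayer.width = 640;
secondLayer.height = 360;
secondLayer.style.position = "absolute";                   // stacked on top of the first layer
secondLayer.style.left = "0";
secondLayer.style.top = "0";                               // a canvas is transparent by default

area.append(firstLayer, secondLayer);

// Display a target character in the second layer; the video stays visible beneath it.
function showTargetCharacter(text: string, x: number, y: number): void {
  const ctx = secondLayer.getContext("2d")!;
  ctx.clearRect(0, 0, secondLayer.width, secondLayer.height); // keep the layer transparent
  ctx.font = "32px sans-serif";
  ctx.fillStyle = "white";
  ctx.fillText(text, x, y);
}
```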
It should be noted that the manner in which the user triggers the character input instruction differs in different application scenarios, as the following examples illustrate:
the first example:
In this example, the user triggers the character input instruction by voice, and the corresponding character input instruction is determined from the user's voice data through keyword or semantic recognition.
The second example is:
In this example, a character frame is displayed in the second layer of the preset canvas playing area, and the focus of the display interface is moved into the character frame so that the user can enter a corresponding character input instruction there, for example by typing characters such as "beautiful". Alternatively, character controls may be displayed in a non-playing area of the preset canvas, each character control corresponding to the target character of a specific character input instruction; in this case, the style of the target character is already defined in the control, and triggering the displayed character control triggers the character input instruction and determines the corresponding target character.
Certainly, to further improve the flexibility of character addition, the user may customize the added character in the embodiments of the present application. After the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area, a character adjusting instruction may also be obtained; it may be triggered by voice or by the user operating a corresponding adjustment control. The character adjusting instruction corresponds to adjustment of the character's position, color, animation special effect, and the like. After the character adjusting instruction is obtained, the display style of the target character is adjusted according to the instruction, wherein the display style covers the position, color, animation special effect, and so on of the character.
For example, referring to fig. 3-2, after the target character in the second layer is clicked, a text edit box appears; the added text can then be enlarged, reduced, rotated, or dragged, or the character can be removed by clicking the "x" mark.
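As an illustration only (the patent does not specify a data model), a character adjusting instruction can be treated as a partial update to the current display style of the target character; the `CharacterStyle` shape below is a hypothetical construction:

```typescript
// Sketch of a character adjusting instruction: each instruction overrides part of the
// current display style (position, color, size, rotation) of the target character.
interface CharacterStyle {
  x: number;
  y: number;
  color: string;
  fontSizePx: number;
  rotationDeg: number;
}

function applyAdjustment(
  current: CharacterStyle,
  adjustment: Partial<CharacterStyle>,
): CharacterStyle {
  return { ...current, ...adjustment }; // unspecified fields keep their previous values
}

// Example: drag the character and turn it red, leaving size and rotation unchanged.
let style: CharacterStyle = { x: 100, y: 200, color: "white", fontSizePx: 32, rotationDeg: 0 };
style = applyAdjustment(style, { x: 140, y: 180, color: "red" });
```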
Step 104, intercepting the picture displayed in the preset canvas playing area according to a first preset time interval.
It should be emphasized that, in this embodiment, only the picture displayed within the preset canvas playing area is intercepted, so that even if the first video does not fill the preset canvas, no irrelevant content, such as other display elements in the canvas, is captured.
It can be understood that, in this embodiment, the picture displayed in the preset canvas playing area is periodically intercepted at the first preset time interval, so that the edited picture can be obtained in time. As a possible implementation, a screenshot instruction is sent to the graphics processor at the first preset time interval; the instruction may be triggered by voice or by a screenshot control. After receiving the screenshot instruction, the graphics processor intercepts the picture from the preset canvas at the first preset time interval and returns the intercepted picture. It should be noted that, because the edited picture is generated by the graphics processor in this embodiment, frame dropping in the playing of the first video data can be avoided.
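Continuing the earlier sketch (still an assumed web-style stand-in for the graphics-processor path, not the patent's implementation), the periodic interception can be modeled as a timer that re-composes only the playing area, video layer plus text layer, into an off-screen canvas:

```typescript
// Sketch of step 104: every t1 milliseconds, intercept only the preset canvas playing
// area (video layer plus transparent text layer), never other elements on the page.
// `firstLayer` and `secondLayer` come from the earlier layering sketch.
const captured: ImageBitmap[] = [];
const t1Ms = 1000 / 24;                    // e.g. 24 frames per second -> t1 = 1/24 s

const shot = document.createElement("canvas");
shot.width = 640;
shot.height = 360;
const shotCtx = shot.getContext("2d")!;

const timer = setInterval(async () => {
  shotCtx.drawImage(firstLayer, 0, 0, shot.width, shot.height); // first-layer content
  shotCtx.drawImage(secondLayer, 0, 0);                         // text overlay on top
  captured.push(await createImageBitmap(shot));                 // preserves capture order
}, t1Ms);

// Call clearInterval(timer) when the first video finishes playing.
```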
The first preset time interval may be calibrated from experimental data and fixed. Alternatively, to ensure the integrity of the intercepted pictures, the first preset time interval may be set according to the playing speed of the first video data; for example, when 24 frames are displayed per second, the corresponding first preset time interval is 1/24 s. When the playing speed of the first video data changes in real time, the first preset time interval may also change in real time.
However, when the screenshots are exported, video frames are usually stored more slowly than they are captured. If the capture speed is not reduced, cached screenshot frames accumulate, and the memory may eventually overflow.
Therefore, in an embodiment of the present application, the editing picture and the first video may also be controlled to play at a preset rate, wherein the preset rate is smaller than the original playing rate of the first video. That is, when the editing picture is exported, the playing rates of the video and of the corresponding video editing picture are slowed down synchronously, for example to one third of the original playing rate, which eases the screenshot operation on the editing picture. The picture displayed in the playing area of the preset canvas is then intercepted at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval; the second preset time interval may be determined according to the storage speed, and each picture is deleted from the cache once it has been stored, so that memory overflow is avoided.
As a possible implementation manner, the original playing rate of the first video is V1, the preset rate is V2, and the first preset time interval is t1; the second preset time interval may then be calculated according to the following formula (1), which ensures that the second preset time interval is longer than the first preset time interval:

$$t_2 = \frac{V_1}{V_2} \times t_1 \qquad (1)$$
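As a worked check of formula (1), with illustrative numbers:

```typescript
// Formula (1): t2 = (V1 / V2) * t1. With V2 < V1 the export is slowed down, so
// t2 > t1 and the frame-storage path gets more time per intercepted picture.
function secondInterval(t1: number, v1: number, v2: number): number {
  return (v1 / v2) * t1;
}

const t1 = 1 / 24;                            // first preset time interval, in seconds
const t2 = secondInterval(t1, 1.0, 1.0 / 3);  // preset rate is one third of the original
console.log(t2);                              // 0.125 s = 3 * t1, longer than t1 as required
```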
Of course, in an embodiment of the present application, the time at which the user edits the picture may also be determined, and the screenshot operation may be performed only within the playing time corresponding to that editing. This avoids intercepting video pictures to which the user has added no characters, further meeting the user's personalized requirements.
Step 105, sequentially synthesizing the pictures according to their interception order to generate a second video with characters added, wherein the playing time of each picture in the second video is the first preset time.
Specifically, the pictures are synthesized in sequence according to their interception order, for example by splicing, to generate the edited second video, wherein the playing time of each picture in the second video data is the first preset time interval. Consequently, even if the playing rate of the first video and the target character was slowed down while the pictures were exported, the second video still plays at the original rate, and the playing effect is not affected.
For example, assume that the first preset time interval is x and corresponds to the original playing speed. When the video is exported, the playing speed of the first original video data may be reduced to one third, with the editing picture likewise displayed at one third speed; the screenshots are then taken at intervals of 3x, but the second video is finally synthesized with a time interval of x per picture. Although playing and synthesis during export are slow, the synthesized video plays at normal speed, so the finally exported video plays at the normal rate.
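One possible realization of the synthesis step, sketched under the assumption of a browser with WebCodecs support (the patent does not prescribe any particular encoder): each intercepted picture becomes one encoded frame whose timestamp advances by t1, so the second video plays at the original rate no matter how slowly it was exported.

```typescript
// Sketch of step 105: turn the intercepted pictures, in interception order, into a
// second video in which every picture plays for t1 microseconds.
async function synthesize(frames: ImageBitmap[], t1Us: number): Promise<EncodedVideoChunk[]> {
  const chunks: EncodedVideoChunk[] = [];
  const encoder = new VideoEncoder({
    output: (chunk) => chunks.push(chunk),
    error: (e) => console.error(e),
  });
  encoder.configure({ codec: "vp8", width: 640, height: 360 });
  frames.forEach((bitmap, i) => {
    // The timestamp advances by t1 per picture, so playback runs at the original rate.
    const frame = new VideoFrame(bitmap, { timestamp: i * t1Us, duration: t1Us });
    encoder.encode(frame);
    frame.close();
  });
  await encoder.flush();
  encoder.close();
  return chunks; // the chunks would then be muxed into a container such as WebM
}
```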
Considering that in some application scenarios a user has a cropping requirement for the picture when editing the first video, for example wanting to keep only part of a character image in the picture, an embodiment of the present application divides the preset canvas into areas and crops the picture through the distribution of those areas, so as to meet the user's personalized requirements.
Specifically, in this embodiment the preset canvas includes a first area and a second area, whose numbers may be set arbitrarily; the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area. That is, once a picture in the canvas moves from the first area into the second area, the non-transparent picture of the second area is displayed instead of the corresponding picture. After the target character corresponding to the input instruction is displayed in the second layer of the playing area of the preset canvas, if a clipping instruction is obtained, the distribution of the first area and the second area in the preset canvas is adjusted according to the clipping area in the clipping instruction: the part of the first area outside the clipping area is covered by the second area. In the subsequent screenshot, only the picture displayed in the first area is intercepted, the picture in the second area is no longer displayed, and the cropping of the relevant picture is thus realized.
The clipping instruction may be triggered through a touch-screen action track or through a selection operation with a clipping-area selection tool. The clipping area may be of any shape, such as a circle or a square, which is not exemplified here one by one.
In addition, in different application scenarios, the distribution of the first area and the second area in the preset canvas is adjusted differently according to the clipping area in the clipping instruction. As a possible implementation, when the number and distribution of the second area and the first area are as shown in fig. 4, a user operation moving the picture within the first area can be received; when the user moves the display picture of the first area left or right, the display picture is clipped, and everything blocked by the second area is effectively cropped out.
As another possible implementation, when the number and distribution of the second areas and the first areas are as shown in fig. 5, the second areas can be dragged to cover the corresponding parts of the picture in the first area, and everything blocked by the second areas is effectively cropped out.
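To illustrate the first-area/second-area mechanism (a sketch with hypothetical rectangle coordinates and a canvas standing in for the playing area): the non-transparent second area masks everything outside the clipping area, and only the picture displayed in the first area is intercepted.

```typescript
// Sketch of the cropping refinement: the second (non-transparent) area covers whatever
// lies outside the clipping area, so the screenshot keeps only the first (transparent)
// area and everything under the second area is cropped out.
interface Rect { x: number; y: number; w: number; h: number }

function captureFirstArea(shot: HTMLCanvasElement, firstArea: Rect): ImageData {
  const ctx = shot.getContext("2d")!;
  // Paint the second area: the whole playing area minus the first-area rectangle.
  ctx.save();
  ctx.beginPath();
  ctx.rect(0, 0, shot.width, shot.height);
  ctx.rect(firstArea.x, firstArea.y, firstArea.w, firstArea.h);
  ctx.clip("evenodd");                     // clip to the region between the two rects
  ctx.fillStyle = "black";                 // non-transparent second area
  ctx.fillRect(0, 0, shot.width, shot.height);
  ctx.restore();
  // Intercept only the picture displayed in the first area.
  return ctx.getImageData(firstArea.x, firstArea.y, firstArea.w, firstArea.h);
}
```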
In this way, character addition editing of the video is realized through visual operations on the preset canvas. The synthesized second video can be used for recording life fragments, posting updates in social software, serving as dynamic wallpaper, and the like, and the video editing operation no longer depends on code learning and integration.
To sum up, the video character adding method according to the embodiments of the present application acquires a video character adding request, wherein the adding request includes a first video to be processed; plays the first video in a first layer of a preset canvas playing area; when a character input instruction is obtained, displays a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer lies above the first layer and is a transparent layer; intercepts the pictures displayed in the preset canvas playing area at a first preset time interval; and then synthesizes the pictures in interception order to generate a second video with characters added, wherein the playing time of each picture in the second video is the first preset time. A video editing mode for adding characters to a video with low learning cost and high efficiency is thereby realized.
In order to implement the above embodiments, the present application further provides a video character adding apparatus.
Fig. 6 is a schematic structural diagram of a video character adding apparatus according to an embodiment of the present application. As shown in fig. 6, the apparatus includes: an acquisition module 100, a playing module 200, a display module 300, a screenshot module 400, and a synthesizing module 500, wherein,
the obtaining module 100 is configured to obtain a video character adding request, where the adding request includes a first video to be processed.
The playing module 200 is configured to play a first video in a first layer of a preset canvas playing area.
The display module 300 is configured to display a target character corresponding to the input instruction in a second layer of a preset canvas playing area when the character input instruction is obtained, where the second layer is located on an upper layer of the first layer, and the second layer is a transparent layer.
The screenshot module 400 is configured to capture a picture displayed in a preset canvas play area according to a first preset time interval.
The synthesizing module 500 is configured to sequentially synthesize the pictures according to their interception order to generate a second video with characters added, wherein the playing time of each picture in the second video is the first preset time.
In one embodiment of the present application, the display module 300 is further configured to:
displaying a character frame in a second layer of a preset canvas playing area, and moving a focus of a display interface into the character frame;
or,
and displaying the character control in a non-playing area of a preset canvas.
In one embodiment of the present application, as shown in fig. 7, on the basis of fig. 6, the apparatus further comprises: the first adjusting module 600 is configured to, after a target character corresponding to the input instruction is displayed in a second layer of the preset canvas playing area, adjust a display style of the target character according to the adjusting instruction when the character adjusting instruction is obtained.
In one embodiment of the present application, the playing module 200 is further configured to: before the picture displayed in the preset canvas playing area is intercepted according to a first preset time interval, the editing picture and the first video are controlled to be played at a preset speed, wherein the preset speed is smaller than the original playing speed of the first video.
In this implementation, the screenshot module 400 is specifically configured to:
and intercepting the picture displayed in the preset canvas playing area according to a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
In an embodiment of the present application, the screenshot module 400 is specifically configured to: sending a screenshot instruction to a graphics processor at a first preset time interval so that the graphics processor can intercept a picture from a preset canvas playing area according to the first preset time interval;
and acquiring the frame returned by the graphics processor.
In one embodiment of the present application, as shown in fig. 8, on the basis of fig. 6, the apparatus further includes a second adjusting module 700. The preset canvas playing area includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area. The second adjusting module 700 is configured to, after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area, if a clipping instruction is obtained, adjust the distribution of the first area and the second area in the playing area according to the clipping area in the clipping instruction.
In this implementation, the screenshot module 400 is specifically configured to: and intercepting the picture displayed in the first area.
It should be noted that the foregoing explanation on the embodiment of the video character adding method is also applicable to the video character adding apparatus of the embodiment, and details are not repeated here.
To sum up, the video character adding apparatus according to the embodiments of the present application acquires a video character adding request, wherein the adding request includes a first video to be processed; plays the first video in a first layer of a preset canvas playing area; when a character input instruction is obtained, displays a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer lies above the first layer and is a transparent layer; intercepts the pictures displayed in the preset canvas playing area at a first preset time interval; and then synthesizes the pictures in interception order to generate a second video with characters added, wherein the playing time of each picture in the second video is the first preset time. A video editing mode for adding characters to a video with low learning cost and high efficiency is thereby realized.
In order to implement the foregoing embodiments, an electronic device is further provided in an embodiment of the present application, including a processor and a memory;
wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for implementing the video character adding method as described in the above embodiments.
FIG. 9 illustrates a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application. The electronic device 12 shown in fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 9, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive"). Although not shown in FIG. 9, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the video character adding method as described in the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (13)

1. A method for adding video characters, comprising:
acquiring a video character adding request, wherein the adding request comprises a first video to be processed;
playing the first video in a first layer of a preset canvas playing area;
when a character input instruction is obtained, displaying a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer is located on the upper layer of the first layer and is a transparent layer;
intercepting the picture displayed in the preset canvas playing area according to a first preset time interval;
sequentially synthesizing the pictures according to the intercepting sequence of the pictures to generate a second video added with characters, wherein the playing time of each picture in the second video is the first preset time;
before the capturing the picture displayed in the preset canvas playing area according to the first preset time interval, the method further includes:
controlling an editing picture and the first video to play at a preset speed, wherein the preset speed is less than the original playing speed of the first video;
the capturing the picture displayed in the preset canvas playing area comprises the following steps:
and intercepting the picture displayed in the preset canvas playing area according to a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
2. The method of claim 1, wherein after the playing the first video in the first layer of the preset canvas playing area, further comprising:
displaying a character frame in a second layer of the preset canvas playing area, and moving a focus of a display interface into the character frame;
or displaying a character control in a non-playing area of the preset canvas.
3. The method of claim 1, wherein after displaying the target character corresponding to the input instruction in the second layer of the preset canvas playing area, the method further comprises:
and when a character adjusting instruction is obtained, adjusting the display style of the target character according to the adjusting instruction.
4. The method of claim 1, wherein the original playing rate of the first video is V1, the preset rate is V2, the first preset time interval is t1, and the second preset time interval t2 satisfies:

$$t_2 = \frac{V_1}{V_2} \times t_1$$
5. the method according to any one of claims 1 to 4, wherein the intercepting the picture displayed in the preset canvas play area at a first preset time interval comprises:
sending a screenshot instruction to a graphics processor at the first preset time interval so that the graphics processor can intercept a picture from the preset canvas playing area according to the first preset time interval;
and acquiring a frame returned by the graphics processor.
6. The method according to any one of claims 1 to 3, wherein the preset canvas playing area comprises a first area and a second area, the first area being a transparent area, the second area being a non-transparent area, and the display priority of the second area being higher than that of the first area;
wherein, after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area, the method comprises:
if a cropping instruction is acquired, adjusting the distribution of the first area and the second area in the playing area according to the cropping area specified in the cropping instruction;
and wherein capturing the picture displayed in the preset canvas playing area comprises:
capturing the picture displayed in the first area.
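(For illustration: under claim 6 the cropping instruction shrinks the transparent first area, the opaque second area masks the rest, and only the first area is captured. A minimal sketch, with the rectangle coordinates and file names assumed:)

from PIL import Image

def apply_cropping_instruction(cropping_area):
    # the first (transparent) area becomes the rectangle named in the cropping
    # instruction; everything outside it is the higher-priority opaque second area
    return cropping_area

full_capture = Image.open("full_canvas_capture.png")  # assumed full-area screenshot
first_area = apply_cropping_instruction((0, 160, 720, 1120))  # left, top, right, bottom
picture = full_capture.crop(first_area)  # capture only the picture in the first area
picture.save("cropped_picture.png")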
7. A video character adding apparatus, comprising:
an acquisition module configured to acquire a video character adding request, wherein the adding request comprises a first video to be processed;
a playing module configured to play the first video in a first layer of a preset canvas playing area;
a display module configured to, when a character input instruction is acquired, display a target character corresponding to the input instruction in a second layer of the preset canvas playing area, wherein the second layer is located above the first layer and is a transparent layer;
a screenshot module configured to capture the picture displayed in the preset canvas playing area at a first preset time interval; and
a synthesis module configured to synthesize the captured pictures sequentially, in the order in which they were captured, to generate a second video with the characters added, wherein the playing duration of each picture in the second video is the first preset time interval;
wherein the playing module is further configured to: before the picture displayed in the preset canvas playing area is captured at the first preset time interval, control an editing picture and the first video to play at a preset rate, the preset rate being lower than the original playing rate of the first video;
and the screenshot module is specifically configured to:
capture the picture displayed in the preset canvas playing area at a second preset time interval, wherein the second preset time interval is longer than the first preset time interval.
8. The apparatus of claim 7, wherein the display module is further configured to:
display a character frame in the second layer of the preset canvas playing area and move the focus of the display interface into the character frame;
or
display a character control in a non-playing area of the preset canvas.
9. The apparatus of claim 7, further comprising:
a first adjusting module configured to, after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area, adjust the display style of the target character according to a character adjustment instruction when the adjustment instruction is acquired.
10. The apparatus according to any one of claims 7 to 9, wherein the screenshot module is specifically configured to:
send a screenshot instruction to a graphics processor at the first preset time interval, so that the graphics processor captures a picture from the preset canvas playing area at the first preset time interval; and
acquire the picture returned by the graphics processor.
11. The apparatus according to any one of claims 7 to 9, wherein the preset canvas playing area comprises a first area and a second area, the first area being a transparent area, the second area being a non-transparent area, and the display priority of the second area being higher than that of the first area; the apparatus further comprising:
a second adjusting module configured to, after the target character corresponding to the input instruction is displayed in the second layer of the preset canvas playing area and if a cropping instruction is acquired, adjust the distribution of the first area and the second area in the playing area according to the cropping area specified in the cropping instruction;
wherein the screenshot module is specifically configured to:
capture the picture displayed in the first area.
12. An electronic device comprising a processor and a memory;
wherein the processor, by reading executable program code stored in the memory, runs a program corresponding to the executable program code, so as to implement the video character adding method according to any one of claims 1 to 6.
13. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the video character adding method according to any one of claims 1 to 6.
CN201910941059.3A 2019-09-30 2019-09-30 Video character adding method and device, electronic equipment and storage medium Active CN110636365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910941059.3A CN110636365B (en) 2019-09-30 2019-09-30 Video character adding method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910941059.3A CN110636365B (en) 2019-09-30 2019-09-30 Video character adding method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110636365A (en) 2019-12-31
CN110636365B (en) 2022-01-25

Family

ID=68974800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910941059.3A Active CN110636365B (en) 2019-09-30 2019-09-30 Video character adding method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110636365B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634404A (en) * 2020-06-28 2021-04-09 西安诺瓦星云科技股份有限公司 Layer fusion method, device and system
CN114598893B (en) * 2020-11-19 2024-04-30 京东方科技集团股份有限公司 Text video realization method and system, electronic equipment and storage medium
CN113347478B (en) * 2021-05-28 2022-11-04 维沃移动通信(杭州)有限公司 Display method and display device
CN113873294A (en) * 2021-10-19 2021-12-31 深圳追一科技有限公司 Video processing method and device, computer storage medium and electronic equipment
CN114690982B (en) * 2022-03-31 2023-03-31 呼和浩特民族学院 Intelligent teaching method for physics teaching
CN115348469B (en) * 2022-07-05 2024-03-15 西安诺瓦星云科技股份有限公司 Picture display method, device, video processing equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102510539A (en) * 2011-12-02 2012-06-20 深圳市万兴软件有限公司 Method and system for displaying content on playing video
CN106851385A (en) * 2017-02-20 2017-06-13 北京金山安全软件有限公司 Video recording method and device and electronic equipment
CN108109209A (en) * 2017-12-11 2018-06-01 广州市动景计算机科技有限公司 A kind of method for processing video frequency and its device based on augmented reality
CN108924647A (en) * 2018-07-27 2018-11-30 深圳众思科技有限公司 Video editing method, video editing apparatus, terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004038746A (en) * 2002-07-05 2004-02-05 Toshiba Corp Image editing method and image editing system
KR20130097266A (en) * 2012-02-24 2013-09-03 삼성전자주식회사 Method and apparatus for editing contents view in mobile terminal
CN106412708B (en) * 2016-10-21 2019-07-09 上海与德信息技术有限公司 A kind of video interception method and device
US11317028B2 (en) * 2017-01-06 2022-04-26 Appsure Inc. Capture and display device
CN108882007A (en) * 2018-07-11 2018-11-23 苏州明上系统科技有限公司 Audio/video information loads management system

Also Published As

Publication number Publication date
CN110636365A (en) 2019-12-31

Similar Documents

Publication Publication Date Title
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
US7084875B2 (en) Processing scene objects
CN110572717A (en) Video editing method and device
CN112822542A (en) Video synthesis method and device, computer equipment and storage medium
US9852764B2 (en) System and method for providing and interacting with coordinated presentations
KR20210082232A (en) Real-time video special effects systems and methods
WO2023151611A1 (en) Video recording method and apparatus, and electronic device
US10957285B2 (en) Method and system for playing multimedia data
CN112053370A (en) Augmented reality-based display method, device and storage medium
KR20210110852A (en) Image deformation control method, device and hardware device
US20230326110A1 (en) Method, apparatus, device and media for publishing video
CN112884908A (en) Augmented reality-based display method, device, storage medium, and program product
CN112954199A (en) Video recording method and device
US9412042B2 (en) Interaction with and display of photographic images in an image stack
CN113660528A (en) Video synthesis method and device, electronic equipment and storage medium
CN114422692A (en) Video recording method and device and electronic equipment
CN113301356A (en) Method and device for controlling video display
CN110703973B (en) Image cropping method and device
JP7427786B2 (en) Display methods, devices, storage media and program products based on augmented reality
CN114452645B (en) Method, apparatus and storage medium for generating scene image
CN114025237A (en) Video generation method and device and electronic equipment
CN114125297A (en) Video shooting method and device, electronic equipment and storage medium
CN111800663B (en) Video synthesis method and device
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN113350780A (en) Cloud game control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant