CN117376636A - Video processing method, apparatus, device, storage medium, and program

Video processing method, apparatus, device, storage medium, and program

Info

Publication number
CN117376636A
CN117376636A (application CN202210773086.6A)
Authority
CN
China
Prior art keywords
video
target video
target
output information
template
Prior art date
Legal status
Pending
Application number
CN202210773086.6A
Other languages
Chinese (zh)
Inventor
张海东
石卓婵
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210773086.6A
Publication of CN117376636A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip


Abstract

Embodiments of the present disclosure provide a video processing method, apparatus, device, storage medium, and program. The method includes: acquiring at least one original material; acquiring an identifier of a template video, the template video including at least one special effect; calling a preset service interface to process the at least one original material and the identifier of the template video, and obtaining output information of the preset service interface, where the output information is used to obtain a first target video including the at least one original material and the at least one special effect; and further, displaying the first target video in a video processing page according to the current video processing type and the output information. Through this process, each of the various video processing types only needs to call the preset service interface and develop its specific business logic on the basis of the interface's output information, thereby reducing development cost and improving development efficiency.

Description

Video processing method, apparatus, device, storage medium, and program
Technical Field
Embodiments of the present disclosure relate to the field of multimedia technologies, and in particular, to a video processing method, apparatus, device, storage medium, and program.
Background
Currently, a user may perform video processing through a terminal device, for example, adding special effects to an original video to improve its visual effect.
To reduce the difficulty of video processing for users and improve video processing efficiency, an approach of processing video based on a template video has been provided. Specifically, the terminal device may offer the user several template videos, each including one or more special effects. The user specifies an original video and selects a preferred template video, and the terminal device applies the special effects of the selected template video to the specified original video, thereby generating a target video with the same or similar special effects as the template video.
However, in the template-video-based approach described above, there are many possible video processing types in practical applications. For example, some video processing types indicate one-tap generation of a finished video, some indicate that secondary editing is required in addition to generating a target video, some indicate previewing the generated target video, and so on. In practice, business code has to be developed separately for each of these video processing types, so development cost is high and efficiency is low.
Disclosure of Invention
Embodiments of the present disclosure provide a video processing method, apparatus, device, storage medium, and program, which can reduce development cost and improve development efficiency.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring at least one original material;
acquiring an identification of a template video, wherein the template video comprises at least one special effect;
calling a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and displaying the first target video in a video processing page according to the current video processing type and the output information.
In a second aspect, embodiments of the present disclosure provide a video processing apparatus, including:
the first acquisition module is used for acquiring at least one original material;
the second acquisition module is used for acquiring the identification of the template video, wherein the template video comprises at least one special effect;
the first processing module is used for calling a preset service interface to process the at least one original material and the identification of the template video and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and the second processing module is used for displaying the first target video in a video processing page according to the current video processing type and the output information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions to implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method as described in the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
Embodiments of the present disclosure provide a video processing method, apparatus, device, storage medium, and program. The method includes: acquiring at least one original material; acquiring an identifier of a template video, the template video including at least one special effect; calling a preset service interface to process the at least one original material and the identifier of the template video, and obtaining output information of the preset service interface, where the output information is used to obtain a first target video including the at least one original material and the at least one special effect; and further, displaying the first target video in a video processing page according to the current video processing type and the output information. In this process, because the preset service interface is provided, the common business logic no longer needs to be developed separately for each video processing type; each type only needs to call the preset service interface and develop its specific business logic on the basis of the interface's output information, thereby reducing development cost and improving development efficiency.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show some embodiments of the present disclosure, and a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a set of user interfaces provided by embodiments of the present disclosure;
FIG. 4 is a schematic diagram of another set of user interfaces provided by embodiments of the present disclosure;
FIG. 5 is a schematic diagram of yet another set of user interfaces provided by embodiments of the present disclosure;
fig. 6 is a flowchart of another video processing method according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of yet another set of display interfaces provided by embodiments of the present disclosure;
FIG. 8 is a schematic diagram of a video processing process according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort based on the embodiments of this disclosure fall within the scope of this disclosure.
Embodiments of the present disclosure are applicable to scenarios in which video is processed based on a video template. Specifically, the terminal device may provide the user with several template videos, each including one or more special effects. The user may specify an original video and select a preferred template video. The terminal device may apply the special effects of the selected template video to the specified original video, thereby generating a target video with the same or similar special effects as the template video. In this way, processing video based on a video template reduces the difficulty of video processing for the user.
For ease of understanding, an application scenario related to the embodiments of the present disclosure is first described with reference to fig. 1.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure. Assume that the original video material specified by the user is original video A, the template video selected by the user is template video B, and a target video is generated based on original video A and template video B.
Referring to fig. 1, assume that original video A includes 3 video frames whose corresponding pictures are picture A1, picture A2, and picture A3, and that template video B includes 3 video frames whose corresponding pictures are picture B1, picture B2, and picture B3. A special effect exists in the 2nd video frame of template video B; this special effect superimposes a Spring Festival couplet on picture B2. For example, the couplet reads: "Amid the crackle of firecrackers the old year departs; the spring breeze brings warmth to the Tusu wine."
It should be appreciated that one or more special effects may be present in template video B; fig. 1 is merely an example, and the present disclosure does not limit the number of special effects or the specific type of each special effect.
The terminal device may apply the special effect in template video B to original video A to obtain the target video. With continued reference to fig. 1, in the obtained target video, the special effect from template video B appears in the 2nd video frame; that is, the couplet from template video B is superimposed on picture A2 corresponding to the 2nd video frame.
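For illustration only, the frame-level behavior in fig. 1 can be sketched as follows. This is a minimal sketch, not the implementation of the disclosure; the data structures and the function name are assumptions introduced here.

```python
# Minimal sketch of applying a template's effects to original frames (fig. 1).
# All names here are illustrative, not taken from the disclosure.

def apply_template_effects(original_frames, template_effects):
    """Overlay each template effect on the original frame at the same index.

    original_frames: list of frame labels, e.g. ["A1", "A2", "A3"]
    template_effects: dict mapping frame index -> effect description,
                      e.g. {1: "couplet"} (an effect on the 2nd frame)
    """
    target = []
    for i, frame in enumerate(original_frames):
        if i in template_effects:
            # Superimpose the effect on the original picture, as the
            # couplet is superimposed on picture A2 in fig. 1.
            target.append(f"{frame}+{template_effects[i]}")
        else:
            target.append(frame)
    return target

# Original video A has pictures A1..A3; template B has a couplet
# effect on its 2nd frame (index 1).
print(apply_template_effects(["A1", "A2", "A3"], {1: "couplet"}))
# → ['A1', 'A2+couplet', 'A3']
```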
Based on the above application scenario, there are multiple possible video processing types in practical applications. For example, some video processing types indicate one-tap generation of a finished video, some indicate that secondary editing is required in addition to generating the target video, some indicate previewing the generated target video, and so on. In practice, business code has to be developed separately for each of these video processing types, so development cost is high and efficiency is low.
To solve the above technical problem, the applicant of the present disclosure analyzed the multiple possible video processing types to obtain the business logic to be executed for each type, and then determined the business logic common to all of them. Specifically, the common business logic involved in these video processing types is: based on at least one original material specified by the user and the identifier of the template video specified by the user, synthesizing the at least one original material with the special effects contained in the template video to obtain a first target video.
On top of this common business logic, the one-tap generation type needs to play, share, etc. the first target video. The secondary editing type needs to edit the first target video according to the user's editing operations to generate a second target video. The preview type needs to play the first target video in a preview page for the user to preview. By analyzing the business logic specific to each video processing type, the information that each type needs to rely on or use when executing its specific business logic can be determined.
Further, on the basis of this analysis, a preset service interface may be provided. The preset service interface takes "the at least one original material and the identifier of the template video" as input information, and takes "the information each video processing type relies on or uses when executing its specific business logic" as output information. In this way, the implementation of the common business logic involved in the multiple possible video processing types can be encapsulated inside the preset service interface. For each video processing type, the common business logic is realized simply by calling the preset service interface; the business logic specific to that type is then executed on top of the interface's output information to obtain the video processing result corresponding to that type.
Therefore, by providing the preset service interface, the common business logic no longer needs to be developed separately for each video processing type; each type only calls the preset service interface and develops its specific business logic on the basis of the interface's output information, which reduces development cost and improves development efficiency.
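The division of labor described above, common logic behind one interface and type-specific logic on top of its output, can be sketched as follows. This is a minimal illustration under assumed names; the disclosure does not prescribe these functions, fields, or return values.

```python
# Sketch of the preset-service-interface idea: one shared entry point
# encapsulates the common synthesis logic; each video processing type
# consumes only the part of the output it needs. All names are hypothetical.

def preset_service_interface(materials, template_id):
    """Common business logic: synthesize materials with template effects."""
    # Placeholder for the real synthesis; returns the three output items
    # named in the disclosure (description file, resource data, URL).
    return {
        "description_file": {"channels": [{"elements": materials}],
                             "template": template_id},
        "resource_data": b"...",          # stream to be encoded into mp4/avi/mov
        "url": f"/videos/{template_id}",  # storage path of the first target video
    }

def one_tap_generation(materials, template_id):
    # Type-specific logic: play/share the first target video.
    out = preset_service_interface(materials, template_id)
    return {"play": out["url"], "share": out["url"]}

def secondary_editing(materials, template_id):
    # Type-specific logic: edit on the basis of the description file.
    out = preset_service_interface(materials, template_id)
    return {"edit_source": out["description_file"]}

info = one_tap_generation(["clip1.mp4"], "templateB")
```

Both processing types reuse `preset_service_interface` unchanged; only the few lines after the call differ, which is the development-cost saving the disclosure claims.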
The technical solutions provided by the present disclosure are described in detail below with reference to several specific embodiments. The following embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the disclosure. The method of the present embodiment may be performed by a terminal device or by a video processing apparatus disposed in the terminal device. As shown in fig. 2, the method of the present embodiment includes:
s201: at least one original material is obtained.
In this embodiment, the at least one original material refers to material specified by the user for generating the target video. Original materials include, but are not limited to: video material, image material, audio material, and the like. There may be one or more original materials. The original materials may be stored locally on the terminal device, received from another terminal device, or captured by the terminal device on the spot.
S202: and obtaining the identification of the template video, wherein the template video comprises at least one special effect.
In this embodiment, the template video refers to a pre-generated video containing special effects, which the user can watch in the video. A special effect is an effect presented in a video to enhance its visual experience. This embodiment does not limit the type of special effect.
For example, the terminal device may provide a plurality of template videos; the user may select one of them according to his or her preference, and the terminal device obtains the identifier of the selected template video.
It should be noted that the execution order of S201 and S202 is not limited, and the execution order of the two may be interchanged, or the two may be executed synchronously.
S203: calling a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect.
In this embodiment, the input information of the preset service interface includes: the at least one original material and the identifier of the template video. The preset service interface implements the business logic common to the various video processing types, namely: based on the at least one original material specified by the user and the identifier of the template video specified by the user, synthesizing the at least one original material with the special effects contained in the template video to obtain a first target video. The first target video includes the at least one original material and the at least one special effect. The output information of the preset service interface is information related to the first target video and can be used to obtain the first target video; for example, the output information may be processed to obtain the first target video.
In a possible implementation manner, the output information includes one or more of the following:
(1) Video resource data corresponding to the first target video, where the video resource data is used to encode and generate the first target video.
The video resource data corresponding to the first target video may also be called the data stream corresponding to the first target video, and can be regarded as the resources constituting the first target video arranged according to a preset data format. Video resource data cannot be played directly; it must be played through a dedicated player.
The first target video can be generated by encoding the video resource data, and its format may be any of the following: avi, mp4, mov, etc. The first target video can usually be played directly in a page or by a commonly used player.
(2) A first video description file corresponding to the first target video, the first video description file including a plurality of information items arranged according to a preset data structure, where the information items are used to describe the first target video.
Illustratively, the first video description file may be a JavaScript Object Notation (JSON) file. The first video description file is used to describe the number of channels included in the first target video, the elements included in each channel, attribute information of the elements, and the like.
(3) Uniform resource locator (Uniform Resource Locator, URL) information corresponding to the first target video.
It should be understood that URL information corresponding to the first target video is used to indicate a storage path of the first target video. The storage path may be a local storage path of the terminal device or a cloud storage path.
In this embodiment, when applied to different service scenarios, the output information obtained by calling the preset service interface may be the same or different, which is not limited in this embodiment.
In one example, only one service interface may be provided. Its input information includes: the at least one original material and the identifier of the template video. Its output information includes: the first video description file, the video resource data, and the URL information corresponding to the first target video. That is, the output information contains all the information that the business logic specific to each possible video processing type needs to rely on or use. Thus, multiple video processing types call the same preset service interface, obtain the same output information, and each selects the information it needs from the output information.
In another example, different service interfaces may be provided for different video processing types. The input information of these service interfaces is the same, including: the at least one original material and the identifier of the template video. Their output information differs. For example, the output information of service interface 1 includes: the first video description file corresponding to the first target video. The output information of service interface 2 includes: the first video description file and the video resource data corresponding to the first target video. The output information of service interface 3 includes: the first video description file, the video resource data, and the URL information corresponding to the first target video.
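A first video description file of the kind described in item (2) might look like the structure below. The disclosure states only that it is a JSON file describing the channels, the elements of each channel, and the elements' attributes; every field name here is an illustrative assumption.

```python
import json

# Hypothetical shape of a first video description file (item (2) above).
# Field names ("channels", "elements", "attributes") are assumptions.
description_file = {
    "channels": [
        {
            "type": "video",
            "elements": [
                {"material": "A.mp4",
                 "attributes": {"start": 0.0, "duration": 3.0}},
            ],
        },
        {
            "type": "effect",
            "elements": [
                {"effect": "couplet",
                 "attributes": {"frame": 2, "overlay": True}},
            ],
        },
    ],
}

# Serialize and re-parse, as a description file would be written and read.
text = json.dumps(description_file)
assert len(json.loads(text)["channels"]) == 2
```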
S204: and displaying the first target video in the video processing page according to the current video processing type and the output information.
In this embodiment, the business logic common to the various video processing types has already been completed by calling the preset service interface, and the output information has been obtained. Therefore, in S204, the first target video can be obtained simply by executing the business logic specific to the current video processing type on the basis of the output information. Further, the first target video may be displayed in a video processing page.
It should be understood that the business logic corresponding to the current video processing type may process all or part of the output information, which is not limited in this embodiment. For specific implementations of the various video processing types, reference may be made to the examples in the following embodiments, which are not detailed here.
The video processing method provided in this embodiment includes: acquiring at least one original material; acquiring an identifier of a template video, the template video including at least one special effect; calling a preset service interface to process the at least one original material and the identifier of the template video, and obtaining output information of the preset service interface, where the output information is used to obtain a first target video including the at least one original material and the at least one special effect; and further, displaying the first target video in a video processing page according to the current video processing type and the output information. In this process, because the preset service interface is provided, the common business logic no longer needs to be developed separately for each video processing type; each type only needs to call the preset service interface and develop its specific business logic on the basis of the interface's output information, thereby reducing development cost and improving development efficiency.
Based on the embodiment shown in fig. 2, in some possible implementations, S204 may be implemented as follows: the video processing page is generated according to the current video processing type and includes a video player; the first target video is obtained according to the output information; the video processing page is then displayed, and the first target video is played through the video player.
The video processing pages corresponding to different video processing types are generally different. For example, if the current video processing type indicates that the first target video is to be clipped, the video processing page may be a video editing page, which may include video editing controls in addition to the video player. If the current video processing type indicates that the first target video is to be previewed, the video processing page may be a video preview page. If the current video processing type indicates one-key generation of the first target video, the video processing page may be a video presentation page, which may further include one or more of a publishing control and a sharing control in addition to the video player.
Several specific video processing types are respectively illustrated below in conjunction with fig. 3-5.
Example one
It is assumed that the current video processing type indicates clipping of the first target video. In this case, the output information of the preset service interface may include: a first video description file corresponding to the first target video.
S204 may include: generating a video editing page according to the current video processing type, where the video editing page includes a video player and a video editing control; processing the first video description file to obtain the first target video (for example, video resource data may be generated according to the first video description file, and the first target video may then be obtained by encoding the video resource data); and displaying the video editing page and playing the first target video in it through the video player.
Optionally, after displaying the video editing page, the method may further include: in response to detecting an editing operation input by the user through the video editing control, adjusting the first video description file according to the editing operation to obtain a second video description file; and in response to detecting a confirmation operation input by the user through a confirmation control, generating a second target video according to the second video description file.
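A minimal sketch of that adjustment step, under the assumption that the description file is a simple dictionary of tracks (the disclosure does not fix a concrete file format):

```python
# Sketch: derive a second video description file from the first by
# applying an editing operation, without mutating the first file, so the
# pre-edit state can still be rendered while the user keeps editing.
import copy

def adjust_description(first_file: dict, editing_op: dict) -> dict:
    second_file = copy.deepcopy(first_file)
    track = editing_op["track"]          # e.g. "music", "subtitle"
    second_file[track] = editing_op["value"]
    return second_file
```

On confirmation, the second description file (rather than the first) would be handed to the downstream generation step.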
The following is a description with reference to fig. 3.
FIG. 3 is a schematic diagram of a set of user interfaces provided by embodiments of the present disclosure. As shown in fig. 3, the terminal device presents a plurality of template videos in a page 301, each of which includes one or more special effects. The user may click on a template video to view special effects in the template video. Illustratively, assume that the user clicks on template video 1 and the terminal device presents page 302. The page 302 includes a video playing area for playing the template video 1. The user can determine whether to like the special effects in the template video 1 by viewing the template video 1. If the user does not like the special effects in the template video 1, the user can return to the page 301 to select other template videos for viewing. If the user likes a special effect in template video 1, then the "use immediately" control can be clicked.
In response to detecting the user clicking the "use immediately" control, the terminal device displays page 303. In page 303 the user may select at least one original material from an album or a material gallery. Page 303 includes a "go clip" control. In response to detecting that the user clicks the "go clip" control, the terminal device invokes the preset service interface to process the at least one original material selected by the user and the identifier of the template video specified by the user, and obtains the output information of the preset service interface, which includes the first video description file corresponding to the first target video. The terminal device then generates a video editing page 304 according to the current video processing type, where the video editing page includes a video player 305, and processes the first video description file to obtain the first target video. The terminal device displays the video editing page 304 and plays the first target video through the video player 305 so that the user can view its display effect.
Further, the video editing page 304 also includes some video editing controls (e.g., clip, beautification, music, subtitle, and special-effect controls). If the user is not satisfied with the display effect of the first target video, the user can edit it by means of clipping, beautification, music, subtitles, or special effects. The terminal device adjusts the first video description file according to the editing operation input by the user to obtain a second video description file. After obtaining the second video description file, the terminal device may likewise play the edited first target video through the video player 305 according to the second video description file, so that the user can view the edited display effect. The video editing page 304 further includes a "confirm" control; in response to detecting that the user clicks the "confirm" control, the terminal device may generate a second target video according to the second video description file.
It will be appreciated that in this example, which controls are contained in the video editing page 304 is determined by the current video processing type, while the video frames displayed in the video player are determined by the first video description file.
Optionally, after the second target video is generated, the terminal device may perform processing such as playing, publishing, sharing, etc. on the second target video.
In the example shown in fig. 3, the output information of the preset service interface includes a first video description file corresponding to the first target video, so that the service requirement of performing the second editing on the first target video can be met.
Example two
Assume that the current video processing type indicates previewing the first target video. In this case, the output information of the preset service interface may include: video resource data corresponding to the first target video.
S204 may include: generating a video preview page according to the current video processing type, where the video preview page includes a video player; obtaining the first target video by encoding the video resource data; and displaying the video preview page and playing the first target video in it through the video player.
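The preview branch can be sketched as below; the "encoding" step is a placeholder assumption standing in for a real media encoder, and all names are invented for illustration:

```python
# Sketch of the preview flow: the output information carries raw video
# resource data, and the page obtains the first target video by encoding
# that data before handing it to the player.
def encode(resource_data: bytes) -> dict:
    # Placeholder for a real encoder; here we only model the container.
    return {"container": "mp4", "payload": resource_data}

def build_preview_page(resource_data: bytes) -> dict:
    target_video = encode(resource_data)   # obtain the first target video
    return {"type": "preview", "player_src": target_video}
```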
An example is illustrated below in connection with fig. 4.
FIG. 4 is a schematic diagram of another set of user interfaces provided by embodiments of the present disclosure. As shown in fig. 4, the terminal device presents a plurality of template videos in a page 401, each of which includes one or more special effects. The user may click on a template video to view special effects in the template video. Illustratively, assume that the user clicks on template video 1 and the terminal device presents page 402. The page 402 includes a video playing area for playing the template video 1. The user can determine whether to like the special effects in the template video 1 by viewing the template video 1. If the user does not like the special effects in the template video 1, the user can return to the page 401 to select other template videos for viewing. If the user likes a special effect in template video 1, then the "use immediately" control can be clicked.
In response to detecting the user clicking the "use immediately" control, the terminal device displays page 403. In page 403 the user may select at least one original material from an album or a material gallery. Page 403 includes a "preview" control. In response to detecting that the user clicks the "preview" control, the terminal device invokes the preset service interface to process the at least one original material selected by the user and the identifier of the template video specified by the user, and obtains the output information of the preset service interface, which includes the video resource data corresponding to the first target video. The terminal device then generates a video preview page 404 according to the current video processing type, where the video preview page 404 includes a video player 405, and obtains the first target video by encoding the video resource data. The terminal device displays the video preview page 404 and plays the first target video through the video player 405 so that the user can view its display effect.
Optionally, the video preview page 404 may also include a "confirm composition" control. In response to detecting a composition confirmation operation input by the user, the terminal device stores the first target video in a preset storage space.
It should be appreciated that in this example, which controls are contained in the video preview page 404 is determined by the current video processing type, while the video frames displayed in the video player 405 are determined by the video asset data.
Optionally, after the first target video is generated, the terminal device may perform processing such as playing, publishing, sharing, etc. on the first target video.
In the example shown in fig. 4, the output information of the preset service interface includes video resource data corresponding to the first target video, so that the service requirement of previewing the first target video can be met.
Optionally, in this example, the output information of the preset service interface may further include: and the first video description file corresponds to the first target video. In the event of a failure, the first video description file may be used to troubleshoot the failure.
Example three
Assume that the current video processing type indicates one-key generation of the first target video. In this case, the output information of the preset service interface may include: URL information corresponding to the first target video.
S204 may include: generating a video presentation page according to the current video processing type, where the video presentation page includes a video player; acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video; and displaying the video presentation page and playing the first target video in it through the video player.
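A sketch of that retrieval step, assuming (purely for illustration) that the URL's path component keys into the preset storage space:

```python
# Sketch: in the one-key flow the first target video is already encoded
# and stored, so the page only resolves its URL against the preset
# storage space instead of encoding anything itself.
from urllib.parse import urlparse

def fetch_target_video(storage: dict, url: str) -> bytes:
    key = urlparse(url).path.lstrip("/")   # URL path keys into storage
    return storage[key]
```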
An example is illustrated below in connection with fig. 5.
Fig. 5 is a schematic diagram of yet another set of user interfaces provided by embodiments of the present disclosure. As shown in fig. 5, the terminal device presents a plurality of template videos in a page 501, each of which includes one or more special effects. The user may click on a template video to view special effects in the template video. Illustratively, assume that the user clicks on template video 1 and the terminal device presents page 502. The page 502 includes a video playing area for playing the template video 1. The user can determine whether to like the special effects in the template video 1 by viewing the template video 1. If the user does not like the special effects in the template video 1, the user can return to the page 501 to select other template videos for viewing. If the user likes a special effect in template video 1, then the "use immediately" control can be clicked.
In response to detecting the user clicking on the "immediate use" control, the terminal device displays page 503. In page 503 the user may select at least one original material in an album or gallery of material. The page 503 includes a "one-touch-and-tablet" control. And in response to detecting that the user clicks the one-key-slice control, the terminal equipment invokes a preset service interface to process at least one original material selected by the user and the identification of the template video designated by the user, and obtains output information of the preset service interface, wherein the output information comprises URL information corresponding to the first target video. Further, the terminal device generates a video presentation page 504 according to the current video processing type, where the video presentation page 504 includes a video player 505. And acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video. The terminal device displays the video presentation page 504 and plays the first target video through the video player 505 so that the user views the display effect of the first target video.
Optionally, with continued reference to fig. 5, the video presentation page 504 further includes: at least one of a sharing control and a publishing control. The method in this embodiment further comprises at least one of the following:
in response to the publishing control being triggered, publishing the first target video to a preset video platform according to the URL information corresponding to the first target video;
in response to the sharing control being triggered, sending the first target video to the terminal devices of other users according to the URL information corresponding to the first target video.
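Both branches can be sketched in a few lines, since each needs only the URL from the output information; the list-based "platform" and "inboxes" below are stand-ins (assumptions) for real delivery channels:

```python
# Sketch: publishing and sharing both operate on the URL alone; the
# encoded video bytes never need to pass through the page logic.
def on_control_triggered(control: str, url: str,
                         platform: list, peer_inboxes: list) -> None:
    if control == "publish":
        platform.append(url)            # publish to the preset platform
    elif control == "share":
        for inbox in peer_inboxes:      # send to other users' devices
            inbox.append(url)
```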
It should be appreciated that in this example, which controls are included in the video presentation page 504 is determined by the current video processing type, and the video frame displayed in the video player 505 is determined by the URL information corresponding to the first target video.
In the example shown in fig. 5, including the URL information corresponding to the first target video in the output information of the preset service interface satisfies the service requirement of one-key generation and facilitates publishing and sharing the generated video.
Optionally, in this example, the output information of the preset service interface may further include: at least one of a first video description file corresponding to the first target video and video resource data corresponding to the first target video. In this way, when a fault occurs, the first video description file corresponding to the first target video and the video resource data corresponding to the first target video can be used for troubleshooting the fault.
On the basis of any of the above embodiments, in some business scenarios the special effects in the template video may need to be edited before the first target video is generated, for example: adding subtitles, setting transitions, or changing the text associated with a special effect. In these scenarios, the video processing method provided by the embodiment shown in fig. 6 may be employed.
Fig. 6 is a flowchart of another video processing method according to an embodiment of the disclosure. As shown in fig. 6, the method of the present embodiment includes:
s601: at least one original material is obtained.
S602: acquiring an identifier of the template video, where the template video includes at least one special effect, and the at least one special effect includes a first special effect to be edited.
S603: editing information corresponding to the first special effect is obtained, and the editing information is used for updating the first special effect into the second special effect.
S604: invoking a preset service interface to process the at least one original material, the identification of the template video and the editing information corresponding to the first special effect, and obtaining output information of the preset service interface, wherein the output information is used for obtaining a first target video, and the first target video comprises: the at least one original material, the second effect, and other effects of the at least one effect other than the first effect.
The difference between this embodiment and the embodiment shown in fig. 2 is that the input parameters of the preset service interface include editing information corresponding to the first special effect to be edited in addition to the at least one original material and the identifier of the template video. That is, the present embodiment allows the user to edit the special effects in the template video before the first target video is generated.
In this way, in the internal logic, the preset service interface updates the first special effect to the second special effect according to the editing information corresponding to the first special effect to be edited, and then synthesizes the at least one original material, the second special effect and other special effects except the first special effect in the at least one special effect to obtain the first target video.
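One way to sketch that update-then-compose step; the marker-based matching and the flat data model are assumptions for illustration, since the disclosure does not fix a concrete representation:

```python
# Sketch: the interface swaps the marked first special effect for its
# edited second version, keeps every other effect, then composes the
# result with the original materials.
def compose_with_edit(materials: list, effects: list, edit_info: dict) -> dict:
    updated = [edit_info["second"] if e == edit_info["first"] else e
               for e in effects]
    return {"materials": materials, "effects": updated}
```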
S605: and displaying the first target video in the video processing page according to the current video processing type and the output information.
It should be understood that the specific implementation of S605 may be referred to the description of the foregoing embodiments, and will not be repeated herein.
An example is illustrated below in connection with fig. 7.
Fig. 7 is a schematic diagram of yet another set of display interfaces provided by an embodiment of the present disclosure. As shown in fig. 7, the terminal device presents a plurality of template videos in a page 701, each of which includes one or more special effects. The user may click on a template video to view its special effects. Illustratively, assume that the user clicks on template video 1 and the terminal device presents page 702, which includes a video playing area for playing template video 1. Assume that template video 1 includes a first special effect to be edited, in which a pair of couplets is superimposed on part of the video frames. For example, the first line reads "amid the sound of firecrackers the old year departs" and the second line reads "the spring breeze brings warmth". Page 702 also includes a text editing control for the couplet's second line, into which the user can enter a preferred second line, for example: "a chorus of happy songs welcomes the new spring".
With continued reference to fig. 7, in response to detecting the user clicking the "use immediately" control, the terminal device displays page 703. In page 703 the user can select at least one original material from an album or a material gallery. Page 703 includes a "one-key generation" control. In response to detecting that the user clicks the "one-key generation" control, the terminal device invokes the preset service interface to process the at least one original material selected by the user, the identifier of the template video specified by the user, and the editing information of the first special effect input by the user, and obtains the output information of the preset service interface, which includes the URL information corresponding to the first target video. The terminal device then generates a video presentation page 704 according to the current video processing type (one-key generation), acquires the first target video from a preset storage space according to the URL information, displays the video presentation page 704, and plays the first target video through the video player so that the user can view its display effect. As shown in the video presentation page 704, in the first target video the second line of the couplet effect has been updated to "a chorus of happy songs welcomes the new spring".
It should be noted that fig. 7 is an example of special-effect pre-editing in the one-key generation service scenario. Special-effect pre-editing can be implemented in a similar manner for other possible service scenarios (e.g., clipping the first target video, previewing the first target video), which are not illustrated one by one in this embodiment.
In this embodiment, the input parameters of the preset service interface include the editing information of the first special effect to be edited, which meets the business requirement of pre-editing a special effect in the template video before the first target video is synthesized.
Based on any of the above embodiments, the implementation logic inside the preset service interface is illustrated in the following with reference to fig. 8.
Fig. 8 is a schematic diagram of a video processing procedure according to an embodiment of the disclosure. As shown in fig. 8, the preset service interface has 3 input parameters, of which the third is optional:
Input parameter 1: at least one original material;
Input parameter 2: the identifier of the template video;
Input parameter 3: editing information of the first special effect to be edited in the template video.
The preset service interface comprises the following 3 output parameters:
Output parameter 1: a first video description file corresponding to the first target video;
Output parameter 2: video resource data corresponding to the first target video;
output parameter 3: URL information corresponding to the first target video.
With continued reference to fig. 8, the internal logic of the preset service interface includes the following 4 processes, respectively:
(1) Resource preparation process
This process mainly performs one or more of the following: checking the input information, downloading special-effect resources, marking the first special effect to be edited, and the like. The output of process (1) is a video model populated with complete resources.
(2) Generating a first video description file
The input of this process is the video model obtained in process (1). According to the marking of the first special effect, the first special effect in the video model is updated to the second special effect, yielding an updated video model, and the first video description file is generated based on the updated video model. The output of process (2) is the first video description file.
(3) Video asset data generation process
The input of this process is the first video description file obtained in process (2). Video resource data is generated according to the first video description file; the video resource data is used for encoding and generating the first target video. The output of process (3) is the video resource data.
(4) URL information generation process
The input of this process is the video resource data obtained in process (3). By encoding the video resource data, the first target video and the URL information corresponding to it are obtained. The output of process (4) is the URL of the first target video.
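Chained together, the four processes can be sketched as a single function whose three return values correspond to output parameters 1-3. Every step below is a stub; the names and data shapes are assumptions for exposition only.

```python
# Sketch of the interface's internal pipeline: (1) resource preparation,
# (2) description-file generation, (3) resource-data generation,
# (4) encoding plus URL generation.
def preset_service_interface(materials: list, template_id: str,
                             edit_info: dict = None):
    # (1) validate inputs, download effect assets, mark the effect to edit
    model = {"materials": materials, "template": template_id,
             "pending_edit": edit_info}
    # (2) apply the marked edit and emit the first video description file
    description = {"tracks": model["materials"],
                   "effects": [model["template"]],
                   "edit": model["pending_edit"]}
    # (3) turn the description file into encodable video resource data
    resource_data = repr(description).encode()
    # (4) "encode" and store the result, returning its URL
    url = "store://videos/%s.mp4" % template_id
    return description, resource_data, url
```

Making `edit_info` an optional third parameter mirrors fig. 8: callers that do not pre-edit a special effect simply omit it.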
In this embodiment, by providing the preset service interface, service logic common to multiple video processing types is encapsulated inside the preset service interface, so that developers of different video processing types do not need to pay attention to the common service logic, and only need to call the preset service interface, thereby reducing development cost and improving development efficiency.
When the service of each video processing type calls the preset service interface, the product required by that type can be obtained simply by passing in at least one original material specified by the user and the identifier of the template video, so the calling convention is simple. Furthermore, the output information of the preset service interface includes the information that the various video processing types depend on, so the preset service interface can serve multiple video processing services; it therefore offers high extensibility and applies to a wide range of scenarios.
In addition, when the service of each video processing type calls the preset service interface, editing information corresponding to a given special effect can be passed in, which meets the service requirement of special-effect pre-editing before the first target video is synthesized.
Fig. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure. The apparatus may be in the form of software and/or hardware. As shown in fig. 9, the video processing apparatus 900 provided in this embodiment includes: a first acquisition module 901, a second acquisition module 902, a first processing module 903, and a second processing module 904. Wherein,
a first obtaining module 901, configured to obtain at least one original material;
a second obtaining module 902, configured to obtain an identifier of a template video, where the template video includes at least one special effect;
the first processing module 903 is configured to invoke a preset service interface to process the at least one original material and the identifier of the template video, and obtain output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and the second processing module 904 is configured to display the first target video in a video processing page according to the current video processing type and the output information.
In a possible implementation manner, the at least one special effect includes a first special effect to be edited; the apparatus further comprises: the third acquisition module is used for acquiring editing information corresponding to the first special effect, and the editing information is used for updating the first special effect into a second special effect;
the first processing module 903 is specifically configured to: invoking a preset service interface to process the at least one original material, the identification of the template video and the editing information corresponding to the first special effect, and obtaining output information of the preset service interface; the first target video includes: the at least one original material, the second effect, and other effects of the at least one effect other than the first effect.
In a possible implementation manner, the output information includes one or more of the following:
the first video description file is corresponding to the first target video, and comprises a plurality of information items arranged according to a preset data structure, wherein the information items are used for describing the first target video;
video resource data corresponding to the first target video, wherein the video resource data is used for encoding and generating the first target video;
URL information corresponding to the first target video.
In a possible implementation manner, the second processing module 904 is specifically configured to:
generating the video processing page according to the current video processing type, wherein the video processing page comprises a video player;
obtaining the first target video according to the output information;
and displaying the video processing page, and playing the first target video through the video player.
In a possible implementation manner, in a case where the current video processing type is used to indicate that the first target video is clipped, the output information includes: a first video description file corresponding to the first target video; the second processing module 904 is specifically configured to:
and processing the first video description file to obtain the first target video.
In a possible implementation manner, the video processing page further includes a video editing control and a confirmation control, and the second processing module 904 is further configured to:
in response to detecting an editing operation input by the user through the video editing control, adjusting the first video description file according to the editing operation to obtain a second video description file;
in response to detecting a confirmation operation input by the user through the confirmation control, generating a second target video according to the second video description file.
In a possible implementation manner, in a case where the current video processing type is used to indicate previewing the first target video, the output information includes: video resource data corresponding to the first target video; the second processing module 904 is specifically configured to:
and obtaining the first target video by encoding the video resource data.
In a possible implementation manner, the output information further includes: and the first video description file corresponds to the first target video.
In a possible implementation manner, in a case where the current video processing type is used to instruct a one-key generation of the first target video, the output information includes: URL information corresponding to the first target video; the second processing module 904 is specifically configured to:
and acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video.
In a possible implementation manner, the video processing page further includes at least one of a publishing control and a sharing control, and the second processing module 904 is further configured to perform at least one of the following:
in response to the publishing control being triggered, publishing the first target video to a preset video platform according to the URL information corresponding to the first target video;
in response to the sharing control being triggered, sending the first target video to the terminal devices of other users according to the URL information corresponding to the first target video.
In a possible implementation manner, the output information further includes: at least one of video resource data corresponding to the first target video and a first video description file corresponding to the first target video.
The video processing device provided in this embodiment may be used to execute the video processing method provided in any of the above method embodiments, and its implementation principle and technical effects are similar, and will not be described herein.
In order to achieve the above embodiments, the embodiments of the present disclosure further provide an electronic device.
Referring to fig. 10, a schematic diagram of a structure of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown, where the electronic device 1000 may be a terminal device. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data necessary for the operation of the electronic device 1000. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video processing method, including:
acquiring at least one original material;
acquiring an identification of a template video, wherein the template video comprises at least one special effect;
calling a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and displaying the first target video in a video processing page according to the current video processing type and the output information.
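As a rough illustration of this flow, the sketch below strings the steps together in Python. The `OutputInfo` structure, the function names, and the service-interface behavior are all assumptions made for illustration; the patent does not specify a concrete API.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical container for the service interface's output information;
# the field names are illustrative, not taken from the patent.
@dataclass
class OutputInfo:
    description_file: Optional[dict] = None  # first video description file
    resource_data: Optional[bytes] = None    # data to be encoded into the video
    url: Optional[str] = None                # URL of the generated first target video

def call_preset_service_interface(materials: List[str], template_id: str) -> OutputInfo:
    """Stand-in for the preset service interface: combines the original
    materials with the template video's effects and returns output information."""
    description = {
        "materials": list(materials),
        "template": template_id,
        "effects": [f"effect-of-{template_id}"],  # effects carried by the template video
    }
    return OutputInfo(description_file=description,
                      url=f"https://example.invalid/videos/{template_id}")

# Steps 1-2: acquire the original materials and the template video's identification.
materials = ["clip1.mp4", "photo.jpg"]
template_id = "tpl-42"
# Step 3: invoke the preset service interface and obtain its output information.
info = call_preset_service_interface(materials, template_id)
# Step 4 would then display the first target video according to the processing type.
```

The point of the sketch is that the caller never assembles the video itself; it hands materials plus a template identification to one interface and works only with the returned output information.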
According to one or more embodiments of the present disclosure, the at least one effect includes a first effect to be edited; the method further comprises the steps of:
acquiring editing information corresponding to the first special effect, wherein the editing information is used for updating the first special effect into a second special effect;
invoking a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface, wherein the method comprises the following steps:
Invoking a preset service interface to process the at least one original material, the identification of the template video and the editing information corresponding to the first special effect, and obtaining output information of the preset service interface; the first target video includes: the at least one original material, the second effect, and other effects of the at least one effect other than the first effect.
According to one or more embodiments of the present disclosure, the output information includes one or more of the following:
a first video description file corresponding to the first target video, where the first video description file comprises a plurality of information items arranged according to a preset data structure, and the information items are used for describing the first target video;
video resource data corresponding to the first target video, wherein the video resource data is used for encoding and generating the first target video;
and the URL information corresponding to the first target video.
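To make the "information items arranged according to a preset data structure" concrete, a first video description file could look like the small JSON document below. Every key here is an illustrative assumption, not the patent's actual schema.

```python
import json

# Illustrative first video description file; all keys are assumptions.
first_description_file = {
    "version": 1,
    "duration_ms": 15000,
    "tracks": [
        {"type": "video", "material": "clip1.mp4", "start_ms": 0},
        {"type": "effect", "name": "sparkle", "start_ms": 2000},
    ],
}

# A fixed data structure like this survives a serialization round trip unchanged,
# which is what lets a client rebuild the target video from the file alone.
restored = json.loads(json.dumps(first_description_file))
```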
According to one or more embodiments of the present disclosure, displaying the first target video in a video processing page according to a current video processing type and the output information includes:
Generating the video processing page according to the current video processing type, wherein the video processing page comprises a video player;
obtaining the first target video according to the output information;
and displaying the video processing page, and playing the first target video through the video player.
According to one or more embodiments of the present disclosure, in a case where the current video processing type is used to indicate that the first target video is clipped, the output information includes: a first video description file corresponding to the first target video;
obtaining the first target video according to the output information, including:
and processing the first video description file to obtain the first target video.
In accordance with one or more embodiments of the present disclosure, the video processing page further includes a video editing control and a confirmation control, and the method further includes:
in response to detecting an editing operation input by the user through the video editing control, adjusting the first video description file according to the editing operation to obtain a second video description file;
and in response to detecting a confirmation operation input by the user through the confirmation control, generating a second target video according to the second video description file.
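A minimal way to picture this edit-and-confirm flow: apply one editing operation to the first description file to derive a second one, from which the second target video would be generated on confirmation. The dict-based file format and the edit format are assumptions for illustration.

```python
import copy

def apply_edit(description: dict, edit: dict) -> dict:
    """Derive a second description file by applying one editing operation
    to the first; the edit targets a track by name (illustrative format)."""
    second = copy.deepcopy(description)  # keep the first description file intact
    for track in second.get("tracks", []):
        if track.get("name") == edit.get("target"):
            track.update(edit.get("changes", {}))
    return second

first_file = {"tracks": [{"type": "effect", "name": "sparkle", "start_ms": 2000}]}
edit_op = {"target": "sparkle", "changes": {"start_ms": 500}}
second_file = apply_edit(first_file, edit_op)
# On confirmation, the second target video would be generated from second_file.
```

The deep copy matters: the first description file stays valid, so the edit can be discarded if the user never confirms.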
In accordance with one or more embodiments of the present disclosure, in a case where the current video processing type is used to indicate previewing of the first target video, the output information includes: video resource data corresponding to the first target video;
obtaining the first target video according to the output information, including:
and obtaining the first target video by encoding the video resource data.
According to one or more embodiments of the present disclosure, the output information further includes: a first video description file corresponding to the first target video.
In accordance with one or more embodiments of the present disclosure, in a case where the current video processing type is used to instruct a one-touch generation of the first target video, the output information includes: URL information corresponding to the first target video;
obtaining the first target video according to the output information, including:
and acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video.
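The three processing types above consume different parts of the output information. A sketch of that dispatch follows; the type names and output keys are illustrative assumptions.

```python
def obtain_first_target_video(processing_type: str, output: dict):
    """Pick the path to the first target video based on the current
    video processing type; each branch uses a different output item."""
    if processing_type == "clip":
        # Clip/edit: build the playable video from the description file.
        return ("from_description", output["description_file"])
    if processing_type == "preview":
        # Preview: encode the raw video resource data locally.
        return ("encoded", output["resource_data"])
    if processing_type == "one_touch":
        # One-touch generation: fetch the finished video from storage by URL.
        return ("fetched_by_url", output["url"])
    raise ValueError(f"unknown processing type: {processing_type}")

result = obtain_first_target_video("one_touch", {"url": "https://example.invalid/v/1"})
```

This is why the service interface may return different combinations of description file, resource data, and URL: only the items the current processing type needs have to be present.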
According to one or more embodiments of the present disclosure, the video processing page further includes at least one of a publish control and a share control, and the method further includes at least one of the following:
In response to detecting that the publish control is triggered, publishing the first target video to a preset video platform according to the URL information corresponding to the first target video;
and in response to detecting that the share control is triggered, sending the first target video to terminal devices of other users according to the URL information corresponding to the first target video.
According to one or more embodiments of the present disclosure, the output information further includes: at least one of video resource data corresponding to the first target video and a first video description file corresponding to the first target video.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
the first acquisition module is used for acquiring at least one original material;
the second acquisition module is used for acquiring the identification of the template video, wherein the template video comprises at least one special effect;
the first processing module is used for calling a preset service interface to process the at least one original material and the identification of the template video and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
And the second processing module is used for displaying the first target video in a video processing page according to the current video processing type and the output information.
According to one or more embodiments of the present disclosure, the at least one effect includes a first effect to be edited; the apparatus further comprises: the third acquisition module is used for acquiring editing information corresponding to the first special effect, and the editing information is used for updating the first special effect into a second special effect;
the first processing module is specifically configured to: invoking a preset service interface to process the at least one original material, the identification of the template video and the editing information corresponding to the first special effect, and obtaining output information of the preset service interface; the first target video includes: the at least one original material, the second effect, and other effects of the at least one effect other than the first effect.
According to one or more embodiments of the present disclosure, the output information includes one or more of the following:
a first video description file corresponding to the first target video, where the first video description file comprises a plurality of information items arranged according to a preset data structure, and the information items are used for describing the first target video;
Video resource data corresponding to the first target video, wherein the video resource data is used for encoding and generating the first target video;
and the URL information corresponding to the first target video.
According to one or more embodiments of the present disclosure, the second processing module is specifically configured to:
generating the video processing page according to the current video processing type, wherein the video processing page comprises a video player;
obtaining the first target video according to the output information;
and displaying the video processing page, and playing the first target video through the video player.
According to one or more embodiments of the present disclosure, in a case where the current video processing type is used to indicate that the first target video is clipped, the output information includes: a first video description file corresponding to the first target video; the second processing module is specifically configured to:
and processing the first video description file to obtain the first target video.
According to one or more embodiments of the present disclosure, the video processing page further includes a video editing control and a confirmation control, and the second processing module is further configured to:
In response to detecting an editing operation input by the user through the video editing control, adjusting the first video description file according to the editing operation to obtain a second video description file;
and in response to detecting a confirmation operation input by the user through the confirmation control, generating a second target video according to the second video description file.
In accordance with one or more embodiments of the present disclosure, in a case where the current video processing type is used to indicate previewing of the first target video, the output information includes: video resource data corresponding to the first target video; the second processing module is specifically configured to:
and obtaining the first target video by encoding the video resource data.
According to one or more embodiments of the present disclosure, the output information further includes: a first video description file corresponding to the first target video.
In accordance with one or more embodiments of the present disclosure, in a case where the current video processing type is used to instruct a one-touch generation of the first target video, the output information includes: URL information corresponding to the first target video; the second processing module is specifically configured to:
And acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video.
According to one or more embodiments of the present disclosure, the video processing page further includes at least one of a publish control and a share control, and the second processing module is further configured to perform at least one of the following:
in response to detecting that the publish control is triggered, publishing the first target video to a preset video platform according to the URL information corresponding to the first target video;
and in response to detecting that the share control is triggered, sending the first target video to terminal devices of other users according to the URL information corresponding to the first target video.
According to one or more embodiments of the present disclosure, the output information further includes: at least one of video resource data corresponding to the first target video and a first video description file corresponding to the first target video.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions to implement the method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method as described above in the first aspect and the various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described above in the first aspect and the various possible designs of the first aspect.
The foregoing description is only of the preferred embodiments of the present disclosure and an illustration of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (15)

1. A video processing method, comprising:
acquiring at least one original material;
acquiring an identification of a template video, wherein the template video comprises at least one special effect;
Calling a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and displaying the first target video in a video processing page according to the current video processing type and the output information.
2. The method of claim 1, wherein the at least one effect includes a first effect to be edited; the method further comprises the steps of:
acquiring editing information corresponding to the first special effect, wherein the editing information is used for updating the first special effect into a second special effect;
invoking a preset service interface to process the at least one original material and the identification of the template video, and obtaining output information of the preset service interface, wherein the method comprises the following steps:
invoking a preset service interface to process the at least one original material, the identification of the template video and the editing information corresponding to the first special effect, and obtaining output information of the preset service interface; the first target video includes: the at least one original material, the second effect, and other effects of the at least one effect other than the first effect.
3. The method of claim 1 or 2, wherein the output information comprises one or more of:
a first video description file corresponding to the first target video, where the first video description file comprises a plurality of information items arranged according to a preset data structure, and the information items are used for describing the first target video;
video resource data corresponding to the first target video, wherein the video resource data is used for encoding and generating the first target video;
and the URL information corresponding to the first target video.
4. The method according to claim 1 or 2, wherein displaying the first target video in a video processing page according to a current video processing type and the output information comprises:
generating the video processing page according to the current video processing type, wherein the video processing page comprises a video player;
obtaining the first target video according to the output information;
and displaying the video processing page, and playing the first target video through the video player.
5. The method of claim 4, wherein, in the case where the current video processing type is used to indicate the editing of the first target video, the output information includes: a first video description file corresponding to the first target video;
Obtaining the first target video according to the output information, including:
and processing the first video description file to obtain the first target video.
6. The method of claim 5, wherein the video processing page further comprises a video editing control and a confirmation control, the method further comprising:
in response to detecting an editing operation input by the user through the video editing control, adjusting the first video description file according to the editing operation to obtain a second video description file;
and in response to detecting a confirmation operation input by the user through the confirmation control, generating a second target video according to the second video description file.
7. The method of claim 4, wherein, in the case where the current video processing type is used to indicate previewing the first target video, the output information comprises: video resource data corresponding to the first target video;
obtaining the first target video according to the output information, including:
and obtaining the first target video by encoding the video resource data.
8. The method of claim 7, wherein the output information further comprises: a first video description file corresponding to the first target video.
9. The method of claim 4, wherein, in a case where the current video processing type is used to instruct one-touch generation of the first target video, the output information comprises: URL information corresponding to the first target video;
obtaining the first target video according to the output information, including:
and acquiring the first target video from a preset storage space according to the URL information corresponding to the first target video.
10. The method of claim 9, wherein the video processing page further comprises at least one of a publish control and a share control, and the method further comprises at least one of the following:
in response to detecting that the publish control is triggered, publishing the first target video to a preset video platform according to the URL information corresponding to the first target video;
and in response to detecting that the share control is triggered, sending the first target video to terminal devices of other users according to the URL information corresponding to the first target video.
11. The method according to claim 9 or 10, wherein the output information further comprises: at least one of video resource data corresponding to the first target video and a first video description file corresponding to the first target video.
12. A video processing apparatus, comprising:
the first acquisition module is used for acquiring at least one original material;
the second acquisition module is used for acquiring the identification of the template video, wherein the template video comprises at least one special effect;
the first processing module is used for calling a preset service interface to process the at least one original material and the identification of the template video and obtaining output information of the preset service interface; the output information is used for obtaining a first target video, and the first target video comprises the at least one original material and the at least one special effect;
and the second processing module is used for displaying the first target video in a video processing page according to the current video processing type and the output information.
13. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
The processor executes the computer-executable instructions to implement the method of any one of claims 1 to 11.
14. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
CN202210773086.6A 2022-06-30 2022-06-30 Video processing method, apparatus, device, storage medium, and program Pending CN117376636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210773086.6A CN117376636A (en) 2022-06-30 2022-06-30 Video processing method, apparatus, device, storage medium, and program

Publications (1)

Publication Number Publication Date
CN117376636A true CN117376636A (en) 2024-01-09

Family

ID=89391607

Country Status (1)

Country Link
CN (1) CN117376636A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination