CN112165652B - Video processing method, apparatus, device, and computer-readable storage medium

Info

Publication number
CN112165652B
CN112165652B (application CN202011034493.2A)
Authority
CN
China
Prior art keywords
interactive
video
node information
branch node
interaction
Prior art date
Legal status
Active
Application number
CN202011034493.2A
Other languages
Chinese (zh)
Other versions
CN112165652A
Inventor
王启明 (Wang Qiming)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011034493.2A priority Critical patent/CN112165652B/en
Publication of CN112165652A publication Critical patent/CN112165652A/en
Application granted
Publication of CN112165652B publication Critical patent/CN112165652B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

Embodiments of the present disclosure provide a video processing method, apparatus, device, and computer-readable storage medium. The method includes: acquiring information of at least one interactive branch node and a target video clip corresponding to each interactive option to be selected in the interactive branch node information, the interactive branch node information comprising an interactive question and at least two corresponding interactive options to be selected; writing the target video clips corresponding to the interactive options to be selected in the same interactive branch node and the corresponding interactive node information into different tracks of the same video file; and generating a configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks of the same video file. Because the plurality of video clips are written into different tracks of the same video file to form a multi-track interactive video file, multiple video files do not need to be called while the interactive video file is produced and used, which improves the processing efficiency of the interactive video file.

Description

Video processing method, apparatus, device, and computer-readable storage medium
Technical Field
The embodiments of the present disclosure relate to the field of video processing technologies, and in particular, to a video processing method, apparatus, device, and computer-readable storage medium.
Background
With the development of technology, more and more users choose to watch videos on terminal devices. To meet users' demand for personalized video content and for fragmented entertainment, interactive video, a form that lies between video and games, has gradually entered users' lives.
In the prior art, to generate an interactive video, a configuration file is generally created in advance, which records the association between the video clips in each video file and the interactive information. The interactive information includes an interactive question and interactive options, and the generated interactive video contains interactive sub-videos of at least one interaction stage. When the interactive video is generated, the interactive information is first presented to the user; after the user is detected to trigger a certain interactive option for the displayed question, the video file corresponding to that option is determined through the configuration file, the video clip in that file is called to generate the interactive sub-video of this interaction stage, and the interactive sub-videos of all interaction stages are concatenated to produce the final interactive video.
Therefore, in the prior art, the final interactive video can only be obtained after video files have been called many times and the interactive sub-videos of the individual interaction stages have been concatenated, so the generation efficiency of interactive video is low.
Disclosure of Invention
Embodiments of the present disclosure provide a video processing method, apparatus, device, and computer-readable storage medium, so as to solve the technical problems in the prior art that producing an interactive video file is relatively cumbersome and that the video files are called inefficiently during playback.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
respectively writing target video clips corresponding to interaction options to be selected in the same interaction branch node and corresponding interaction node information into different tracks in the same video file;
and generating the configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks in the same video file.
In a second aspect, an embodiment of the present disclosure provides a video processing method, including:
receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information;
acquiring a configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks;
if the display condition of the interactive branch node information is determined to be met, controlling a display interface to display the interactive node information;
and responding to the confirmation operation of the user on the target interaction option in the interaction node information, and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
In a third aspect, an embodiment of the present disclosure provides a video processing apparatus, including:
the acquisition module is used for acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
the processing module is used for writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node and the corresponding interaction node information into different tracks in the same video file respectively;
and the generating module is used for generating the configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks in the same video file.
In a fourth aspect, an embodiment of the present disclosure provides a video processing apparatus, including:
the receiving module is used for receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information;
the interactive video file acquisition module is used for acquiring the configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks;
the display module is used for controlling a display interface to display the interactive node information if the display condition of the interactive branch node information is determined to be met;
and the control module is used for responding to the confirmation operation of the user on the target interaction option in the interaction node information and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method as set forth in the first or second aspect and various possible designs of the first or second aspect above.
In a sixth aspect, the present disclosure provides a computer-readable storage medium, which stores computer-executable instructions, and when a processor executes the computer-executable instructions, the video processing method according to the first aspect or the second aspect and various possible designs of the first aspect or the second aspect is implemented.
In the video processing method, apparatus, device, and computer-readable storage medium provided in these embodiments, information of at least one interactive branch node and the target video clip corresponding to each interactive option to be selected in that information are acquired, the target video clips corresponding to the interactive options of the same interactive branch node and the corresponding interactive node information are written into different tracks of the same video file, and the configured interactive video file is generated from the target video clips and the interactive node information corresponding to the tracks of the same video file. Because the plurality of video clips are written into different tracks of the same video file, a multi-track interactive video file is formed, so that multiple video files do not need to be called while the interactive video file is produced and used, which improves the processing efficiency of the interactive video file.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a network architecture according to the present embodiment;
fig. 3 is a schematic flowchart of a video processing method according to a second embodiment of the disclosure;
fig. 4 is a schematic diagram of an interactive video file provided by an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a video processing method according to a third embodiment of the disclosure;
fig. 6 is a schematic flowchart of a video processing method according to a fourth embodiment of the disclosure;
FIG. 7 is a schematic view of an interactive interface provided by an embodiment of the present disclosure;
FIG. 8 is a schematic view of yet another interface interaction provided by an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a video processing apparatus according to a fifth embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a video processing apparatus according to a sixth embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a video processing apparatus according to a seventh embodiment of the disclosure;
fig. 12 is a schematic structural diagram of an electronic device according to a seventh embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The present disclosure provides a video processing method, apparatus, device and readable storage medium, in order to solve the above-mentioned technical problems that in the prior art, a plurality of video files need to be called in the process of making and playing interactive video files, the operation is complicated, and the video processing efficiency is not high.
It should be noted that the video processing method, apparatus, device and readable storage medium provided by the present disclosure may be applied to various scenes for interactive video file production.
In the prior art, in order to implement the production of an interactive video file, a configuration file needs to be first constructed according to a preset script, and a plurality of video files are called according to the configuration file, but when an interactive option in the interactive video file is complex, the configuration file is correspondingly complex, which results in a difficulty in producing the interactive video file. In addition, in the using process of the interactive video file, the calling efficiency of the video file is not high, and the user experience is poor.
In the course of solving this technical problem, the inventor found through research that, to improve the efficiency of producing and invoking interactive video, the target videos and the interactive information can each be added to different tracks of the same video file and thus fused into a single, integral video file. As a result, during subsequent playback of the interactive video no separate video files need to be called; the sub-videos can be played directly from the different video tracks, which improves the playback efficiency of the interactive video.
Fig. 1 is a schematic flowchart of a video processing method according to a first embodiment of the disclosure, as shown in fig. 1, the method includes:
step 101, obtaining at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive topic and at least two corresponding interactive options to be selected.
The execution subject of the present embodiment is a video processing apparatus, and the video processing apparatus can be coupled to a terminal device. The user can interact with the terminal equipment in an interface interaction mode and the like.
In this embodiment, in order to generate an interactive video file that lies between video and games, information of at least one interactive branch node and the target video clip corresponding to each interactive option to be selected in that information may first be obtained. The interactive branch node information includes an interactive question and at least two corresponding interactive options to be selected. For example, an interactive question may be character A asking character B whether to go and watch a newly released movie together; correspondingly, the interactive options to be selected may be "go", "don't go", and so on. Further, each interactive option to be selected may correspond to one target video clip.
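For illustration only, the interactive branch node information described above could be represented by a simple data structure such as the following sketch; the class names, fields, and file paths are hypothetical, and the disclosure does not prescribe any concrete format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InteractiveOption:
    """One interactive option to be selected and its target video clip."""
    text: str              # option text shown to the user, e.g. "Go"
    target_clip_path: str  # target video clip chosen for this option

@dataclass
class InteractiveBranchNodeInfo:
    """Information of one interactive branch node."""
    question: str                     # the interactive question
    options: List[InteractiveOption]  # at least two options to be selected
    display_timestamp: float          # when the node is shown, in seconds

# Example matching the description above: character A asks character B
# whether to go and watch a newly released movie together.
node = InteractiveBranchNodeInfo(
    question="Shall we go and watch the newly released movie together?",
    options=[
        InteractiveOption("Go", "clips/go_to_movie.mp4"),
        InteractiveOption("Don't go", "clips/dont_go.mp4"),
    ],
    display_timestamp=12.5,
)
print(node.question, [o.text for o in node.options])
```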
Optionally, the interactive branch node information and the target video segments corresponding to the interactive options to be selected in the interactive branch node information may be input on the terminal device by the user in an interface interaction manner.
In one implementation, fig. 2 is a schematic diagram of the network architecture on which this embodiment is based. As shown in fig. 2, the network architecture at least includes a terminal device 1 and a server 2, where the server 2 is provided with a video processing apparatus. The video processing apparatus may be written in C/C++, Java, Shell, Python, or the like; the terminal device 1 may be a desktop computer, a tablet computer, or the like. The terminal device 1 is communicatively connected with the server 2 so that the two can exchange information. Based on this network architecture, the video processing apparatus serving as the execution subject of this embodiment may also be coupled to the server, and the server is communicatively connected with the terminal device. In this way, the video processing apparatus can obtain the interactive branch node information entered by the user on the terminal device and the target video clips corresponding to the interactive options to be selected in that information, and perform the data processing operations.
And 102, respectively writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node and the corresponding interaction node information into different tracks in the same video file.
In this embodiment, since each video file has a plurality of different tracks, different video clips may be added to each track. In practical applications, the video clips in each track can be played synchronously or individually. Based on the content, the target video clips corresponding to the interaction options to be selected in the same interaction branch node and the corresponding interaction node information can be selected and written into different tracks in the same video file respectively.
Unlike the prior art, in which a preset configuration file is used to record a plurality of different interactive branch nodes and multiple video files are called according to that configuration file, the present disclosure writes the target video clips corresponding to the interactive options to be selected in the same interactive branch node and the corresponding interactive node information into different tracks of the same video file, so that during playback of the interactive video different tracks can be controlled to play different video clips, which improves the efficiency of producing and playing the interactive video file.
And 103, generating a configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks in the same video file.
In this embodiment, after the target video segment corresponding to each interactive option to be selected in the same interactive branch node and the corresponding interactive node information are written into different tracks in the same video file, the configured interactive video file can be generated according to the target video segment and the interactive node information corresponding to each track in the same video file.
Further, on the basis of the first embodiment, the step 101 specifically includes:
receiving an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one interactive branch node information.
And responding to the interactive video configuration request, and displaying the video clips to be selected corresponding to the interactive options to be selected in the interactive branch node information.
And taking the video clip to be selected confirmed by the user aiming at each interactive option to be selected as the corresponding target video clip.
In this embodiment, the interactive branch node information and the target video segments corresponding to the interactive options to be selected in the interactive branch node information may be input by the user on the terminal device in an interface interaction manner.
Specifically, the video processing apparatus may receive an interactive video configuration request sent by a user, where the interactive video configuration request includes information about at least one interactive branch node. The interactive video configuration request may be specifically generated by a user triggering a video configuration icon displayed on the display interface.
After receiving the interactive video configuration request, the video processing apparatus can, according to the request, control the display interface of the terminal device to display the plurality of candidate video clips corresponding to each interactive option to be selected in the interactive branch node information, so that the user can make a selection. The candidate video clip that the user confirms for each interactive option to be selected is then taken as the corresponding target video clip.
In the video processing method provided by this embodiment, by obtaining at least one piece of interaction branch node information and a target video segment corresponding to each to-be-selected interaction option in the interaction branch node information, the target video segment corresponding to each to-be-selected interaction option in the same interaction branch node and corresponding interaction node information are respectively written into different tracks in the same video file, and a configured interaction video file is generated according to the target video segment corresponding to each track in the same video file and the interaction node information. Because the plurality of video clips are written into different tracks of the same video file, a multi-track interactive video file is formed, so that the plurality of video files are not required to be called in the manufacturing and using processes of the interactive video file, and the processing efficiency of the interactive video file is improved.
Fig. 3 is a schematic flow chart of a video processing method according to a second embodiment of the disclosure, and based on the first embodiment, as shown in fig. 3, step 102 specifically includes:
step 201, writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node into different video tracks in the same video file.
Step 202, writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
In this embodiment, since one interactive branch node often corresponds to at least two target video segments, in order to play different target video segments according to the to-be-selected interactive option selected by the user in the subsequent video playing process, the target video segments corresponding to each to-be-selected interactive option in the same interactive branch node can be respectively added to different video tracks. For example, if the interactive branch node corresponds to four different interactive options to be selected, that is, four different target video clips, the four target video clips may be added to the video tracks 0 to 3, respectively. In addition, the video file can also comprise a subtitle track, and the corresponding interactive node information in the same interactive branch node can also be written into the subtitle track in the same video file to obtain the interactive video file.
Fig. 4 is a schematic diagram of an interactive video file provided by an embodiment of the present disclosure. As shown in fig. 4, the interactive video file has a plurality of different tracks, each track carries a different target video clip, and a specific video track can be selected according to the actual requirement to play its target video clip. For example, video track 0 may be selected to play target video clip 0, video track 1 to play target video clip 1, video track 2 to play target video clip 2, and video track 3 to play target video clip 3. In addition, the corresponding interactive node information of the interactive branch node can be written into the subtitle track of the same video file.
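As a minimal sketch of how such a multi-track file could be assembled, assuming the target video clips are ordinary MP4 files and that the ffmpeg tool is available (the disclosure itself does not mandate any particular container or tool, and all paths below are hypothetical):

```python
import subprocess

def mux_interactive_file(clip_paths, subtitle_path, output_path):
    """Write each target video clip to its own video track and the interactive
    node information (carried here as a subtitle file) to a subtitle track of
    one output file. Audio mapping is omitted for brevity."""
    cmd = ["ffmpeg", "-y"]
    for clip in clip_paths:                   # one input per target video clip
        cmd += ["-i", clip]
    cmd += ["-i", subtitle_path]              # interactive node information
    for i in range(len(clip_paths)):
        cmd += ["-map", f"{i}:v"]             # video track i <- clip i
    cmd += ["-map", f"{len(clip_paths)}:s"]   # subtitle track <- node info
    cmd += ["-c:v", "copy", "-c:s", "mov_text", output_path]
    subprocess.run(cmd, check=True)

# Four target video clips of one interactive branch node, as in fig. 4.
mux_interactive_file(
    ["clip0.mp4", "clip1.mp4", "clip2.mp4", "clip3.mp4"],
    "nodes.srt",
    "interactive.mp4",
)
```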
Further, on the basis of any of the above embodiments, step 201 specifically includes:
determining a video track to be written according to the number of the target video clips;
and writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
In this embodiment, for at least two target video segments corresponding to each to-be-selected interaction option in the same interaction branch node, the video track to be written in may be determined according to the number of the target video segments. For example, if the number of target video segments is four, the number of video tracks to be written is correspondingly four.
And the target video clips corresponding to the interactive options to be selected are written into the video tracks to be written according to a time alignment strategy. Each target video clip is written into one video track, so that during subsequent playback the video track carrying the target video clip selected by the user can be controlled accurately.
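A minimal sketch of this step, with hypothetical names: the number of video tracks to be written is taken from the number of target video clips, and every clip is given the same start time so that the tracks stay time-aligned at the branch node.

```python
def plan_track_writes(target_clips, node_start_time):
    """Determine the video tracks to be written from the number of target
    video clips, and give every clip the same start time (time alignment)."""
    return [
        {"track": index, "clip": clip, "start_time": node_start_time}
        for index, clip in enumerate(target_clips)
    ]

# Four clips -> four video tracks, all aligned to the branch node at 12.5 s.
for entry in plan_track_writes(
        ["clip0.mp4", "clip1.mp4", "clip2.mp4", "clip3.mp4"], 12.5):
    print(entry)
```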
Further, on the basis of any of the above embodiments, the interactive video configuration request further includes a display timestamp of the interactive branch node information;
step 202 specifically includes:
determining the writing position of the interactive node information in the subtitle track according to the display timestamp;
and writing the interactive node information into the subtitle track according to the writing position.
In this embodiment, because the playing times corresponding to different interactive node information are different, the interactive branch node may be labeled by using the display timestamp.
Further, the writing of the interactive node information can be carried out according to the display timestamp. Specifically, the writing position of the interactive node information in the subtitle track may be determined from the display timestamp, and the interactive node information may then be written into the subtitle track at that writing position.
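One possible realization, assuming the interactive node information is carried as an SRT-style subtitle cue whose payload is JSON (the actual on-track encoding is not specified by the disclosure, and the field names are hypothetical):

```python
import json

def format_srt_time(seconds):
    """Format seconds as HH:MM:SS,mmm, the timestamp form used by SRT."""
    ms = int(round(seconds * 1000))
    hours, rest = divmod(ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    secs, millis = divmod(rest, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

def node_info_to_cue(index, node, duration=5.0):
    """Place the interactive node information at the position in the subtitle
    track given by its display timestamp; the payload is carried as JSON."""
    start = format_srt_time(node["display_timestamp"])
    end = format_srt_time(node["display_timestamp"] + duration)
    payload = json.dumps({"question": node["question"], "options": node["options"]})
    return f"{index}\n{start} --> {end}\n{payload}\n"

print(node_info_to_cue(
    1,
    {"display_timestamp": 12.5,
     "question": "Shall we go and watch the newly released movie together?",
     "options": ["Go", "Don't go"]},
))
```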
In the video processing method provided by this embodiment, the target video segments corresponding to the interactive options to be selected in the same interactive branch node are written into different video tracks in the same video file. And writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file to obtain a complete interactive video file. Therefore, in the process of making and using the interactive video files, the calling operation of a plurality of video files is not required to be carried out too much, and the processing efficiency of the interactive video is improved.
Fig. 5 is a schematic flow chart of a video processing method according to a third embodiment of the present disclosure, and on the basis of any of the foregoing embodiments, as shown in fig. 5, before step 101, the method further includes:
step 301, obtaining an initial video clip selected by a user.
Step 302, writing the initial video clip into a preset video track in the video file.
In this embodiment, the interactive video file may be a video file that starts directly with interactive information, or a video file that starts with a common video clip. For example, the interactive video file may be a question-and-answer video: after the user starts playing it, the first piece of interactive node information in the video is presented directly. As another example, the interactive video file may be an interactive video that starts with a common video clip: after the user starts playing it, a plot video is played first.
Correspondingly, when the interactive video file starts with a common video clip, the initial video clip selected by the user can be obtained before the interactive branch node information and the target video clips corresponding to the interactive options to be selected are obtained, and the initial video clip is written into a preset video track of the video file.
In the video processing method provided by this embodiment, before the interactive branch node information and the target video clips corresponding to the interactive options to be selected are acquired, the initial video clip selected by the user may also be acquired and written into a preset video track of the video file, so that the interactive video file is more diversified and suitable for different application scenarios.
Fig. 6 is a schematic flowchart of a video processing method according to a fourth embodiment of the present disclosure, and as shown in fig. 6, the method includes:
step 401, receiving an interactive video playing request sent by a user, where the interactive video playing request includes interactive video identification information.
The execution subject of the present embodiment is a video processing apparatus, and the video processing apparatus can be coupled to a terminal device. The user can interact with the terminal equipment in an interface interaction mode and the like.
In this embodiment, after the interactive video file has been generated, the user can view and play it on the terminal device. Specifically, the user may trigger the terminal device to send an interactive video playing request that includes interactive video identification information; accordingly, the video processing apparatus can play the interactive video file corresponding to that identification information.
Step 402, acquiring an interactive video file which is configured and corresponds to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks.
In this embodiment, after the interactive video playing request sent by the user is obtained, the configured interactive video file corresponding to the interactive video identification information may be obtained. The interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks.
And step 403, if the display condition of the interactive branch node information is determined to be met, controlling a display interface to display the interactive node information.
In this embodiment, since the interactive video file includes a plurality of interactive branch nodes, in order to achieve accurate display of the interactive branch nodes, it may be determined whether a display condition of the interactive branch node information is currently satisfied, and when the display condition is satisfied, the display interface is controlled to display the interactive branch node.
And step 404, responding to the confirmation operation of the user on the target interaction option in the interaction node information, and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
In this embodiment, the user can select an interactive option corresponding to the interactive branch node according to actual requirements. Correspondingly, the video processing device can acquire the confirmation operation of the user on the target interaction option in the interaction node information, and control the video track written with the target video clip corresponding to the target interaction option to play the target video clip according to the confirmation operation.
For example, an interactive branch node may present the question of character A asking character B whether to go and watch a newly released movie together. Accordingly, the target interaction option may be the "go" option. In this case, the video processing apparatus may control the video track into which the target video clip of watching the movie together has been written to play that target video clip.
Fig. 7 is a schematic view of an interaction interface provided by the embodiment of the present disclosure, and as shown in fig. 7, a target interaction option 1 and a target interaction option 2 may be displayed on a display interface, and after a user triggers the target interaction option 2, a target video clip corresponding to the target interaction option 2 may be displayed on the display interface.
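A playback-side sketch of this step is given below; the multi-track player API (select_video_track, seek, play) is hypothetical and merely stands in for whatever player the terminal device actually uses:

```python
class DemoPlayer:
    """Stand-in for a multi-track capable player; the API is hypothetical."""
    def select_video_track(self, index): print(f"switch to video track {index}")
    def seek(self, seconds): print(f"seek to {seconds}s")
    def play(self): print("play")

def on_option_confirmed(player, option_to_track, option_text, node_time):
    """When the user confirms a target interaction option, control the video
    track into which the corresponding target video clip was written to play."""
    player.select_video_track(option_to_track[option_text])
    player.seek(node_time)  # continue playback from the branch node position
    player.play()

# The user confirms the "Go" option of the node shown at 12.5 s.
on_option_confirmed(DemoPlayer(), {"Go": 0, "Don't go": 1}, "Go", 12.5)
```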
Further, on the basis of the fourth embodiment, after the step 402, the method further includes:
and judging whether the preset video track before the interactive branch node information is written into the position comprises an initial video clip or not.
And if the initial video clip is determined to be included, playing the initial video.
In this embodiment, the interactive video file may be a video file starting from interactive information, or may be a video file starting from a common initial video file.
Therefore, after the interactive video playing request sent by the user is obtained, it can first be determined whether the preset video track preceding the writing position of the interactive branch node information contains an initial video clip. If it does, the preset video track into which the initial video clip has been written can be controlled to play the initial video clip.
Further, on the basis of the fourth embodiment, the method further includes:
and judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node or not in the process of playing the initial video clip or the target video clip.
And if the playing time is consistent with the display time stamp of the interactive branch node, determining that the display condition of the interactive branch node information is met.
In this embodiment, since the playing time corresponding to different interactive node information is different, the interactive branch node may be labeled by using the display timestamp.
Therefore, after the configured interactive video file corresponding to the interactive video identification information is acquired, whether the playing time is consistent with the display timestamp of the corresponding interactive branch node or not can be judged in the process of playing the initial video clip or the target video clip. And when the timestamps are consistent, judging that the display condition of the interactive branch node information is met currently. At this time, the display interface can be controlled to display the interactive branch node information, so that the user can interact according to the interactive branch node information.
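A minimal sketch of this display-condition check, assuming the display timestamps of the branch nodes are known in seconds and the check runs periodically during playback (the tolerance value is illustrative):

```python
def matching_branch_node(playback_time, node_timestamps, tolerance=0.04):
    """Return the id of the interactive branch node whose display timestamp
    matches the current playback time (within a small tolerance), or None.
    Intended to be called periodically while the initial or a target clip plays."""
    for node_id, timestamp in node_timestamps.items():
        if abs(playback_time - timestamp) <= tolerance:
            return node_id
    return None

nodes = {"node-1": 12.5, "node-2": 47.0}
print(matching_branch_node(12.51, nodes))  # node-1 -> display condition met
print(matching_branch_node(30.00, nodes))  # None   -> keep playing
```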
As an implementation manner, after the user finishes playing the interactive video file, the user may perform an export operation on the played interactive video file. Specifically, a video export request sent by a user may be obtained, where the video export request includes a plurality of target interaction options. The plurality of target interaction options may be target interaction options input by a user in the process of viewing the interactive video file.
The target video clips corresponding to the target interaction options are acquired respectively, and are uploaded to a preset network platform and/or saved to a preset storage path. It should be noted that, because each piece of interactive node information corresponds to exactly one target video clip among the clips selected through the target interaction options, the interactive video exported for the target interaction options can be played by any player.
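As an illustrative sketch of such an export, assuming the chosen target video clips share codecs and that ffmpeg's concat demuxer is available (neither is mandated by the disclosure, and all paths are hypothetical):

```python
import subprocess
import tempfile

def export_chosen_path(chosen_clips, output_path):
    """Concatenate the target video clips that correspond to the user's chosen
    interaction options into one flat file that any ordinary player can play."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in chosen_clips:
            f.write(f"file '{clip}'\n")   # concat demuxer list entry
        list_path = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", output_path],
        check=True,
    )

# One clip per interactive node along the chosen path.
export_chosen_path(["intro.mp4", "go_to_movie.mp4", "ending_a.mp4"], "export.mp4")
```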
Fig. 8 is a schematic view of another interface interaction provided by an embodiment of the present disclosure. As shown in fig. 8, the user can export the interactive video through interface interaction, specifically by triggering a preset export icon. Before the interactive video is exported, the interaction option icons on the display interface can be triggered through interface interaction so as to go through the plurality of interactive nodes.
In the video processing method provided by this embodiment, by acquiring an interactive video playing request sent by a user and acquiring an interactive video file corresponding to interactive video identification information, a video track in which a target video clip corresponding to a target interactive option is written is controlled to play the target video clip in response to a confirmation operation of the user on the target interactive option in interactive node information. Therefore, the played video can better meet the personalized requirements of the user, and the user experience is improved.
Fig. 9 is a schematic structural diagram of a video processing apparatus according to a fifth embodiment of the present disclosure, as shown in fig. 9, the apparatus includes: an acquisition module 51, a processing module 52 and a generation module 53. The acquiring module 51 is configured to acquire at least one interactive branch node information and a target video segment corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive topic and at least two corresponding interactive options to be selected. And the processing module 52 is configured to write the target video segment corresponding to each interactive option to be selected in the same interactive branch node and the corresponding interactive node information into different tracks in the same video file, respectively. And a generating module 53, configured to generate the configured interactive video file according to the target video segments and the interactive node information corresponding to the tracks in the same video file.
Further, on the basis of the fifth embodiment, the obtaining module is configured to: receiving an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one interactive branch node information. And responding to the interactive video configuration request, and displaying the video clips to be selected corresponding to the interactive options to be selected in the interactive branch node information. And taking the video clip to be selected confirmed by the user aiming at each interactive option to be selected as the corresponding target video clip.
Fig. 10 is a schematic structural diagram of a video processing apparatus according to a sixth embodiment of the present disclosure, and based on the fifth embodiment, as shown in fig. 10, the processing module includes: a first processing unit 61 and a second processing unit 62. The first processing unit 61 is configured to write the target video segment corresponding to each interactive option to be selected in the same interactive branch node into different video tracks in the same video file. And the second processing unit 62 is configured to write the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
Further, on the basis of any of the above embodiments, the first processing unit is configured to: and determining the video tracks to be written according to the number of the target video clips. And writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
Further, on the basis of any of the above embodiments, the interactive video configuration request further includes a display timestamp of the interactive branch node information, and the first processing unit is configured to: and determining the writing position of the interactive node information in the subtitle track according to the display timestamp. And writing the interactive node information into the subtitle track according to the writing position.
Further, on the basis of any one of the above embodiments, the apparatus further includes: the device comprises an initial video acquisition module and a writing module. The initial video acquisition module is used for acquiring an initial video clip selected by a user. And the writing module is used for writing the initial video clip into a preset video track in the video file.
Fig. 11 is a schematic structural diagram of a video processing apparatus according to a seventh embodiment of the disclosure, and as shown in fig. 11, the apparatus includes: a receiving module 71, an interactive video file acquiring module 72, a display module 73 and a control module 74. The receiving module 71 is configured to receive an interactive video playing request sent by a user, where the interactive video playing request includes interactive video identification information. An interactive video file obtaining module 72, configured to obtain, according to the interactive video playing request, an interactive video file whose configuration is completed and which corresponds to the interactive video identification information; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks. And the display module 73 is configured to control the display interface to display the interactive node information if it is determined that the display condition of the interactive branch node information is met. And the control module 74 is configured to, in response to a confirmation operation of the user on a target interaction option in the interaction node information, control the video track in which the target video clip corresponding to the target interaction option is written to play the target video clip.
Further, on the basis of the seventh embodiment, the apparatus further includes: the device comprises a judging module and a playing module, wherein the judging module is used for judging whether a preset video track before the interactive branch node information is written into the position comprises an initial video clip or not. And the playing module is used for playing the initial video if the initial video clip is determined to be included.
Further, on the basis of the seventh embodiment, the apparatus further includes: the device comprises a detection module and a determination module, wherein the detection module is used for judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node in the process of playing the initial video clip or the target video clip. And the determining module is used for determining that the display condition of the interactive branch node information is met if the playing time is consistent with the display time stamp of the interactive branch node.
Fig. 12 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present disclosure, and as shown in fig. 12, the electronic device 800 may be a terminal device or a server. Among them, the terminal Device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a Digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a car terminal (e.g., car navigation terminal), etc., and a fixed terminal such as a Digital TV, a desktop computer, etc. The electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the electronic device 800 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage device 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are also stored. The processing apparatus 801, the ROM802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 807 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 12 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video processing method, including:
acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
respectively writing target video clips corresponding to interaction options to be selected in the same interaction branch node and corresponding interaction node information into different tracks in the same video file;
and generating the configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks in the same video file.
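To make the first-aspect flow above concrete, the following is a minimal illustrative sketch in Python. The class names, the use of plain dictionaries to stand in for the tracks of a single container file, and the representation of the interactive node information as a timed cue are assumptions made for illustration only, not a definitive implementation of the disclosed method.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InteractiveBranchNode:
    """Interactive branch node information: an interactive question and at least
    two corresponding interaction options to be selected."""
    question: str
    options: List[str]              # interaction options to be selected
    display_timestamp: float = 0.0  # moment (seconds) at which the node is shown

@dataclass
class ConfiguredInteractiveVideo:
    """A single (logical) video file whose tracks hold the clips and node info."""
    video_tracks: Dict[int, str] = field(default_factory=dict)  # track index -> clip path
    subtitle_track: List[dict] = field(default_factory=list)    # timed cues with node info

def configure_interactive_video(node: InteractiveBranchNode,
                                target_clips: Dict[str, str]) -> ConfiguredInteractiveVideo:
    """Write the target clip of every option to its own video track and the
    interactive node information to the subtitle track of the same file."""
    configured = ConfiguredInteractiveVideo()
    for track_index, option in enumerate(node.options):
        configured.video_tracks[track_index] = target_clips[option]
    configured.subtitle_track.append({
        "time": node.display_timestamp,
        "question": node.question,
        "options": node.options,
    })
    return configured
```

For example, configure_interactive_video(InteractiveBranchNode("Open the door?", ["yes", "no"], 12.0), {"yes": "open.mp4", "no": "walk_away.mp4"}) yields one object standing in for a single video file with two video tracks and one subtitle cue.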
According to one or more embodiments of the present disclosure, the obtaining at least one interactive branch node information and a target video segment corresponding to each interactive option to be selected in the interactive branch node information includes: receiving an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one interactive branch node information; responding to the interactive video configuration request, and displaying a video clip to be selected corresponding to each interactive option to be selected in the interactive branch node information; and taking the video clip to be selected that the user confirms for each interactive option to be selected as the corresponding target video clip.
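A possible shape of this acquisition step, again as a hedged sketch: the configuration request carries the branch node information, the candidate clips for each option are shown to the user, and the clip the user confirms becomes that option's target clip. The `confirm` callback and the dictionary keys used here are hypothetical placeholders for the actual UI and request format.

```python
def acquire_target_clips(config_request: dict, candidate_clips: dict, confirm) -> dict:
    """For every interactive branch node in the configuration request, display the
    candidate clips of each option and keep the clip the user confirms as the
    target clip; `confirm(option, clips)` stands in for the confirmation UI."""
    targets = {}
    for node in config_request["interactive_branch_nodes"]:
        for option in node["options"]:
            clips = candidate_clips.get(option, [])   # clips to be selected for this option
            targets[option] = confirm(option, clips)  # user-confirmed target clip
    return targets
```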
According to one or more embodiments of the present disclosure, writing the target video segment corresponding to each interactive option to be selected in the same interactive branch node and the corresponding interactive node information into different tracks in the same video file respectively includes: writing target video clips corresponding to interaction options to be selected in the same interaction branch node into different video tracks in the same video file; and writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
According to one or more embodiments of the present disclosure, writing a target video segment corresponding to each interactive option to be selected in the same interactive branch node into different video tracks in the same video file includes: determining a video track to be written according to the number of the target video clips; and writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
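One plausible reading of the time alignment strategy, sketched under the assumption that every branch clip is written to its own track starting at the same branch time so that any selected track continues seamlessly from the branch point (the field names are illustrative):

```python
def write_clips_time_aligned(target_clips: list, branch_start: float) -> dict:
    """Determine one video track per target clip and align all clips to the same
    start time, so that whichever track is selected later plays on from the
    branch point without a gap."""
    tracks = {}
    for track_index, clip_path in enumerate(target_clips):  # track count = clip count
        tracks[track_index] = {
            "clip": clip_path,
            "start": branch_start,  # identical start time on every track
        }
    return tracks
```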
According to one or more embodiments of the present disclosure, the interactive video configuration request further includes a display timestamp of the interactive branch node information; writing the interactive node information corresponding to the same interactive branch node into the caption track in the same video file, including: determining the writing position of the interactive node information in the subtitle track according to the display timestamp; and writing the interactive node information into the subtitle track according to the writing position.
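As a sketch of how the display timestamp could fix the writing position in the subtitle track, the node information is encoded below as a JSON cue and inserted so that cues stay ordered by time; the cue format is an assumption, not the format used by the disclosure.

```python
import bisect
import json

def write_node_to_subtitle_track(subtitle_track: list, node: dict) -> int:
    """Determine the writing position from the node's display timestamp and
    insert a cue carrying the interactive node information at that position,
    keeping the subtitle track sorted by time."""
    cue = {
        "time": node["display_timestamp"],
        "text": json.dumps({"question": node["question"], "options": node["options"]}),
    }
    cue_times = [existing["time"] for existing in subtitle_track]
    position = bisect.bisect_left(cue_times, cue["time"])
    subtitle_track.insert(position, cue)
    return position
```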
According to one or more embodiments of the present disclosure, before the obtaining at least one interactive branch node information and a target video segment corresponding to each interactive option to be selected in the interactive branch node information, the method further includes: acquiring an initial video clip selected by a user; and writing the initial video clip into a preset video track in the video file.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video processing method including: receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information; acquiring a configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks; if the display condition of the interactive branch node information is determined to be met, controlling a display interface to display the interactive node information; and responding to the confirmation operation of the user on the target interaction option in the interaction node information, and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
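The playback side of the second aspect could look roughly like the sketch below. `load_file`, `player`, `ui`, and the attributes of the loaded object (`video_tracks`, `subtitle_track`, `initial_track`, `option_to_track`) are placeholders for real components and are assumptions of this illustration.

```python
def play_interactive_video(play_request: dict, load_file, player, ui) -> None:
    """Load the configured interactive video file by its identification information,
    display the interactive node information when its display condition is met, and
    switch playback to the video track holding the clip of the confirmed option."""
    video = load_file(play_request["interactive_video_id"])

    # If a preset track before the branch position holds an initial clip, play it first.
    if video.initial_track is not None:
        player.play_track(video.initial_track)

    for cue in video.subtitle_track:
        player.wait_until(cue["time"])     # display condition of the branch node is met
        choice = ui.show_options(cue)      # user confirms a target interaction option
        player.play_track(video.option_to_track[choice])  # play that option's target clip
```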
According to one or more embodiments of the present disclosure, after the obtaining, according to the interactive video playing request, the configured interactive video file corresponding to the interactive video identification information, the method further includes: judging whether a preset video track before the interactive branch node information is written into the position comprises an initial video clip or not; and if the initial video clip is determined to be included, playing the initial video clip.
According to one or more embodiments of the present disclosure, further comprising: judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node or not in the process of playing the initial video clip or the target video clip; and if the playing time is consistent with the display time stamp of the interactive branch node, determining that the display condition of the interactive branch node information is met.
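A minimal sketch of this display condition check, assuming the playback time is polled periodically and a small tolerance (roughly one frame here, an assumed value) absorbs the granularity of those updates:

```python
def display_condition_met(play_time: float, display_timestamp: float,
                          tolerance: float = 0.04) -> bool:
    """The interactive branch node information is shown when the current playback
    time of the initial or target clip coincides with the node's display
    timestamp, within a small tolerance."""
    return abs(play_time - display_timestamp) <= tolerance
```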
In a third aspect, according to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
the acquisition module is used for acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
the processing module is used for writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node and the corresponding interaction node information into different tracks in the same video file respectively;
and the generating module is used for generating the configured interactive video file according to the target video clips and the interactive node information corresponding to the tracks in the same video file.
According to one or more embodiments of the present disclosure, the obtaining module is configured to: receive an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one interactive branch node information; respond to the interactive video configuration request, and display a video clip to be selected corresponding to each interactive option to be selected in the interactive branch node information; and take the video clips to be selected that the user confirms for the interaction options to be selected as the corresponding target video clips.
According to one or more embodiments of the present disclosure, the processing module includes: the first processing unit is used for writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node into different video tracks in the same video file; and the second processing unit is used for writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
According to one or more embodiments of the present disclosure, the first processing unit is configured to: determining a video track to be written according to the number of the target video clips; and writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
According to one or more embodiments of the present disclosure, the interactive video configuration request further includes a display timestamp of the interactive branch node information; the first processing unit is configured to: determining the writing position of the interactive node information in the subtitle track according to the display timestamp; and writing the interactive node information into the subtitle track according to the writing position.
According to one or more embodiments of the present disclosure, the apparatus further comprises: the initial video acquisition module is used for acquiring an initial video clip selected by a user; and the writing module is used for writing the initial video clip into a preset video track in the video file.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
the receiving module is used for receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information;
the interactive video file acquisition module is used for acquiring the configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks, and target video clips corresponding to interaction options to be selected in the same interactive branch node and corresponding interactive node information are correspondingly written in different tracks;
the display module is used for controlling a display interface to display the interactive node information if the display condition of the interactive branch node information is determined to be met;
and the control module is used for responding to the confirmation operation of the user on the target interaction option in the interaction node information and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
According to one or more embodiments of the present disclosure, the apparatus further comprises: the judging module is used for judging whether a preset video track before the interactive branch node information is written into the position comprises an initial video clip or not; and the playing module is used for playing the initial video clip if the initial video clip is determined to be included.
According to one or more embodiments of the present disclosure, further comprising: the detection module is used for judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node or not in the process of playing the initial video clip or the target video clip; and the determining module is used for determining that the display condition of the interactive branch node information is met if the playing time is consistent with the display timestamp of the interactive branch node.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method as set forth in the first or second aspect and various possible designs of the first or second aspect above.
In a sixth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement a video processing method as set forth in the first or second aspect and various possible designs of the first or second aspect.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (18)

1. A video processing method, comprising:
acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
respectively writing target video clips corresponding to interaction options to be selected in the same interaction branch node and corresponding interaction node information into different tracks in the same video file;
generating a configured interactive video file according to target video clips and interactive node information corresponding to all tracks in the same video file;
the obtaining of at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information includes:
receiving an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one interactive branch node information;
responding to the interactive video configuration request, and displaying a video clip to be selected corresponding to each interactive option to be selected in the interactive branch node information;
and taking the video clips to be selected that the user confirms for the interaction options to be selected as the corresponding target video clips.
2. The method of claim 1, wherein the writing the target video segment corresponding to each interactive option to be selected and the corresponding interactive node information in the same interactive branch node into different tracks in the same video file respectively comprises:
writing target video clips corresponding to interaction options to be selected in the same interaction branch node into different video tracks in the same video file;
and writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
3. The method of claim 2, wherein writing the target video segments corresponding to the interactive options to be selected in the same interactive branch node into different video tracks in the same video file comprises:
determining a video track to be written according to the number of the target video clips;
and writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
4. The method of claim 2, wherein the interactive video configuration request further includes a display time stamp of the interactive branch node information;
writing the interactive node information corresponding to the same interactive branch node into the subtitle track in the same video file comprises the following steps:
determining the writing position of the interactive node information in the subtitle track according to the display timestamp;
and writing the interactive node information into the subtitle track according to the writing position.
5. The method according to any one of claims 1 to 4, wherein before the obtaining at least one interactive branch node information and the target video segment corresponding to each interactive option to be selected in the interactive branch node information, the method further comprises:
acquiring an initial video clip selected by a user;
and writing the initial video clip into a preset video track in the video file.
6. A video processing method, comprising:
receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information;
acquiring a configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks; target video clips corresponding to all interaction options to be selected in the same interaction branch node and corresponding interaction node information are correspondingly written in different tracks; the target video clips are the video clips to be selected confirmed by the user for the interaction options to be selected in the interaction branch node information, wherein the video clips to be selected corresponding to all the interaction options to be selected are displayed in response to an interactive video configuration request sent by the user, and the interactive video configuration request comprises at least one piece of interaction branch node information;
if the display condition of the interactive branch node information is determined to be met, controlling a display interface to display the interactive node information;
and responding to the confirmation operation of the user on the target interaction option in the interaction node information, and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
7. The method according to claim 6, wherein after acquiring the configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request, the method further comprises:
judging whether a preset video track before the interactive branch node information is written into the position comprises an initial video clip or not;
and if the initial video clip is determined to be included, playing the initial video clip.
8. The method of claim 7, further comprising:
judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node or not in the process of playing the initial video clip or the target video clip;
and if the playing time is consistent with the display time stamp of the interactive branch node, determining that the display condition of the interactive branch node information is met.
9. A video processing apparatus, comprising:
the acquisition module is used for acquiring at least one interactive branch node information and a target video clip corresponding to each interactive option to be selected in the interactive branch node information; the interactive branch node information comprises an interactive question and at least two corresponding interactive options to be selected;
the processing module is used for writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node and the corresponding interaction node information into different tracks in the same video file respectively;
the generating module is used for generating configured interactive video files according to the target video clips and the interactive node information corresponding to the tracks in the same video file;
the acquisition module is used for:
receiving an interactive video configuration request sent by a user, wherein the interactive video configuration request comprises at least one piece of interactive branch node information;
responding to the interactive video configuration request, and displaying a video clip to be selected corresponding to each interactive option to be selected in the interactive branch node information;
and taking the video clips to be selected that the user confirms for the interaction options to be selected as the corresponding target video clips.
10. The apparatus of claim 9, wherein the processing module comprises:
the first processing unit is used for writing the target video clips corresponding to the interaction options to be selected in the same interaction branch node into different video tracks in the same video file;
and the second processing unit is used for writing the corresponding interactive node information in the same interactive branch node into the subtitle track in the same video file.
11. The apparatus of claim 10, wherein the first processing unit is configured to:
determining a video track to be written according to the number of the target video clips;
and writing the target video clips corresponding to the interactive options to be selected into the video track to be written according to a time alignment strategy.
12. The apparatus of claim 10, wherein the interactive video configuration request further includes a display time stamp of the interactive branch node information;
the first processing unit is configured to:
determining the writing position of the interactive node information in the subtitle track according to the display timestamp;
and writing the interactive node information into the subtitle track according to the writing position.
13. The apparatus according to any one of claims 9-12, further comprising:
the initial video acquisition module is used for acquiring an initial video clip selected by a user;
and the writing module is used for writing the initial video clip into a preset video track in the video file.
14. A video processing apparatus, comprising:
the receiving module is used for receiving an interactive video playing request sent by a user, wherein the interactive video playing request comprises interactive video identification information;
the interactive video file acquisition module is used for acquiring the configured interactive video file corresponding to the interactive video identification information according to the interactive video playing request; the configured interactive video file comprises a plurality of tracks; target video clips corresponding to all interaction options to be selected in the same interaction branch node and corresponding interaction node information are correspondingly written in different tracks; the target video clips are the video clips to be selected confirmed by the user for the interaction options to be selected in the interaction branch node information, wherein the video clips to be selected corresponding to all the interaction options to be selected are displayed in response to an interactive video configuration request sent by the user, and the interactive video configuration request comprises at least one piece of interaction branch node information;
the display module is used for controlling a display interface to display the interactive node information if the display condition of the interactive branch node information is determined to be met;
and the control module is used for responding to the confirmation operation of the user on the target interaction option in the interaction node information and controlling the video track written with the target video clip corresponding to the target interaction option to play the target video clip.
15. The apparatus of claim 14, further comprising:
the judging module is used for judging whether a preset video track before the interactive branch node information is written into the position comprises an initial video clip or not;
and the playing module is used for playing the initial video clip if the initial video clip is determined to be included.
16. The apparatus of claim 15, further comprising:
the detection module is used for judging whether the playing time is consistent with the display time stamp of the corresponding interactive branch node or not in the process of playing the initial video clip or the target video clip;
and the determining module is used for determining that the display condition of the interactive branch node information is met if the playing time is consistent with the display time stamp of the interactive branch node.
17. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method of any of claims 1-5 or 6-8.
18. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the video processing method of any of claims 1-5 or 6-8.
CN202011034493.2A 2020-09-27 2020-09-27 Video processing method, device, equipment and computer readable storage medium Active CN112165652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011034493.2A CN112165652B (en) 2020-09-27 2020-09-27 Video processing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011034493.2A CN112165652B (en) 2020-09-27 2020-09-27 Video processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112165652A CN112165652A (en) 2021-01-01
CN112165652B true CN112165652B (en) 2022-09-20

Family

ID=73860520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011034493.2A Active CN112165652B (en) 2020-09-27 2020-09-27 Video processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112165652B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037960B (en) * 2021-03-04 2024-04-02 上海哔哩哔哩科技有限公司 Interactive video generation method and device
CN114125501A (en) * 2021-10-30 2022-03-01 杭州当虹科技股份有限公司 Interactive video generation method and playing method and device thereof
CN115460468B (en) * 2022-08-10 2023-09-15 北京爱奇艺科技有限公司 Interactive video file creation method, interactive video playing method, device, electronic equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070099684A1 (en) * 2005-11-03 2007-05-03 Evans Butterworth System and method for implementing an interactive storyline
KR101560727B1 (en) * 2014-04-07 2015-10-15 네이버 주식회사 Service method and system for providing multi-track video contents
CN108683952B (en) * 2018-05-30 2020-10-27 互影科技(北京)有限公司 Video content segment pushing method and device based on interactive video
CN109509376A (en) * 2018-12-18 2019-03-22 广雅传媒(武汉)有限公司 A kind of interactive mode psychological health education system and method

Also Published As

Publication number Publication date
CN112165652A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN109640188B (en) Video preview method and device, electronic equipment and computer readable storage medium
CN111246275B (en) Comment information display and interaction method and device, electronic equipment and storage medium
CN112165652B (en) Video processing method, device, equipment and computer readable storage medium
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN111291220B (en) Label display method and device, electronic equipment and computer readable medium
CN109640129B (en) Video recommendation method and device, client device, server and storage medium
US20240040199A1 (en) Video-based interaction method and apparatus, storage medium and electronic device
WO2022007724A1 (en) Video processing method and apparatus, and device and storage medium
CN111510760A (en) Video information display method and device, storage medium and electronic equipment
CN113259740A (en) Multimedia processing method, device, equipment and medium
WO2022007722A1 (en) Display method and apparatus, and device and storage medium
JP2023537772A (en) Video recommendation method, apparatus, electronic device and storage medium
CN110784753B (en) Interactive video playing method and device, storage medium and electronic equipment
CN111246304A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN113507637A (en) Media file processing method, device, equipment, readable storage medium and product
CN114707065A (en) Page display method, device, equipment, computer readable storage medium and product
CN113727170A (en) Video interaction method, device, equipment and medium
CN114470751A (en) Content acquisition method and device, storage medium and electronic equipment
CN114707092A (en) Live content display method, device, equipment, readable storage medium and product
CN111246245A (en) Method and device for pushing video aggregation page, server and terminal equipment
CN113365010A (en) Volume adjusting method, device, equipment and storage medium
WO2023088484A1 (en) Method and apparatus for editing multimedia resource scene, device, and storage medium
CN111338729A (en) Method, device, medium and electronic equipment for playing view
CN115412759A (en) Information display method, device, equipment, computer readable storage medium and product
CN115103236A (en) Image record generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant