CN113132808B - Video generation method and device and computer readable storage medium - Google Patents

Video generation method and device and computer readable storage medium

Info

Publication number
CN113132808B
Authority
CN
China
Prior art keywords
video
interactive
page
event
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911401997.0A
Other languages
Chinese (zh)
Other versions
CN113132808A (en)
Inventor
雷彬 (Lei Bin)
刘嘉鑫 (Liu Jiaxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911401997.0A
Publication of CN113132808A
Application granted
Publication of CN113132808B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204 Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of this application disclose a video generation method, a video generation apparatus, and a computer-readable storage medium. A target video is played on a video playing page that includes a video generation control; when a triggering operation on the video generation control is detected, an interaction event between the user and the video playing page is acquired; and an interactive video is generated based on the interaction event, the interactive video containing the video pictures of the user's interaction with the video playing page within a target time period. The scheme improves the efficiency of interactive video generation.

Description

Video generation method and device and computer readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video generation method and apparatus, and a computer-readable storage medium.
Background
With the development of communication technology, a user's viewing experience can be enhanced by interacting with a video while watching it, and an interactive video can be generated from that interaction and shared with other users.
In researching and practicing the related technology, the inventors of this application found that an interactive video lets other users learn in advance how to interact with a video, which improves playback efficiency; however, the existing generation process is overly complex, usually requiring manual screenshots and similar steps, so interactive videos are generated inefficiently.
Disclosure of Invention
The embodiment of the application provides a video generation method, a video generation device, computer equipment and a computer readable storage medium, which can improve the efficiency of interactive video generation.
The embodiment of the application provides a video generation method, which comprises the following steps:
playing a target video on a video playing page, wherein the video playing page comprises a video generating control;
monitoring an interaction event between a user and the video playing page when a trigger operation aiming at the video generating control is detected;
and generating an interactive video based on the monitored interactive events, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period.
Correspondingly, an embodiment of the present application provides a video generating apparatus, including:
the playing unit is used for playing a target video on a video playing page, and the video playing page comprises a video generating control;
the monitoring unit is used for monitoring an interaction event between a user and the video playing page when the triggering operation aiming at the video generating control is detected;
and the generating unit is used for generating an interactive video based on the monitored interactive events, wherein the interactive video comprises a video picture interacted between the user and the video playing page in a target time period.
In one embodiment, the monitoring unit includes:
the display subunit is configured to display, when a trigger operation for the video generation control is detected, monitoring start indication information on the video playing page, where the monitoring start indication information is used to indicate a time distance between current time and monitoring start time;
and the first monitoring subunit is used for monitoring the interaction event between the user and the video playing page when the time distance is detected to reach a preset time distance.
In one embodiment, the monitoring unit includes:
and the second monitoring subunit is used for monitoring an interaction event between a user and the interaction control in the video playing area when the triggering operation aiming at the video generating control is detected.
In an embodiment, the video generating apparatus further includes:
and the interactive control display unit is used for displaying the interactive control in the video playing area when detecting that the current playing time of the target video reaches the preset playing time.
In an embodiment, the video generating apparatus further includes:
and the recording and displaying unit is used for recording and displaying the starting video time and the ending video time of the interaction event aiming at the interaction control in the interaction time display area when the interaction operation of the user and the interaction control is monitored.
In an embodiment, the generating unit includes:
the acquisition subunit is used for acquiring the address information of the target video and the page attribute information of the video playing page when the monitoring is finished;
the playing subunit is used for replaying the target video in a virtual page of the pageless browser according to the address information and the page attribute information;
the simulation and interception subunit is used for simulating the interaction event through the virtual page in the process of replaying the target video, and intercepting the video pictures corresponding to the interaction event to obtain a plurality of interaction video pictures;
and the generating subunit is used for generating an interactive video based on the interactive video pictures.
In an embodiment, the playing subunit is further configured to set page attribute information of a virtual page in the pageless browser according to the page attribute information, so as to obtain a set virtual page; and according to the address information, the target video is replayed in the set virtual page.
In an embodiment, the simulation and interception subunit is further configured to, during the replaying of the target video, traverse the monitored interactivity event based on the start video time and the end video time of the current interactivity event; and simulating the current interaction event in the virtual page for the traversed current interaction event.
In an embodiment, the simulation and interception subunit is further configured to, for the traversed current interactivity event, simulate an interactive operation of the current interactivity event; and responding to the interactive operation through the interface of the non-page browser so as to simulate the current interactive event.
In an embodiment, the simulation and capture subunit is further configured to obtain a plurality of video frames captured by the pageless browser from the target video; and determining a video picture corresponding to the interactive event from the plurality of video pictures based on the interactive event to obtain a plurality of interactive video pictures.
In an embodiment, the simulating and intercepting subunit is further configured to obtain, based on the interactive event, interactive area attribute information of a video picture corresponding to the interactive event; and extracting the interaction area of the video picture corresponding to each interaction event according to the interaction area attribute information to obtain a plurality of interaction video pictures.
In an embodiment, the generating subunit is further configured to sequence the plurality of interactive video pictures based on a start video time and an end video time of the interactive operation, so as to obtain a sequence of sequenced interactive video picture frames; and generating an interactive video according to the sequenced interactive video frame sequences.
In an embodiment, the generating unit includes:
the assembling subunit is used for assembling the interactive events to obtain the assembled interactive events;
the sending subunit is used for sending the assembled interactive event to a server, wherein the server generates an interactive video according to the assembled interactive event and returns the interactive video after generating the interactive video;
and the receiving subunit is used for receiving the interactive video returned by the server.
Accordingly, embodiments of the present application further provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the steps in the video generation method provided in any of the embodiments of the present application.
Correspondingly, an embodiment of the present application further provides a computer-readable storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the steps in the video generation method provided in any of the embodiments of the present application.
According to the method and the device, the target video can be played on the video playing page, and the video playing page comprises a video generating control; monitoring an interaction event between a user and the video playing page when a trigger operation aiming at the video generating control is detected; and generating an interactive video based on the monitored interactive events, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period. According to the scheme, the interactive events between the user and the video playing page can be monitored through the triggering operation aiming at the video generating control, so that the interactive video corresponding to the interactive events is generated, the operation process is simple and clear, and the efficiency of generating the interactive video can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a scene schematic diagram of a video generation method provided in an embodiment of the present application;
fig. 2 is a detailed schematic diagram of a video generation method provided in an embodiment of the present application;
fig. 3 is a flowchart of a video generation method provided by an embodiment of the present application;
fig. 4 is a schematic view of a video playing page of a video generation method provided in an embodiment of the present application;
FIG. 5 is a flowchart illustrating an interaction operation of a video generation method according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating interaction event recording of a video generation method according to an embodiment of the present application;
fig. 7 is a schematic diagram of target video replay in a video generation method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a blockchain system according to an embodiment of the present application;
fig. 9 is another flowchart of a video generation method provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a video generation apparatus provided in an embodiment of the present application;
Fig. 11 is another schematic structural diagram of a video generation apparatus provided in an embodiment of the present application;
Fig. 12 is another schematic structural diagram of a video generation apparatus provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a video generation method, a video generation device, computer equipment and a computer readable storage medium. Specifically, the embodiment of the application provides a video generation device suitable for computer equipment. The computer device may be a terminal or a server, and the terminal may be a mobile phone, a tablet computer, a notebook computer, and the like. The server may be a single server or a server cluster composed of a plurality of servers.
The video generation method of the present application will be described by taking a computer device as an example.
The video generation method in the embodiment of the present application may be executed on a terminal, or may be executed by the terminal and a server together. The above examples should not be construed as limiting the present application.
The embodiment of the present application may be executed by a terminal, for example, referring to fig. 1, when the video generation method is executed on the terminal, the terminal may play a target video on a video play page, where the video play page includes a video generation control; when the triggering operation aiming at the video generation control is detected, monitoring an interaction event between a user and the video playing page; and generating an interactive video based on the monitored interactive event, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period.
The embodiment of the application can also be executed by the terminal and the server together, for example, referring to fig. 2, taking the example that the terminal and the server execute the video generation method together, the terminal can play the target video on the video playing page, and the video playing page includes a video generation control; when the triggering operation aiming at the video generation control is detected, the terminal monitors an interaction event between a user and the video playing page; the terminal sends the interactive event to a server; and the server generates an interactive video based on the received interactive event sent by the terminal, wherein the interactive video comprises a video picture interacted between the user and the video playing page in a target time period.
Therefore, the interactive video generating method and the interactive video generating device can monitor the interactive events between the user and the video playing page through the triggering operation aiming at the video generating control to generate the interactive video corresponding to the interactive events, the operation process is simple and clear, and the interactive video generating efficiency can be improved.
The following detailed descriptions are given separately, and it should be noted that the description sequence of the following examples is not intended to limit the preferred sequence of the examples.
Embodiments of the present application will be described from the perspective of a video generation apparatus, which may be specifically integrated in a terminal.
An embodiment of the present application provides a video generation method, which may be executed by a terminal, and as shown in fig. 3, a specific flow of the video generation method may be as follows:
101. and playing the target video on a video playing page, wherein the video playing page comprises a video generation control.
The target video may be an interactive video or another type of video. An interactive video is a relatively new video type: while watching it, the user interacts with the video, which strengthens sensory feedback, lets the user take part in the plot, and provides a richer viewing experience. The target video may also simply be an ordinary video.
The video generation control lets the user trigger monitoring of interaction events between the user and the video playing page so that an interactive video can be generated; for example, the user may start such monitoring by clicking, sliding, or double-clicking the video generation control.
For example, as shown in fig. 4, the video playing page includes a video playing area and a playing setting area. A video is played in the video playing area, and the playing setting area provides an attribute adjusting control with which the user can adjust the window size of the video playing area, for instance by clicking or sliding on that control. The page also has an interaction time display area for showing when the user interacts with the target video. Suppose a dancing character appears while the target video plays: the playback time at which the dance begins is shown in the interaction time display area as the start time of the interaction, and several interaction controls may appear while the character dances, for example controls that make the character jump up, turn left, wave an arm to the right, leap forward, or crouch backward. The user completes the dance by clicking or sliding these interaction controls, and the playback time at which the dance ends may be taken as the end time of the interaction.
102. And when the triggering operation aiming at the video generation control is detected, monitoring an interaction event between a user and the video playing page.
An interaction event is an event produced by an interactive operation that the user performs on the video playing page while the target video is playing, where the operation is one a control can recognize. For example, during playback of an interactive video the user can use input devices such as a mouse or keyboard to interact with a character, a car, or a door or window of a house in the target video; when the user clicks the control corresponding to such interactive content on the video playing page, an interaction event is obtained, and so on.
In an embodiment, clicking, sliding, or a similar operation on the video generation control of the video playback page may trigger monitoring of interaction events between the user and the video playback page. Specifically, the step "monitoring an interaction event between the user and the video playback page when the triggering operation for the video generation control is detected" may include:
when the triggering operation aiming at the video generation control is detected, displaying monitoring starting indication information on the video playing page, wherein the monitoring starting indication information is used for indicating the time distance between the current time and the monitoring starting time;
And when the time distance is detected to reach the preset time distance, monitoring the interaction event of the user and the video playing page.
The monitoring start indication information indicates when monitoring of interaction events between the user and the video playing page will begin; it may show the time distance between the current time and the monitoring start time, for example as a countdown. "The time distance reaches the preset time distance" can then be understood as follows: if the countdown starts at 3 seconds, the preset time distance is 3 seconds, and when the countdown changes from 3 seconds to 0 seconds, monitoring of interaction events between the user and the video playing page begins.
For example, a target video is played on the video playing page and the user triggers the video generation control; monitoring start countdown indication information then appears, and when the countdown ends, monitoring of interaction events between the user and the video playing page begins. Alternatively, as shown in fig. 5, triggering the video generation control may first start a countdown of X seconds (for example, 3 seconds) that gives the user time to prepare, after which a new countdown (for example, 10 seconds) may appear; the target video is then played, and monitoring of the user's interaction with the target video, including the mouse track and mouse and keyboard operations, begins.
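As a purely illustrative sketch (not part of the patent text), the countdown-then-monitor flow described above could be wired up in a web player roughly as follows; the indicator element, the preset countdown length, and the onStart callback are all assumptions:

```typescript
// Illustrative sketch only; element handling, the countdown length, and onStart are assumptions.
const PRESET_COUNTDOWN_SECONDS = 3; // the "preset time distance"

function onGenerateControlTriggered(page: HTMLElement, onStart: () => void): void {
  let remaining = PRESET_COUNTDOWN_SECONDS;
  const indicator = document.createElement("div"); // the monitoring start indication information
  indicator.textContent = `Recording starts in ${remaining}s`;
  page.appendChild(indicator);

  const timer = window.setInterval(() => {
    remaining -= 1;
    if (remaining > 0) {
      indicator.textContent = `Recording starts in ${remaining}s`;
    } else {
      // the time distance has reached the preset time distance
      window.clearInterval(timer);
      indicator.remove();
      onStart(); // begin monitoring interaction events with the video playing page
    }
  }, 1000);
}
```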
In an embodiment, the video playing page includes a video playing area and the video generating control, where the video playing area includes a video frame of the target video, and specifically, the step "monitoring an interaction event between a user and the video playing page when a triggering operation for the video generating control is detected" may include:
and when the triggering operation aiming at the video generation control is detected, monitoring the interaction event between the user and the interaction control in the video playing area.
For example, as shown in fig. 6, once the user's triggering operation on the video generation control of the video playback page is detected, monitoring of interaction events between the user and the video playback page may start; that is, mouse events such as clicks, double clicks, scrolling, and mouse movement begin to be recorded. When playback of the target video ends (or the user actively ends it), recording stops, the event listeners can be removed, and a data callback is performed; the recorded data can be placed into a recorded-event list (recordEventList) in a preset format to obtain the interaction events.
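A hedged sketch of the recording step just described; the listener set, the serialized MouseEvent fields, and the stop condition are assumptions, and the entry shape anticipates the recordEventList structure shown next:

```typescript
// Illustrative sketch only; not the patent's code.
function startListening(video: HTMLVideoElement, page: HTMLElement) {
  const recordEventList: Array<{ eventName: string; eventData: unknown; time: number }> = [];

  const record = (e: Event): void => {
    const m = e as MouseEvent;
    recordEventList.push({
      eventName: e.type,                         // click, dblclick, wheel, mousemove...
      eventData: { x: m.clientX, y: m.clientY }, // simplified MouseEvent data
      time: video.currentTime,                   // video time at which the event occurred
    });
  };

  const types = ["click", "dblclick", "wheel", "mousemove"];
  types.forEach((t) => page.addEventListener(t, record));

  // Stop recording when the target video ends (the user could also end it actively),
  // remove the listeners, and hand the list on through a data callback.
  video.addEventListener(
    "ended",
    () => types.forEach((t) => page.removeEventListener(t, record)),
    { once: true },
  );

  return recordEventList;
}
```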
The rough data structure of the recorded-event list (recordEventList) is shown only as an image in the original publication; its fields are described as follows:
eventName indicates the type of the event (for example a "click" event), eventData holds the event data (for example MouseEvent data), and time indicates the point in time at which the event occurred.
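Since the structure is shown only as an image, a plausible TypeScript rendering based purely on the three fields just described is given below; the field types are assumptions:

```typescript
// Types are assumptions; only the three field names come from the description above.
interface RecordedEvent {
  eventName: string;  // type of the event, e.g. "click"
  eventData: unknown; // event data, e.g. MouseEvent data
  time: number;       // point in time at which the event occurred
}

type RecordEventList = RecordedEvent[];
```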
The user can obtain the video pictures and the interactive video simply by clicking or sliding the video generation control, without any other tool; the screenshots and the interactive video can be generated automatically in the background, so the user needs no additional operations.
In an embodiment, when the target video is played to a preset video playing time point, an interaction control may be displayed in the video playing display area, where the interaction control may be used for a user to perform an interaction operation with the video playing page, and the specific steps may include:
and when the current playing time of the target video is detected to reach the preset playing time, displaying the interactive control in the video playing area.
In one embodiment, the video playing page further comprises an interaction time display area; when an interaction operation between the user and the interaction control is monitored, the start video time and the end video time of the interaction event for that interaction control are recorded and displayed in the interaction time display area.
The start video time of an interactive operation is the playback position of the target video at which the interaction between the user and the video playing page begins, and the end video time is the playback position at which that interaction ends; both times are expressed relative to the playback of the target video and can be recorded.
For example, as shown in fig. 4, the video playback page includes an interaction time display area that can be used to display a start video time and an end video time for a user to interact with the video playback page.
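A hedged sketch of recording and displaying these times; how the start and end of the interaction are detected is an assumption, while the times themselves are read from the target video's playback position as described above:

```typescript
// Sketch only; the detection hooks are assumptions.
function trackInteractionWindow(video: HTMLVideoElement, timeDisplayArea: HTMLElement) {
  let startVideoTime = 0;
  return {
    onInteractionStart(): void {
      startVideoTime = video.currentTime; // start time relative to target-video playback
      timeDisplayArea.textContent = `start: ${startVideoTime.toFixed(1)}s`;
    },
    onInteractionEnd(): { startVideoTime: number; endVideoTime: number } {
      const endVideoTime = video.currentTime; // end time relative to target-video playback
      timeDisplayArea.textContent += ` / end: ${endVideoTime.toFixed(1)}s`;
      return { startVideoTime, endVideoTime };
    },
  };
}
```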
103. And generating an interactive video based on the monitored interactive event, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period.
The interactive event refers to an event obtained by monitoring an interactive operation between a user and the video playing page, and the interactive event may include interactive event data, and an interactive video may be generated according to the interactive event.
The interactive video is a video, based on the played target video, that contains the video pictures of the user's interaction with the video playing page; it allows other users to preview how the user interacted with the target video.
For example, after the video generation control is triggered and the target video reaches a preset playing time point, the interactive control displayed in the video playing area is triggered; the preset playing time point is then the start of the target time period, and the time point at which the user's triggering operation on the interactive control ends is the end of the target time period.
In an embodiment, the interactive video may be generated according to the monitored interactive event, and in particular, the step "generating the interactive video based on the monitored interactive event" may include:
acquiring address information of the target video and page attribute information of the video playing page;
according to the address information and the page attribute information, replaying the target video in a virtual page of a pageless browser;
In the process of playing the target video again, simulating the interaction event through the virtual page, and intercepting video pictures corresponding to the interaction event to obtain a plurality of interaction video pictures;
an interactive video is generated based on the plurality of interactive video pictures.
A pageless browser is an interface-free browser, for example a headless browser: it has all the capabilities of a normal browser except the visible interface, and its operations are driven by commands. The everyday steps of using a browser are starting it, opening a web page, and interacting with the page; in a headless browser these steps are executed by a program or script, so a real browsing scenario can be simulated.
For example, as shown in fig. 7 and taking a headless browser as the example: after monitoring ends, the interaction events are available, because the recorded data was placed into the recorded-event list (recordEventList) in a preset format in step 102. The target video is then opened in the headless browser according to its address information, the window size of the browser's virtual page is set according to the page attribute information of the video playing page, and the target video is played again. While it replays, the recorded events are traversed one by one in time order and each interaction event is simulated in the virtual page; the video pictures corresponding to the interaction events can be captured using the headless browser's web page snapshot capability, and finally an interactive video of the pictures of the user's interaction with the video playing page is generated from those pictures.
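The patent does not name a specific headless browser; as a hedged sketch, Puppeteer (an assumption) could replay the target video in a virtual page roughly as follows, with the selector and wait condition also assumed:

```typescript
// Sketch with Puppeteer standing in for the "headless browser".
import puppeteer from "puppeteer";

async function replayTargetVideo(
  videoAddress: string,                        // address information of the target video
  pageAttr: { width: number; height: number }, // page attribute information of the playing page
) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage(); // the "virtual page"
  await page.setViewport({ width: pageAttr.width, height: pageAttr.height });
  await page.goto(videoAddress, { waitUntil: "networkidle2" });
  await page.evaluate(() => document.querySelector("video")?.play()); // replay the target video
  return { browser, page };
}
```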
In an embodiment, the step "replay the target video in a virtual page without a page browser according to the address information and the page attribute information" may specifically include:
setting page attribute information of a virtual page in the page-free browser according to the page attribute information to obtain a set virtual page;
and according to the address information, the target video is replayed in the set virtual page.
For example, the page size of the virtual page in the pageless browser may be set according to the page attribute information, such as according to the page size information, and then the target video may be replayed in the virtual page of the pageless browser according to the address information.
In one embodiment, the step of simulating the interactive event through the virtual page during the process of replaying the target video may include:
traversing the monitored interactive event based on the starting video time and the ending video time of the current interactive event in the process of replaying the target video;
and simulating the current interaction event in the virtual page for the traversed current interaction event.
Here the start video time and the end video time are those of the interactive operation recorded and displayed in the interaction time display area in step 102, and the monitored interaction events can be traversed in the time order given by the start and end video times of the events.
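A minimal sketch of this traversal; the event shape and the per-event simulator passed in are assumptions used only for illustration:

```typescript
// Sketch only.
interface RecordedEvent { eventName: string; eventData: unknown; time: number }

async function traverseAndSimulate(
  recordEventList: RecordedEvent[],
  simulate: (e: RecordedEvent) => Promise<void>,
): Promise<void> {
  // Traverse the monitored events in the time order of their video times.
  const ordered = [...recordEventList].sort((a, b) => a.time - b.time);
  for (const current of ordered) {
    await simulate(current); // simulate the current interaction event in the virtual page
  }
}
```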
In an embodiment, for a traversed interaction event, simulating in a virtual page so as to intercept a video frame corresponding to the interaction event, specifically, the step "for a traversed current interaction event, simulating the current interaction event in the virtual page" may include:
for the traversed current interactive event, simulating the interactive operation of the current interactive event;
and responding to the interactive operation through the interface of the page-free browser to simulate the current interactive event.
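A hedged sketch of responding to a recorded "click" through the headless browser's control interface (Puppeteer assumed); the eventData shape is an assumption:

```typescript
// Sketch only.
import type { Page } from "puppeteer";

async function simulateClick(
  page: Page,
  eventData: { x?: number; y?: number; targetSelector?: string },
): Promise<void> {
  if (eventData.targetSelector) {
    await page.click(eventData.targetSelector);       // find the responding element and trigger its click event
  } else if (eventData.x !== undefined && eventData.y !== undefined) {
    await page.mouse.click(eventData.x, eventData.y); // replay the click at the recorded coordinates
  }
}
```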
In an embodiment, the captured video frames may not be only the video frames corresponding to the interactive event, and therefore, the captured video frames may also be filtered to ensure that the video frames corresponding to the interactive event are obtained, specifically, the step "capturing the video frames corresponding to the interactive event to obtain a plurality of interactive video frames" may include:
acquiring a plurality of video pictures of the target video intercepted by the non-page browser;
and determining a video picture corresponding to the interactive event from the plurality of video pictures based on the interactive event to obtain a plurality of interactive video pictures.
For example, a plurality of video frames may be captured during the process of replaying the target video and simulating the interaction event in the virtual page by using the web page snapshot capability of the headless browser, and then the video frames corresponding to the interaction event are screened out from the captured video frames, so as to obtain a plurality of interaction video frames, wherein the plurality of interaction video frames may be used to generate the interaction video.
In an embodiment, attribute information of the interaction area of the video picture corresponding to an interaction event, for example the side lengths of the interaction area, may be obtained to determine which part of the video picture needs to be extracted as the interactive video picture. Specifically, the step "determining a video picture corresponding to the interactive event from among the plurality of video pictures based on the interactive event, so as to obtain a plurality of interactive video pictures" may include:
acquiring interactive region attribute information of a video picture corresponding to the interactive event based on the interactive event;
and according to the attribute information of the interactive area, extracting the interactive area of the video picture corresponding to each interactive event to obtain a plurality of interactive video pictures.
For example, a video picture captured with the headless browser's web page snapshot function may include regions that do not belong to the area of interest; in that case the interaction-area attribute information of the picture corresponding to the interaction event can be obtained, and the picture can be re-captured according to that attribute information to obtain the interactive video pictures.
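A minimal sketch of extracting only the interaction area (Puppeteer's clip option assumed); the attribute shape of x, y, width, and height is an assumption:

```typescript
// Sketch only.
import type { Page } from "puppeteer";

interface InteractionArea { x: number; y: number; width: number; height: number }

async function captureInteractionArea(page: Page, area: InteractionArea) {
  // Re-capture only the interaction area of the frame, per the interaction-area attribute info.
  return page.screenshot({ type: "png", clip: area });
}
```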
In an embodiment, the step "generating an interactive video based on the plurality of interactive video pictures" may include:
Sequencing the plurality of interactive video pictures based on the initial video time and the final video time of the interactive operation to obtain a sequenced interactive video picture frame sequence;
and generating an interactive video according to the sequenced interactive video frame sequence.
For example, after a plurality of interactive video pictures are sequenced to obtain an interactive video picture frame sequence, the interactive video picture frame sequence can be spliced into a video by using a picture frame sequence-to-video tool.
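The patent only says a "picture frame sequence to video" tool is used; invoking ffmpeg with these flags and this file naming is an assumption sketched for illustration:

```typescript
// Sketch only.
import { execFile } from "node:child_process";

function spliceFramesIntoVideo(frameDir: string, outputFile: string, fps = 5): void {
  // Frames are assumed to be written as frame-0001.png, frame-0002.png, ... in playback order.
  execFile(
    "ffmpeg",
    [
      "-framerate", String(fps),
      "-i", `${frameDir}/frame-%04d.png`,
      "-c:v", "libx264",
      "-pix_fmt", "yuv420p",
      outputFile,
    ],
    (err) => {
      if (err) throw err;
    },
  );
}
```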
In an embodiment, after the monitoring is finished, an interactive video may be generated based on the monitored interactive event, and the specific steps may include:
when the monitoring is finished, acquiring the address information of the target video and the page attribute information of the video playing page;
according to the address information and the page attribute information, replaying the target video in a virtual page of a pageless browser;
in the process of playing the target video again, simulating the interaction event through the virtual page, and intercepting video pictures corresponding to the interaction event to obtain a plurality of interaction video pictures;
an interactive video is generated based on the plurality of interactive video pictures.
In an embodiment, the interactive video may be generated by a server, for example, the server may receive an interactive event sent by the terminal, and then the server generates the interactive video according to the interactive event, specifically, the step "generating the interactive video based on the monitored interactive event" may include:
Assembling the interaction event to obtain an assembled interaction event;
sending the assembled interactive event to a server, wherein the server generates an interactive video according to the assembled interactive event and returns the interactive video after generating the interactive video;
and receiving the interactive video returned by the server.
In step 102, the recorded data is placed into the recorded-event list (recordEventList) in a preset format to obtain the interaction events; after monitoring ends, these events can be assembled and sent to a server, the server generates an interactive video from the assembled events, and finally the interactive video is returned to the terminal.
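A hedged sketch of this terminal-to-server exchange; the endpoint URL and payload fields are hypothetical, not taken from the patent:

```typescript
// Sketch only.
interface RecordedEvent { eventName: string; eventData: unknown; time: number }

async function requestInteractiveVideo(
  recordEventList: RecordedEvent[],
  videoAddress: string,
  pageAttr: { width: number; height: number },
): Promise<Blob> {
  const res = await fetch("/api/interactive-video", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ recordEventList, videoAddress, pageAttr }), // assembled interaction events
  });
  if (!res.ok) throw new Error(`interactive video generation failed: ${res.status}`);
  return res.blob(); // the interactive video returned by the server
}
```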
For example, the terminal may send the assembled interactive event placed in the recorded event list (recordEventList) to the server, as shown in fig. 2, the server may obtain address information of the target video and page attribute information of the video playing page, then perform page attribute setting on the virtual page in the pageless browser according to the page attribute information, for example, setting the side length of the page, may obtain the set virtual page, and then open the previously obtained address information of the target video in the set virtual page to play the target video again.
In the process of replaying the target video, the operations in the recorded-event list are traversed one by one in time order, and the control application programming interface (API) of the pageless browser can be used to respond to the corresponding event (eventName) in the recorded-event list (recordEventList); for example, for a click operation the responding element can be found from the event object information in the event data (eventData) and that element's click event can be triggered.
Once all monitored interaction events have been traversed, the target video has been replayed in the pageless browser and the user's interaction with the video playing page has been simulated; many video pictures can be captured during this process, the pictures corresponding to the interaction events are screened out to obtain the interactive video pictures, and the interactive video can be generated from them.
Optionally, the multiple interactive video pictures may be spliced into an interactive video by using a tool for converting a video by using a picture frame sequence, where the interactive video may include video pictures of the interaction between the user and the video playing page in a target time period, for example, the target time period may be a time period between a time point when the control is triggered and a time point when the monitoring is finished for the video.
The video picture capturing can use the pageless browser's web page snapshot capability, and both the snapshots and the video splicing can run on the server, making full use of the server's performance resources.
In one embodiment, the interactive video may be stored in a blockchain to facilitate later retrieval of the information. As shown in fig. 8, the computer device may be a node of a distributed system, where the distributed system may be a blockchain system formed by many nodes connected through network communication; the nodes can form a peer-to-peer (P2P) network, and any type of computer device, such as a server or a terminal, can become a node of the blockchain system by joining this peer-to-peer network. The blockchain comprises a series of blocks linked in the chronological order of their creation; a new block, once added to the blockchain, cannot be removed, and the blocks record the data submitted by the nodes of the blockchain system.
As can be seen from the above, in the embodiment, the interactive event between the user and the video playing page can be monitored through the triggering operation for the video generating control, so as to generate the interactive video corresponding to the interactive event, the operation process is simple and clear, and the efficiency of generating the interactive video can be improved.
Based on the above description, the video generation method of the present application will be further described below by way of example.
An embodiment of the present application provides a video generation method, where a terminal and a server may jointly execute the method as shown in fig. 9, and a specific flow of the video generation method may be as follows:
201. and the terminal plays the target video on a video playing page, wherein the video playing page comprises a video generation control.
For example, suppose the target video is itself an interactive video. While it plays on the video playing page, the user may interact with it: for instance, when playback reaches a preset time point, a bubble into which text can be entered appears on the video playing page, and the user can interact with the video through the bubble at that point. As shown in fig. 4, the user might type "this is a bubble" into the bubble, or other text such as "today is very sunny", and so on.
The video playing page has a video generation control; after the user clicks or slides this control, interaction events between the user and the video playing page are monitored while the user interacts with the page, for example the operations the user performs on the bubbles that appear in the video playing page.
202. And when the triggering operation aiming at the video generation control is detected, the terminal monitors the interaction event between the user and the video playing page.
For example, still taking an interactive video as the target video and as shown in fig. 4: when a triggering operation on the video generation control of the video playing page is detected, monitoring start indication information, such as a countdown, is displayed on the video playing page to indicate the time distance between the current time and the monitoring start time; when this distance reaches a preset distance, that is, when the countdown changes from its initial X seconds to 0 seconds, monitoring of interaction events between the user and the video playing page begins.
The video playing page includes a video playing area and the video generation control, and the video playing area contains the video picture of the target video. When a triggering operation on the video generation control is detected, interaction events between the user and the interaction controls in the video playing area are monitored during playback. For example, when the current playback time reaches a preset point, a video character A that the user can interact with appears; through the interaction controls on the video playing page the user can instruct character A to run forward, turn backward, take off a hat, and so on, and each of these operations performed through the corresponding interaction control is an interaction event.
Optionally, fig. 4 also shows an interaction time display area, for example in the lower-right corner of the video playing page; when an interaction between the user and an interaction control is monitored, the start video time and the end video time of the corresponding interaction event can be displayed in this area and recorded.
203. And the server generates an interactive video based on the monitored interactive event, wherein the interactive video comprises a video picture interacted between the user and the video playing page in a target time period.
The terminal can send the monitored interaction events to the server once monitoring ends. The server obtains the address information of the target video and the page attribute information of the video playing page, replays the target video in a pageless browser according to them, simulates the interaction events through the virtual page during the replay, and intercepts the video pictures corresponding to the interaction events to obtain a plurality of interactive video pictures; finally, it generates the interactive video based on those pictures.
For example, taking a headless browser as the example of generating the interactive video on the server: the terminal assembles the interaction events and sends the assembled events, placed in the recorded-event list (recordEventList), to the server, as shown in fig. 2. The server obtains the address information of the target video and the page attribute information of the video playing page, replays the target video in a virtual page of the headless browser, traverses the operations in the interaction events in time order, and intercepts the video pictures corresponding to the interaction events to obtain a plurality of interactive video pictures; finally, it splices these pictures into an interactive video and returns it to the terminal.
The server can obtain a plurality of video pictures of the interactive video intercepted by the headless browser, then determine a video picture corresponding to the interactive event from the plurality of video pictures based on the interactive event to obtain a plurality of interactive video pictures, and finally, the server can generate the interactive video based on the plurality of interactive video pictures.
The more detailed obtaining process of the interactive video pictures may be that, based on the interactive event, the interactive area attribute information of the video pictures corresponding to the interactive event is obtained, and then, according to the interactive area attribute information, interactive area extraction is performed on the video pictures corresponding to each interactive event, so as to obtain a plurality of interactive video pictures.
The interactive video generation may be performed by sequencing the plurality of interactive video frames based on the start video time and the end video time of the interactive operation to obtain a sequence of sequenced interactive video frames, and generating the interactive video according to the sequence of sequenced interactive video frames.
Optionally, the page attribute information of the virtual page in the pageless browser may be specifically set according to the page attribute information to obtain a set virtual page, and then the target video is played again in the set virtual page according to the address information.
Optionally, during the process of playing back the target video, the monitored interactivity event may be traversed based on the start video time and the end video time of the current interactivity event, for the traversed current interactivity event, the current interactivity event may be simulated in the virtual page, for example, for the traversed current interactivity event, the interactivity operation of the current interactivity event may be simulated, and the interactivity operation may be responded through the interface of the headless browser to simulate the current interactivity event.
As can be seen from the above, in the embodiment, the interactive event between the user and the video playing page is monitored through the triggering operation for the video generating control, so as to generate the interactive video corresponding to the interactive event, the operation process is simple and clear, and the efficiency of generating the interactive video can be improved.
In order to better implement the above method, correspondingly, the embodiment of the present application further provides a video generating apparatus, where the video generating apparatus may be specifically integrated in a terminal, and referring to fig. 10, the video generating apparatus may include a playing unit 301, a monitoring unit 302, and a generating unit 303, as follows:
(1) a playback unit 301;
the playing unit 301 is configured to play the target video on a video playing page, where the video playing page includes a video generation control.
(2) A listening unit 302;
a monitoring unit 302, configured to monitor an interaction event between a user and the video playback page when a trigger operation for the video generation control is detected.
In an embodiment, the monitoring unit 302 includes:
a display subunit 3021, configured to display, on the video playback page, monitoring start indication information when a trigger operation for the video generation control is detected, where the monitoring start indication information is used to indicate a time distance between a current time and a monitoring start time;
The first monitoring subunit 3022 is configured to monitor an interaction event between the user and the video playback page when it is detected that the time distance reaches a preset time distance.
In one embodiment, the monitoring unit 302 includes:
the second monitoring subunit 3023 is configured to monitor an interaction event between the user and the interaction control in the video playing area when the trigger operation for the video generation control is detected.
(3) A generation unit 303;
a generating unit 303, configured to generate an interactive video based on the monitored interactive event, where the interactive video includes a video picture interacted between the user and the video playing page in a target time period.
In an embodiment, the generating unit 303 includes:
an obtaining subunit 3031, configured to, when the monitoring is finished, obtain address information of the target video and page attribute information of the video playing page;
a playing subunit 3032, configured to replay the target video in a virtual page of the pageless browser according to the address information and the page attribute information;
the simulation and interception sub-unit 3033 is configured to simulate the interactive event through the virtual page and intercept a video picture corresponding to the interactive event to obtain a plurality of interactive video pictures in the process of replaying the target video;
A generating subunit 3034 is configured to generate an interactive video based on the plurality of interactive video pictures.
In an embodiment, the playing subunit 3032 is further configured to set page attribute information of a virtual page in the headless browser according to the page attribute information, so as to obtain a set virtual page; and replay the target video in the set virtual page according to the address information.
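For illustration, the playing subunit 3032 could be sketched as follows with Puppeteer as the headless browser, mapping the recorded page attribute information onto the viewport of the virtual page before navigating to the address information of the target video; the attribute fields shown are assumptions:

```typescript
import puppeteer from 'puppeteer';

// Assumed shape of the page attribute information captured from the original
// video playing page; width/height map onto the headless browser viewport.
interface PageAttributes {
  width: number;
  height: number;
  deviceScaleFactor: number;
}

// Open a virtual page in the headless browser, apply the recorded page attributes,
// then replay the target video from its address information.
async function replayTargetVideo(videoPageUrl: string, attrs: PageAttributes) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Set page attribute information of the virtual page (the viewport in this sketch).
  await page.setViewport({
    width: attrs.width,
    height: attrs.height,
    deviceScaleFactor: attrs.deviceScaleFactor,
  });

  // Replay the target video in the configured virtual page using its address information.
  await page.goto(videoPageUrl, { waitUntil: 'networkidle2' });
  await page.evaluate(() => document.querySelector('video')?.play());

  return { browser, page };
}
```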
In an embodiment, the simulation and interception subunit 3033 is further configured to traverse the monitored interaction events based on the start video time and the end video time of the current interaction event during the process of replaying the target video; and simulate, for the traversed current interaction event, the current interaction event in the virtual page.
In an embodiment, the simulation and interception subunit 3033 is further configured to simulate, for the traversed current interaction event, the interaction operation of the current interaction event; and respond to the interaction operation through the interface of the headless browser so as to simulate the current interaction event.
In an embodiment, the simulation and interception subunit 3033 is further configured to obtain a plurality of video pictures of the target video intercepted by the headless browser; and determine, based on the interaction event, a video picture corresponding to the interaction event from the plurality of video pictures to obtain a plurality of interactive video pictures.
In an embodiment, the simulation and interception subunit 3033 is further configured to obtain, based on the interaction event, interaction region attribute information of the video picture corresponding to the interaction event; and extract, according to the interaction region attribute information, the interaction area of the video picture corresponding to each interaction event to obtain a plurality of interactive video pictures.
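A possible sketch of intercepting only the interaction area of a video picture, using Puppeteer's clipped screenshot as one way to realize the extraction; the region fields and the file naming are assumptions:

```typescript
import type { Page } from 'puppeteer';

// Assumed shape of the interaction region attribute information attached to an event.
interface InteractionRegion {
  x: number;      // left edge of the interaction area inside the page, in pixels
  y: number;      // top edge of the interaction area
  width: number;  // width of the interaction area
  height: number; // height of the interaction area
}

// Capture only the interaction area of the current video picture, producing
// one interactive video picture per interaction event.
async function captureInteractionArea(page: Page, region: InteractionRegion, index: number) {
  await page.screenshot({
    path: `interactive-frame-${index}.png`,
    clip: { x: region.x, y: region.y, width: region.width, height: region.height },
  });
}
```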
In an embodiment, the generating subunit 3034 is further configured to sort the plurality of interactive video pictures based on the start video time and the end video time of the interactive operation to obtain a sorted interactive video picture sequence; and generate the interactive video according to the sorted interactive video picture sequence.
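The generating subunit 3034 could, for example, sort the captured pictures by the video time of their interactive operations and then encode them into an interactive video. The sketch below uses ffmpeg's concat demuxer purely as an example encoder; the per-frame duration and the file layout are assumptions:

```typescript
import { execFileSync } from 'node:child_process';
import { writeFileSync } from 'node:fs';

interface InteractiveFrame {
  path: string;           // image file produced when the frame was captured
  startVideoTime: number; // start video time of the interactive operation it belongs to
  endVideoTime: number;
}

// Sort the captured interactive video pictures by the video time of their
// interactive operations, then assemble them into an interactive video.
function assembleInteractiveVideo(frames: InteractiveFrame[], outputPath: string): void {
  const ordered = [...frames].sort(
    (a, b) => a.startVideoTime - b.startVideoTime || a.endVideoTime - b.endVideoTime
  );

  // Write the ordered frame list for ffmpeg's concat demuxer (each frame shown for 0.5 s here).
  const listFile = 'frames.txt';
  const listBody = ordered.map((f) => `file '${f.path}'\nduration 0.5`).join('\n');
  writeFileSync(listFile, listBody);

  execFileSync('ffmpeg', [
    '-y',
    '-f', 'concat',
    '-safe', '0',
    '-i', listFile,
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',
    outputPath,
  ]);
}
```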
In an embodiment, the generating unit 303 includes:
An assembling subunit 3035, configured to assemble the interaction event when the monitoring is finished, to obtain an assembled interaction event;
A sending subunit 3036, configured to send the assembled interaction event to a server, where the server generates an interactive video according to the assembled interaction event and returns the interactive video after generating it;
A receiving subunit 3037, configured to receive the interactive video returned by the server.
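By way of example only, the assembling subunit 3035 and the sending subunit 3036 might be realized as follows; the endpoint path and the payload shape are assumptions for this sketch:

```typescript
// Assemble the monitored interaction events and hand them to a server that
// generates the interactive video; the server's response is the generated video.
interface InteractionRecord {
  controlId: string;
  startVideoTime: number;
  endVideoTime: number;
  x: number;
  y: number;
}

async function sendAssembledEvents(
  videoUrl: string,
  pageAttributes: { width: number; height: number },
  events: InteractionRecord[]
): Promise<Blob> {
  // Assemble the interaction events together with the information the server
  // needs to replay them (target video address and page attribute information).
  const assembled = { videoUrl, pageAttributes, events };

  const response = await fetch('/api/interactive-video', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(assembled),
  });
  if (!response.ok) {
    throw new Error(`Interactive video generation failed: ${response.status}`);
  }
  // The server returns the generated interactive video, received here as binary data.
  return response.blob();
}
```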
In an embodiment, as shown in fig. 11, the video generating apparatus further includes:
An interactive control display unit 304, configured to display the interactive control in the video playing area when it is detected that the current playing time of the target video reaches a preset playing time.
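A minimal sketch of the interactive control display unit 304, assuming a ten-second preset playing time and illustrative element identifiers:

```typescript
// Display the interactive control in the video playing area once the current
// playing time of the target video reaches a preset playing time.
const PRESET_PLAYING_TIME_S = 10; // assumed preset playing time

const targetVideo = document.querySelector('video') as HTMLVideoElement;
const interactiveControl = document.getElementById('interactive-control') as HTMLElement;

interactiveControl.style.display = 'none';

targetVideo.addEventListener('timeupdate', () => {
  if (targetVideo.currentTime >= PRESET_PLAYING_TIME_S) {
    interactiveControl.style.display = 'block';
  }
});
```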
In an embodiment, as shown in fig. 12, the video generating apparatus further includes:
A recording and display unit 305, configured to, when an interaction operation between the user and the interactive control is monitored, record and display, in the interaction time display area, the start video time and the end video time of the interaction event for the interactive control.
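A minimal sketch of the recording and display unit 305, assuming pointer events delimit the interactive operation and that the element identifiers are illustrative:

```typescript
// When an interactive operation on the interactive control is monitored, record its
// start and end video times and show them in the interaction time display area.
const video = document.querySelector('video') as HTMLVideoElement;
const control = document.getElementById('interactive-control') as HTMLElement;
const timeDisplay = document.getElementById('interaction-time-display') as HTMLElement;

let startVideoTime = 0;

control.addEventListener('pointerdown', () => {
  startVideoTime = video.currentTime;
});

control.addEventListener('pointerup', () => {
  const endVideoTime = video.currentTime;
  timeDisplay.textContent =
    `Interaction from ${startVideoTime.toFixed(2)} s to ${endVideoTime.toFixed(2)} s`;
});
```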
As can be seen from the above, in the video generation apparatus of the embodiment of the present application, the playing unit 301 plays the target video on the video playing page, where the video playing page includes the video generation control; when a trigger operation for the video generation control is detected, the monitoring unit 302 monitors an interaction event between the user and the video playing page; and the generating unit 303 generates an interactive video based on the monitored interaction event, where the interactive video includes video pictures interacted between the user and the video playing page in the target time period. In this scheme, the interaction event between the user and the video playing page is monitored in response to the trigger operation for the video generation control, and the corresponding interactive video is generated; the operation process is simple and clear, and the efficiency of generating the interactive video can be improved.
The following are detailed descriptions. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Accordingly, an embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server. Fig. 13 shows a schematic structural diagram of a computer device according to an embodiment of the present application. Specifically:
The computer device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device configuration illustrated in Fig. 13 does not constitute a limitation of the computer device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components. Wherein:
The processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and the like; the data storage area may store data created according to use of the computer device, and the like. Further, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The computer device further includes a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented via the power management system. The power supply 403 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
The computer device may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
playing a target video on a video playing page, wherein the video playing page comprises a video generation control; when the triggering operation aiming at the video generation control is detected, monitoring an interaction event between a user and the video playing page; and generating an interactive video based on the monitored interactive event, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period.
For specific implementations of the above operations, reference may be made to the foregoing embodiments, and details are not described herein again.
In an embodiment, as shown in fig. 8, the computer device may be a node in a distributed system, where the distributed system may be a blockchain system formed by connecting a plurality of nodes through network communication. The nodes may form a peer-to-peer (P2P) network, and any type of computer device, such as a server, a terminal, or another electronic device, may become a node in the blockchain system by joining the peer-to-peer network.
As can be seen from the above, in this embodiment the interaction event between the user and the video playing page is monitored in response to the trigger operation for the video generation control, and the interactive video corresponding to the interaction event is generated accordingly. The operation process is simple and clear, and the efficiency of generating the interactive video can be improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the video generation methods provided in the embodiments of the present application. For example, the instructions may perform the steps of:
Playing a target video on a video playing page, wherein the video playing page comprises a video generating control; monitoring an interaction event between a user and the video playing page when a trigger operation for the video generating control is detected; and generating an interactive video based on the monitored interactive event, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period.
For specific implementations of the above operations, reference may be made to the foregoing embodiments, and details are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in any video generation method provided in the embodiments of the present application, beneficial effects that can be achieved by any video generation method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described again here.
The video generation method, the video generation apparatus, the computer device, and the computer-readable storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and the core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A method of video generation, comprising:
playing a target video on a video playing page, wherein the video playing page comprises a video generation control;
monitoring an interaction event between a user and the video playing page when a trigger operation aiming at the video generating control is detected;
generating an interactive video based on the monitored interactive events, wherein the interactive video comprises video pictures interacted between the user and the video playing page in a target time period;
when the triggering operation for the video generation control is detected, monitoring an interaction event between a user and the video playing page, including:
when the triggering operation aiming at the video generation control is detected, displaying monitoring start indication information on the video playing page, wherein the monitoring start indication information is used for indicating the time distance between the current time and the monitoring start time;
and monitoring the interaction event of the user and the video playing page when the time distance reaches a preset time distance.
2. The method of claim 1, wherein the video playing page comprises a video playing area, and the video playing area comprises a video picture of the target video;
wherein when the triggering operation for the video generation control is detected, monitoring an interaction event between a user and the video playing page comprises:
when the triggering operation for the video generation control is detected, monitoring the interaction event between the user and an interactive control in the video playing area.
3. The method of claim 2, further comprising:
and when the current playing time of the target video is detected to reach the preset playing time, displaying the interactive control in the video playing area.
4. The method of claim 2, wherein the video playing page further comprises an interaction time display area;
the method further comprises:
when an interaction operation between the user and the interactive control is monitored, recording and displaying, in the interaction time display area, the starting video time and the ending video time of the interaction event for the interactive control.
5. The method of claim 4, wherein generating an interactive video based on the monitored interactive events comprises:
acquiring address information of the target video and page attribute information of the video playing page;
according to the address information and the page attribute information, replaying the target video in a virtual page of a headless browser;
in the process of playing the target video again, simulating the interaction event through the virtual page, and intercepting video pictures corresponding to the interaction event to obtain a plurality of interaction video pictures;
and generating an interactive video based on the plurality of interactive video pictures.
6. The method of claim 5, wherein replaying the target video in a virtual page of a headless browser according to the address information and the page attribute information comprises:
setting page attribute information of a virtual page in the headless browser according to the page attribute information to obtain a set virtual page;
and according to the address information, the target video is replayed in the set virtual page.
7. The method of claim 5, wherein simulating the interaction event through the virtual page during the replaying of the target video comprises:
traversing the monitored interactive events based on the starting video time and the ending video time of the current interactive event in the process of replaying the target video;
And simulating the current interaction event in the virtual page for the traversed current interaction event.
8. The method of claim 7, wherein simulating the current interactivity event in the virtual page for the traversed current interactivity event comprises:
for the traversed current interactive event, simulating the interactive operation of the current interactive event;
and responding to the interactive operation through the interface of the headless browser so as to simulate the current interactive event.
9. The method according to claim 5, wherein the intercepting video pictures corresponding to the interactive event to obtain a plurality of interactive video pictures comprises:
acquiring a plurality of video pictures of the target video intercepted by the headless browser;
and determining a video picture corresponding to the interactive event from the plurality of video pictures based on the interactive event to obtain a plurality of interactive video pictures.
10. The method of claim 9, wherein the determining a video picture corresponding to the interactive event from the plurality of video pictures based on the interactive event to obtain a plurality of interactive video pictures comprises:
Acquiring interactive region attribute information of a video picture corresponding to the interactive event based on the interactive event;
and extracting the interactive area of the video picture corresponding to each interactive event according to the interactive area attribute information to obtain a plurality of interactive video pictures.
11. The method of claim 5, wherein generating an interactive video based on the plurality of interactive video pictures comprises:
sorting the plurality of interactive video pictures based on the starting video time and the ending video time of the interactive operation to obtain a sorted interactive video picture sequence;
and generating an interactive video according to the sorted interactive video picture sequence.
12. The method of claim 1, wherein generating an interactive video based on the monitored interactive events comprises:
assembling the interaction event to obtain an assembled interaction event;
sending the assembled interactive event to a server, wherein the server generates an interactive video according to the assembled interactive event and returns the interactive video after generating the interactive video;
and receiving the interactive video returned by the server.
13. A video generation apparatus, comprising:
the playing unit is used for playing a target video on a video playing page, and the video playing page comprises a video generating control;
the monitoring unit is used for monitoring an interaction event between a user and the video playing page when the triggering operation aiming at the video generating control is detected;
a generating unit, configured to generate an interactive video based on the monitored interactive event, wherein the interactive video comprises video pictures interacted between the user and the video playing page within a target time period;
wherein the monitoring unit comprises:
the display subunit is configured to display, when a trigger operation for the video generation control is detected, monitoring start indication information on the video playing page, where the monitoring start indication information is used to indicate a time distance between current time and monitoring start time;
and the first monitoring subunit is used for monitoring the interaction event between the user and the video playing page when the time distance is detected to reach a preset time distance.
14. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the video generation method of any of claims 1 to 12.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor reads the computer program stored in the memory to perform the method of any one of claims 1 to 12.
CN201911401997.0A 2019-12-30 2019-12-30 Video generation method and device and computer readable storage medium Active CN113132808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401997.0A CN113132808B (en) 2019-12-30 2019-12-30 Video generation method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401997.0A CN113132808B (en) 2019-12-30 2019-12-30 Video generation method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113132808A CN113132808A (en) 2021-07-16
CN113132808B true CN113132808B (en) 2022-07-29

Family

ID=76768251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401997.0A Active CN113132808B (en) 2019-12-30 2019-12-30 Video generation method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113132808B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117014648A (en) * 2022-04-28 2023-11-07 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106488301A (en) * 2015-08-25 2017-03-08 北京新唐思创教育科技有限公司 A kind of record screen method and apparatus and video broadcasting method and device
CN108769814A (en) * 2018-06-01 2018-11-06 腾讯科技(深圳)有限公司 Video interaction method, device and readable medium
WO2019100757A1 (en) * 2017-11-23 2019-05-31 乐蜜有限公司 Video generation method and device, and electronic apparatus
CN110149528A (en) * 2019-05-21 2019-08-20 北京字节跳动网络技术有限公司 A kind of process method for recording, device, system, electronic equipment and storage medium
CN110221765A (en) * 2019-06-10 2019-09-10 惠州Tcl移动通信有限公司 A kind of video file broadcasting method, device, storage medium and terminal


Also Published As

Publication number Publication date
CN113132808A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
EP3600581B1 (en) Distributed sample-based game profiling with game metadata and metrics and gaming api platform supporting third-party content
CN110784752B (en) Video interaction method and device, computer equipment and storage medium
US11363353B2 (en) Video highlight determination method and apparatus, storage medium, and electronic device
CN105791291B (en) The method and apparatus of real-time update in the display control method of network application, display
WO2017140229A1 (en) Video recording method and apparatus for mobile terminal
CN107050850A (en) The recording and back method of virtual scene, device and playback system
CN110830735B (en) Video generation method and device, computer equipment and storage medium
US20160199742A1 (en) Automatic generation of a game replay video
CN111263170B (en) Video playing method, device and equipment and readable storage medium
CN110389697B (en) Data interaction method and device, storage medium and electronic device
CN109361954B (en) Video resource recording method and device, storage medium and electronic device
CN111314204A (en) Interaction method, device, terminal and storage medium
CN108600850A (en) Video sharing method, client, server and storage medium
CN112188223B (en) Live video playing method, device, equipment and medium
CN112619130A (en) Multi-scene playback method and device for game
CN103561106A (en) System and method for remote teaching and remote meeting
CN109821235B (en) Game video recording method, device and server
CN113824983A (en) Data matching method, device, equipment and computer readable storage medium
CN113521743A (en) Game synchronization method, device, terminal, server and storage medium
CN113132808B (en) Video generation method and device and computer readable storage medium
CN109032768A (en) Moving method, device, terminal, server and the storage medium of utility cession
CN113868575A (en) Webpage same-screen method and system
WO2018149170A1 (en) Cross-application control method and device
CN109086123A (en) Moving method, device, terminal, server and the storage medium of utility cession
CN109040848A (en) Barrage is put upside down method, apparatus, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40048669

Country of ref document: HK

GR01 Patent grant