CN112218130A - Control method and device for interactive video, storage medium and terminal - Google Patents

Control method and device for interactive video, storage medium and terminal

Info

Publication number
CN112218130A
CN112218130A (application CN202010918472.0A)
Authority
CN
China
Prior art keywords
interactive
target
video
user
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010918472.0A
Other languages
Chinese (zh)
Inventor
宋晓波
高柏青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dami Technology Co Ltd
Original Assignee
Beijing Dami Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dami Technology Co Ltd
Priority to CN202010918472.0A
Publication of CN112218130A
Legal status: Pending (current)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 - Server based end-user applications
    • H04N21/274 - Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 - Monomedia components thereof

Abstract

The embodiments of the present application disclose a control method and apparatus for an interactive video, a storage medium, and a terminal. During playback of a target video, a first interactive instruction executed by a user is received; in response to the first interactive instruction, a target identifier selected by the first interactive instruction is determined, where the target identifier is an identifier of target interactive content; a starting position of the target interactive content in the target video is determined; and the playing progress in the target video is positioned to the starting position. With this method, interactive content can be located quickly in an interactive video; for example, a user can jump automatically to the topics in a recorded lesson and then answer them, which improves both the flexibility of user operation and the user's efficiency.

Description

Control method and device for interactive video, storage medium and terminal
Technical Field
The present application relates to the field of online education, and in particular, to a method, an apparatus, a storage medium, and a terminal for controlling an interactive video.
Background
With the development of the Internet, online teaching has become popular with more and more people: it is not limited by time or place, allows flexible learning, and supports interaction between teachers and students. Interaction is common in live classes; to interact in a recorded class, interactive content must be added to the recorded class. For example, classroom interactive content such as in-class question answering can be added to a recorded lesson. However, when answering questions in a recorded lesson, the user can answer only when the playback progress of the recorded lesson reaches the answering interface, and when the user wants to answer a question early, the user can only manually drag the playback progress to the answering interface, which is inefficient.
Disclosure of Invention
The embodiments of the present application provide a control method and apparatus for an interactive video, a computer storage medium, and a terminal, and aim to solve the technical problem that, in online teaching, students watching a recorded lesson can answer in-class questions only in the order of the video playback progress, which is inefficient. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a method for controlling an interactive video, where the method includes:
receiving a first interactive instruction executed by a user in the playing process of a target video;
responding to the first interactive instruction, and determining a target identifier selected by the first interactive instruction; wherein the target identification is an identification of target interactive content;
determining a starting position of the target interactive content in a target video;
and positioning the playing progress in the target video to the starting position.
Optionally, before receiving a first interactive instruction executed by a user during the playing process of the target video, the method includes:
receiving a second interactive instruction executed by the user in the playing process of the target video;
and responding to the second interactive instruction, displaying an identification list on a playing window of the target video, wherein the identification list comprises at least one identification of interactive content, the interactive content is located in the target video, and the target identification belongs to the identification list.
Optionally, before receiving the second interactive instruction executed by the user in the playing process of the target video, the method further includes:
determining an original video and at least one interactive content;
and synthesizing the at least one interactive content into the original video based on the course structure corresponding to the original video to obtain the target video.
Optionally, the synthesizing the at least one interactive content into the original video based on the course structure corresponding to the original video includes:
determining a corresponding insertion time point of the at least one interactive content in the original video based on the course structure;
and respectively synthesizing the at least one interactive content and the original video according to the insertion time point.
Optionally, after the positioning the playing progress in the target video to the starting position, the method further includes:
closing the identification list;
displaying the target interactive content;
receiving at least one third interactive instruction executed by the user;
and responding to the third interactive instruction, and displaying an interactive result in a playing window of the target video.
Optionally, the method further comprises:
receiving a fourth interactive instruction executed by the user in the playing process of the target video;
and closing the identification list in response to the fourth interactive instruction.
Optionally, the interactive content is an interactive topic.
In a second aspect, an embodiment of the present application provides an apparatus for controlling an interactive video, where the apparatus includes:
the instruction receiving module is used for receiving a first interactive instruction executed by a user in the playing process of the target video;
the identification determining module is used for responding to the first interactive instruction and determining a target identification selected by the first interactive instruction; wherein the target identification is an identification of target interactive content;
the position determining module is used for determining the starting position of the target interactive content in the target video;
and the position positioning module is used for positioning the playing progress in the target video to the starting position.
In a third aspect, embodiments of the present application provide a computer storage medium having a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a memory and a processor; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
when the scheme of the embodiment of the application is executed, a first interactive instruction executed by a user is received in the playing process of a target video, a target identifier selected by the first interactive instruction is determined in response to the first interactive instruction, wherein the target identifier is an identifier of target interactive content, the initial position of the target interactive content in the target video is determined, and the playing progress in the target video is positioned to the initial position. According to the method, the interactive content can be quickly positioned in the interactive video, for example, a user can automatically position questions and then answer the questions in a recorded broadcast class, so that the flexibility of user operation is improved, and the efficiency of the user is also improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 3 is an architectural diagram of the android operating system of FIG. 1;
FIG. 4 is an architecture diagram of the IOS operating system of FIG. 1;
fig. 5 is a flowchart illustrating a control method for interactive video according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a control method for interactive video according to an embodiment of the present application;
fig. 7 is a display schematic diagram of a playing interface of a target video provided by an embodiment of the present application;
fig. 8 is a display schematic diagram of a playing interface of a target video provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a control device for interactive video according to an embodiment of the present application.
Detailed Description
In order to make the objects, features, and advantages of the embodiments of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Those of ordinary skill in the art can understand the specific meaning of the above terms in the present application according to the specific context.
Referring to fig. 1, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA) form. The processor 110 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described below, and the like; the operating system may be an Android system (including systems developed in depth on the basis of the Android system), an iOS system developed by Apple Inc. (including systems developed in depth on the basis of the iOS system), or another system. The data storage area may also store data created by the terminal in use, such as a phone book, audio and video data, and chat log data.
Referring to fig. 2, the memory 120 may be divided into an operating system space, in which the operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a good running effect, the operating system allocates corresponding system resources to the different third-party application programs. However, different application scenarios in the same third-party application program have different requirements on system resources; for example, in a local resource loading scenario, the third-party application program has a higher requirement on disk reading speed, while in an animation rendering scenario it has a higher requirement on GPU performance. The operating system and the third-party application program are independent of each other, and the operating system often cannot sense the current application scenario of the third-party application program in time, so the operating system cannot perform targeted system resource adaptation according to the specific application scenario of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example of the operating system, the programs and data stored in the memory 120 are shown in fig. 3: the memory 120 may store a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides the underlying drivers for the various hardware of the terminal, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through C/C++ libraries; for example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser kernel support. The system runtime library layer 340 also provides the Android runtime, which mainly supplies core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides the various APIs that may be used to build applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management, and developers can build their own applications using these APIs. At least one application program runs in the application layer 380; these may be native applications carried by the operating system, such as a contacts program, a short message program, a clock program, or a camera application, or third-party applications developed by third-party developers, such as games, instant messaging programs, photo beautification programs, or shopping programs.
Taking the iOS system as an example of the operating system, the programs and data stored in the memory 120 are shown in fig. 4. The iOS system includes: a core operating system layer 420 (Core OS layer), a core services layer 440 (Core Services layer), a media layer 460 (Media layer), and a touchable layer 480 (Cocoa Touch layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks, which provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audio-visual interfaces for applications, such as graphics and image interfaces, audio technology interfaces, video technology interfaces, and the audio-video transmission technology wireless playback (AirPlay) interface. The touchable layer 480 provides various common interface-related frameworks for application development, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging user interface (UI) framework, a UIKit framework, and a map framework, and is responsible for the user's touch interaction operations on the terminal.
Among the frameworks shown in fig. 4, those relevant to most applications include, but are not limited to: the foundation framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The foundation framework provides many basic object classes and data types and provides the most basic system services for all applications, independent of the UI. The classes provided by the UIKit framework form a basic library of UI classes for creating touch-based user interfaces; iOS applications can build their UI on the UIKit framework, which therefore provides the application's infrastructure for building user interfaces, drawing, handling user interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application program and the operating system in the iOS system, reference may be made to the Android system; details are not repeated here.
The input device 130 is used for receiving input instructions or data and includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data and includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined into a touch display screen, which receives the user's touch operations performed on or near it with a finger, a stylus, or any other suitable object, and displays the user interfaces of the various applications. The touch display screen is generally provided on the front panel of the terminal. The touch display screen may be designed as a full screen, a curved screen, or a shaped screen, or as a combination of a full screen and a curved screen or of a shaped screen and a curved screen, which is not limited in the embodiments of the present application.
In addition, those skilled in the art will appreciate that the terminal structures illustrated in the above figures do not constitute limitations on the terminal: the terminal may include more or fewer components than illustrated, some components may be combined, or a different arrangement of components may be used. For example, the terminal may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (Wi-Fi) module, a power supply, a Bluetooth module, and other components, which are not described here again.
In the embodiments of the present application, the execution body of each step may be the terminal described above. Optionally, the execution body of each step is the operating system of the terminal. The operating system may be an Android system, an iOS system, or another operating system, which is not limited in the embodiments of the present application.
The terminal of the embodiments of the present application may also be provided with a display device, which may be any device capable of realizing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. The user can view displayed information such as text, images, and video on the display device of the terminal. The terminal may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
In the terminal shown in fig. 1, the processor 110 may be configured to call an application program stored in the memory 120, and specifically execute the interactive video control method according to the embodiment of the present application.
When the solution of the embodiments of the present application is executed, a first interactive instruction executed by a user is received during playback of a target video; in response to the first interactive instruction, a target identifier selected by the first interactive instruction is determined, where the target identifier is an identifier of target interactive content; a starting position of the target interactive content in the target video is determined; and the playing progress in the target video is positioned to the starting position. With this method, interactive content can be located quickly in an interactive video; for example, a user can jump automatically to the topics in a recorded lesson and then answer them, which improves both the flexibility of user operation and the user's efficiency.
In the following method embodiments, for convenience of description, only the main execution body of each step is described as a terminal.
Please refer to fig. 5, which is a flowchart illustrating a control method of an interactive video according to an embodiment of the present disclosure. As shown in fig. 5, the method of the embodiment of the present application may include the steps of:
s501, in the process of playing the target video, receiving a first interactive instruction executed by a user.
The target video is a teaching video played at the user terminal; it may be stored in the terminal's local media file library or on a streaming media server on the Internet. The first interactive instruction is an interactive instruction generated by a touch operation performed on the terminal's touch screen while the user watches the target video; the touch operation may be a touch control operation or a mouse click control operation. The touch control operation may be an operation in which the user clicks a virtual button on the touch screen with a finger or a stylus; the mouse click control operation may be an operation in which the user clicks a virtual button on the display screen with a mouse button.
Generally, a user opens an application program to watch the target video, where the target video may be a cached video previously downloaded by the user to the terminal's local file library, or a video played online over a network connection. During playback of the target video, an identification list is displayed on the display interface of the target video; the identification list carries a plurality of different identification buttons, the user can click one of them, and the terminal recognizes the click operation and receives the first interactive instruction. For example, the identification list may be a numeric identification list of topics or an alphabetical identification list of topics.
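Purely as an illustration (not part of the patented disclosure), the following Kotlin sketch models the identification list and the dispatch of a first interactive instruction when the user taps an identification button; all names here (InteractiveMarker, IdentifierList, and so on) are hypothetical:

```kotlin
// Hypothetical model of one piece of interactive content in the target video.
data class InteractiveMarker(
    val id: String,            // target identifier shown on a list button, e.g. "topic 1"
    val startPositionMs: Long  // time at which the interactive content starts in the video
)

// Hypothetical identification list shown on the play window: one button per marker.
// A tap on a button plays the role of the "first interactive instruction".
class IdentifierList(
    private val markers: List<InteractiveMarker>,
    private val onSelected: (InteractiveMarker) -> Unit
) {
    fun onIdentifierClicked(targetId: String) {
        markers.firstOrNull { it.id == targetId }?.let(onSelected)
    }
}
```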
S502, responding to the first interactive instruction, and determining a target identifier selected by the first interactive instruction.
The target identifier is an identifier of target interactive content, and the target interactive content may be display content that is interactively operated with a user.
Generally, after receiving the first interactive instruction executed by the user, the terminal can parse the information carried in the first interactive instruction, such as the position information and pattern information of the identification button, and determine the target identification selected by the user based on the first interactive instruction. For example, when the identification list is a numeric identification list, the user selects one of the numeric identifications; the terminal recognizes the position and the numeric pattern of the selected identification and determines whether the number selected by the user is 1, 2, or some other number. When the identification list is an alphabetical identification list, the user selects one of the letter identifications; the terminal recognizes the position and the letter pattern of the selected identification and determines whether the letter selected by the user is A, B, or some other letter.
S503, determining the starting position of the target interactive content in the target video.
Wherein, the starting position refers to a time position at which the target interactive content starts to be displayed in the target video.
Generally, target interactive content corresponds one-to-one to a target identifier, and the target interactive content can be synthesized into the target video in advance according to the video content of the target video. The target identifier can be determined from the position information and pattern information of the identification button carried in the first interactive instruction; from it, the target interactive content associated with the target identifier can be determined, and then the position information of the target interactive content in the target video can be determined. It can be understood that the position information here is time information, that is, the target interactive content starts to be displayed at a certain time of the target video.
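Continuing the hypothetical sketch above, the one-to-one association between target identifiers and interactive content can be kept in a map, so that resolving a target identifier to a starting position is a single lookup:

```kotlin
// Hypothetical index from target identifier to marker, built once per target video.
class MarkerIndex(markers: List<InteractiveMarker>) {
    private val byId: Map<String, InteractiveMarker> = markers.associateBy { it.id }

    // Resolve a target identifier to the starting position (in milliseconds) of its
    // interactive content, or null if the identifier is unknown.
    fun startPositionOf(targetId: String): Long? = byId[targetId]?.startPositionMs
}
```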
S504, positioning the playing progress in the target video to the initial position.
Generally, the starting position of the target interactive content in the target video is a certain time, and the playing progress of the target video can be positioned to that time. For example, if the duration of the target video is 30 minutes and the position of the target interactive content in the target video is determined to be 19 minutes and 20 seconds, then regardless of the current playback position of the target video, the playing progress can be positioned to 19 minutes and 20 seconds, and the target interactive content is then displayed.
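Positioning the playing progress then reduces to a seek on the player. A minimal sketch under the same assumptions follows; the VideoPlayer interface is hypothetical, and in a real Android implementation its role would be played by a platform player such as MediaPlayer, whose seekTo() takes a position in milliseconds:

```kotlin
// Hypothetical player abstraction; a real implementation would delegate to the
// platform player (e.g. MediaPlayer on Android, AVPlayer on iOS).
interface VideoPlayer {
    fun seekTo(positionMs: Long)
}

// S501 to S504 end to end: resolve the identifier, then jump the playback progress.
fun handleFirstInteractiveInstruction(
    targetId: String,
    index: MarkerIndex,
    player: VideoPlayer
) {
    val startMs = index.startPositionOf(targetId) ?: return // unknown identifier: ignore
    player.seekTo(startMs) // e.g. 19 min 20 s of a 30 min video -> 1_160_000 ms
}
```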
When the solution of the embodiments of the present application is executed, a first interactive instruction executed by a user is received during playback of a target video; in response to the first interactive instruction, a target identifier selected by the first interactive instruction is determined, where the target identifier is an identifier of target interactive content; a starting position of the target interactive content in the target video is determined; and the playing progress in the target video is positioned to the starting position. With this method, interactive content can be located quickly in an interactive video; for example, a user can jump automatically to the topics in a recorded lesson and then answer them, which improves both the flexibility of user operation and the user's efficiency.
Please refer to fig. 6, which is a flowchart illustrating a control method of an interactive video according to an embodiment of the present disclosure. As shown in fig. 6, the method of the embodiment of the present application may include the steps of:
s601, determining an original video and at least one interactive content.
The original video is the target video before any interactive content is added; it is a teaching video in which a teacher teaches one or more students in online teaching, and it may be a recorded lesson video. The interactive content refers to video content that can be displayed on the terminal and with which the user can interact.
Generally, an original video and interactive content are needed to generate the target video, so the original video and the interactive content required for generating the target video can be determined first, and one or more pieces of interactive content can be added to one original video. It should be noted that the present application does not limit the number of pieces of interactive content; the number can be determined according to the video content of the specific original video.
For example, the interactive content may be topic interactive content: one or more topic segments can be added to the original video, topics can be added as appropriate to the teaching content of the original video, and the topics may be multiple-choice topics, true/false topics, or matching topics.
S602, determining a corresponding insertion time point of at least one interactive content in the original video based on the course structure.
The insertion time point refers to the time point at which interactive content is inserted into the target video. Each piece of interactive content corresponds to one insertion time point, and when a plurality of pieces of interactive content are inserted into one original video, a plurality of corresponding insertion time points are formed.
In the present application, the original video is the original video of a recorded class, and the insertion time points at which interactive content can be inserted into the original video can be determined according to the course structure corresponding to the recorded class. For example, in a recorded class with a duration of 30 minutes, four pieces of interactive content are to be inserted at the four time points 00:05:00, 00:11:40, 00:16:10, and 00:25:40, respectively.
S603, at least one interactive content and the original video are respectively synthesized according to the insertion time point to obtain a target video.
In general, based on the insertion time points determined in S602, the interactive content corresponding to each insertion time point is inserted, and the target video is obtained.
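As a hypothetical illustration of S602 and S603, the synthesis can be sketched as pairing each interactive content with its insertion time point derived from the course structure; the types below are assumptions and stand in for the actual video compositing step:

```kotlin
// Hypothetical descriptors; a real synthesis step would edit or re-encode the video.
data class InteractiveContent(val id: String, val payload: String)
data class TargetVideo(val originalUri: String, val markers: List<InteractiveMarker>)

// Pair each interactive content with its insertion time point (taken from the
// course structure) and emit the marker list used by the identifier index above.
fun synthesize(
    originalUri: String,
    contents: List<InteractiveContent>,
    insertionPointsMs: List<Long> // one point per content, e.g. 00:05:00 -> 300_000
): TargetVideo {
    require(contents.size == insertionPointsMs.size) { "one insertion point per content" }
    val markers = contents.zip(insertionPointsMs) { c, t -> InteractiveMarker(c.id, t) }
    return TargetVideo(originalUri, markers.sortedBy { it.startPositionMs })
}
```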
S604, in the process of playing the target video, receiving a second interactive instruction executed by the user.
The second interactive instruction is used to instruct the terminal to display the identification list on the playing interface of the target video, and is generated by a touch operation performed by the user on the terminal's touch screen; the second interactive instruction carries information such as the pattern information and position information of the clicked button. The touch operation may be a touch control operation or a mouse click control operation. The touch control operation may be an operation in which the user clicks a virtual button on the touch screen with a finger or a stylus; the mouse click control operation may be an operation in which the user clicks a virtual button on the display screen with a mouse button.
Generally, during playback of the target video, the user clicks a second touch button on the playing interface to generate the second interactive instruction; the second touch button corresponds to the second interactive instruction, the second interactive instruction is generated only when the user clicks the second touch button, and the terminal receives the second interactive instruction after recognizing the touch operation performed by the user.
And S605, responding to the second interactive instruction, and displaying the identification list in the playing window of the target video.
Generally, the terminal receives the second interactive instruction, parses it to obtain the position and pattern of the clicked button carried in it, determines the button clicked by the second interactive instruction, and, in response to the second interactive instruction, displays the identification list on the playing window, where the identification list includes an identification of at least one piece of interactive content, and the target identification belongs to the identification list.
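Under the same hypothetical model, the second interactive instruction only needs to make the identification list visible on the play window (and, as described in S609 below, a fourth interactive instruction hides it again):

```kotlin
// Hypothetical play-window state: the identification list is shown in response to a
// second interactive instruction and hidden again in response to a fourth one.
class PlayWindow(val identifierList: IdentifierList) {
    var listVisible: Boolean = false
        private set

    fun onSecondInteractiveInstruction() { listVisible = true }
    fun onFourthInteractiveInstruction() { listVisible = false }
}
```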
S606, in the process of playing the target video, receiving a first interactive instruction executed by a user.
The first interactive instruction is used to instruct the terminal to determine the target identifier. The user clicks a first touch button on the playing interface to generate the first interactive instruction; the first interactive instruction is generated only when the user clicks the first touch button, and the terminal receives the first interactive instruction after recognizing the touch operation performed by the user.
S607, responding to the first interactive instruction, and determining the target identifier selected by the first interactive instruction.
Wherein the target identification is the identification of the target interactive content.
For example, fig. 7 is a display diagram of the playing interface of a target video, where 710 is the playing interface of the target video and 720 is the identification list on the playing interface. The user clicks topic 1 in the list, a first interactive instruction is generated based on the click operation, and the terminal parses the first interactive instruction and determines that the button clicked by the user is the button for topic 1 in the identification list.
S608, determining the starting position of the target interactive content in the target video.
Generally, the target interactive content is associated with its target identifier and with its insertion time point. Based on S607, the target identifier can be determined; from it, the target interactive content corresponding to the target identifier can be determined, the insertion time point of the target interactive content in the target video can be determined, and thus the starting position of the target interactive content in the target video can be determined.
And S609, positioning the playing progress in the target video to the initial position, and closing the identification list.
Generally, after determining a starting time point of the target interactive content in the target video, the playing progress of the target video is located to the starting time point, and further, the terminal closes the identifier list.
In addition to the way of closing the identification list mentioned in this step, optionally, in a possible embodiment, there is another way of closing the identification list: a fourth interactive instruction executed by the user is received during playback of the target video, and the identification list is closed in response to the fourth interactive instruction. The fourth interactive instruction is used to instruct the terminal to close the identification list and carries the pattern information and position information of the clicked button. The button corresponding to the fourth interactive instruction is a fourth touch button, and the fourth interactive instruction is generated only when the user clicks the fourth touch button. After receiving the fourth interactive instruction, the terminal parses it, determines from the pattern information and position information of the clicked button that the clicked button is the close button of the identification list, and then closes the identification list in response to the fourth interactive instruction.
S610, displaying the target interactive content.
The target interactive content is an interactive topic, and the interactive topic may include multiple-choice topics, true/false topics, and the like.
For example, as shown in the display diagram of the playing interface of the target video in fig. 8, after the playing progress of the target video is positioned to the starting position, the interactive topic shown in fig. 8 can be displayed; it corresponds to the target identifier in S607, that is, the interactive content of topic 1.
S611, receiving at least one third interactive instruction executed by the user.
The third interactive instruction is used to instruct the terminal to display the interactive result in the playing window of the target video. The user clicks a third touch button on the playing interface to generate the third interactive instruction; the third interactive instruction is generated only when the user clicks the third touch button, and the terminal receives the third interactive instruction after recognizing the touch operation performed by the user.
Generally, after the user finishes interacting with the target interactive content on the terminal, the user clicks a third touch button on the display interface of the target video, a third interactive instruction is generated based on the user's click operation, and the terminal then receives the third interactive instruction. It should be noted that the third touch button, the first touch button of the first interactive instruction, the second touch button of the second interactive instruction, and the fourth touch button of the fourth interactive instruction are different touch buttons on the playing interface.
And S612, responding to the third interactive instruction, and displaying an interactive result in a playing window of the target video.
Generally, after receiving the third interactive instruction, the terminal parses it, determines the touch button in the third interactive instruction, and then, in response to the third interactive instruction, displays the interactive result in the playing window of the target video, where the interactive result is the result obtained from the interactive operation between the user and the target interactive content.
For example, the interactive content may be an interactive topic. When the interactive topic is a multiple-choice or true/false topic, the interactive result may be an interactive picture that displays a textual analysis of each option of the topic; the interactive result may also be an interactive video, such as a video in which a teacher explains each option of the topic.
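As a final hypothetical sketch of S611 and S612, the interactive result shown in the play window could be modeled as either a per-option text analysis or an explanation video, prepared when the target video was synthesized; all names below are assumptions:

```kotlin
// Hypothetical interactive result displayed in the play window of the target video.
sealed interface InteractionResult {
    data class TextAnalysis(val perOption: Map<String, String>) : InteractionResult
    data class ExplanationVideo(val uri: String) : InteractionResult
}

// Handle a third interactive instruction: look up the prepared result for the topic.
fun onThirdInteractiveInstruction(
    topicId: String,
    results: Map<String, InteractionResult>
): InteractionResult? = results[topicId]
```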
When the solution of the embodiments of the present application is executed, a first interactive instruction executed by a user is received during playback of a target video; in response to the first interactive instruction, a target identifier selected by the first interactive instruction is determined, where the target identifier is an identifier of target interactive content; a starting position of the target interactive content in the target video is determined; and the playing progress in the target video is positioned to the starting position. With this method, interactive content can be located quickly in an interactive video; for example, a user can jump automatically to the topics in a recorded lesson and then answer them, which improves both the flexibility of user operation and the user's efficiency.
Fig. 9 is a schematic structural diagram of a control device for interactive video according to an embodiment of the present disclosure. The control device of the interactive video can be realized by software, hardware or a combination of the software and the hardware to form all or part of the terminal. The apparatus 900 includes:
the instruction receiving module 910 is configured to receive a first interactive instruction executed by a user during a playing process of a target video;
an identifier determining module 920, configured to determine, in response to the first interactive instruction, a target identifier selected by the first interactive instruction; wherein the target identifier is an identifier of target interactive content;
a position determining module 930, configured to determine a starting position of the target interactive content in the target video;
and a position positioning module, configured to position the playing progress in the target video to the starting position.
Optionally, the apparatus 900 further comprises:
a video determination module for determining an original video and at least one interactive content;
and the video synthesis module is used for synthesizing the at least one interactive content into the original video based on the course structure corresponding to the original video to obtain the target video.
Optionally, the apparatus 900 further comprises:
the second instruction receiving module is used for receiving a second interactive instruction executed by the user in the playing process of the target video;
and the second instruction response module is used for responding to the second interactive instruction and displaying an identification list on a playing window of the target video, wherein the identification list comprises an identification of at least one interactive content, the interactive content is located in the target video, and the target identification belongs to the identification list.
Optionally, the video composition module comprises:
a composition position determining unit, configured to determine, based on the course structure, a corresponding insertion time point of the at least one interactive content in the original video;
and the synthesis processing unit is used for respectively carrying out synthesis processing on the at least one interactive content and the original video according to the insertion time point.
Optionally, the apparatus 900 further comprises:
an identification list closing unit, configured to close the identification list;
a target interactive content display unit, configured to display the target interactive content;
a third instruction receiving unit, configured to receive a third interactive instruction executed by the user;
and the third instruction response unit is used for responding to the third interactive instruction and displaying an interactive result in a playing window of the target video.
Optionally, the apparatus 900 further comprises:
a fourth instruction receiving unit, configured to receive a fourth interactive instruction executed by the user in the playing process of the target video;
a fourth instruction response unit, configured to close the identifier list in response to the fourth interactive instruction.
When the solution of the embodiments of the present application is executed, a first interactive instruction executed by a user is received during playback of a target video; in response to the first interactive instruction, a target identifier selected by the first interactive instruction is determined, where the target identifier is an identifier of target interactive content; a starting position of the target interactive content in the target video is determined; and the playing progress in the target video is positioned to the starting position. With this method, interactive content can be located quickly in an interactive video; for example, a user can jump automatically to the topics in a recorded lesson and then answer them, which improves both the flexibility of user operation and the user's efficiency.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the above method steps, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 5 and fig. 6, which are not described herein again.
The application also provides a terminal, which comprises a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is only intended to illustrate the preferred embodiments of the present application and is not to be construed as limiting its scope; the present application is not limited thereto, and all equivalent variations and modifications fall within its scope.

Claims (10)

1. A method for controlling interactive video, the method comprising:
receiving a first interactive instruction executed by a user in the playing process of a target video;
responding to the first interactive instruction, and determining a target identifier selected by the first interactive instruction; wherein the target identification is an identification of target interactive content;
determining a starting position of the target interactive content in a target video;
and positioning the playing progress in the target video to the starting position.
2. The method according to claim 1, wherein before receiving the first interactive instruction executed by the user during the playing of the target video, the method comprises:
receiving a second interactive instruction executed by the user in the playing process of the target video;
and responding to the second interactive instruction, displaying an identification list on a playing window of the target video, wherein the identification list comprises at least one identification of interactive content, the interactive content is located in the target video, and the target identification belongs to the identification list.
3. The method according to claim 2, wherein before receiving the second interactive instruction executed by the user during the playing of the target video, the method further comprises:
determining an original video and at least one interactive content;
and synthesizing the at least one interactive content into the original video based on the course structure corresponding to the original video to obtain the target video.
4. The method as claimed in claim 3, wherein the synthesizing of the at least one interactive content into the original video based on the course structure corresponding to the original video comprises:
determining a corresponding insertion time point of the at least one interactive content in the original video based on the course structure;
and respectively synthesizing the at least one interactive content and the original video according to the insertion time point.
5. The method of claim 2, wherein after locating the playback progress in the target video to the start position, further comprising:
closing the identification list;
displaying the target interactive content;
receiving at least one third interactive instruction executed by the user;
and responding to the third interactive instruction, and displaying an interactive result in a playing window of the target video.
6. The method of claim 1, further comprising:
receiving a fourth interactive instruction executed by the user in the playing process of the target video;
and closing the identification list in response to the fourth interactive instruction.
7. The method of claim 1, wherein the interactive content is an interactive topic.
8. An interactive video control apparatus, comprising:
the instruction receiving module is used for receiving a first interactive instruction executed by a user in the playing process of the target video;
the identification determining module is used for responding to the first interactive instruction and determining a target identification selected by the first interactive instruction; wherein the target identification is an identification of target interactive content;
the position determining module is used for determining the starting position of the target interactive content in the target video;
and the position positioning module is used for positioning the playing progress in the target video to the starting position.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN202010918472.0A 2020-09-03 2020-09-03 Control method and device for interactive video, storage medium and terminal Pending CN112218130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010918472.0A CN112218130A (en) 2020-09-03 2020-09-03 Control method and device for interactive video, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010918472.0A CN112218130A (en) 2020-09-03 2020-09-03 Control method and device for interactive video, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN112218130A (en) 2021-01-12

Family

ID=74049982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010918472.0A Pending CN112218130A (en) 2020-09-03 2020-09-03 Control method and device for interactive video, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112218130A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10194189B1 (en) * 2013-09-23 2019-01-29 Amazon Technologies, Inc. Playback of content using multiple devices
CN106095255A (en) * 2016-06-20 2016-11-09 武汉斗鱼网络科技有限公司 A kind of drop-down menu display control method and device
CN108024139A (en) * 2017-12-08 2018-05-11 广州视源电子科技股份有限公司 Playback method, device, terminal device and the storage medium of Internet video courseware
CN108495194A (en) * 2018-03-21 2018-09-04 优酷网络技术(北京)有限公司 Video broadcasting method, computer storage media during answer and terminal device
CN109167950A (en) * 2018-10-25 2019-01-08 腾讯科技(深圳)有限公司 Video recording method, video broadcasting method, device, equipment and storage medium
CN110798746A (en) * 2019-09-10 2020-02-14 上海道浮于海科技有限公司 Short video answering system and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887790A (en) * 2021-01-22 2021-06-01 深圳市优乐学科技有限公司 Method for fast interacting and playing video
CN112887791A (en) * 2021-01-22 2021-06-01 深圳市优乐学科技有限公司 Method for controlling video fluency
CN112837709A (en) * 2021-02-24 2021-05-25 北京达佳互联信息技术有限公司 Audio file splicing method and device
US11756586B2 (en) 2021-02-24 2023-09-12 Beijing Dajia Internet Information Technology Co., Ltd. Method for splicing audio file and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210112)