CN109391834B - Playing processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN109391834B
Authority
CN
China
Prior art keywords
video data
information
playing
video
display
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201710656522.0A
Other languages
Chinese (zh)
Other versions
CN109391834A (en)
Inventor
陶佳杰
张瑞平
刘思敏
汪良平
Current Assignee (the listed assignees may be inaccurate)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority: CN201710656522.0A
Published as CN109391834A
Application granted and published as CN109391834B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/454: Content or additional data filtering, e.g. blocking advertisements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a playing processing method, apparatus, device, and storage medium, so that operations can be performed conveniently during normal playback. The method includes: during playback of video data, acquiring prompt information according to a trigger point corresponding to the video data; displaying the prompt information; and receiving a response operation corresponding to the prompt information, and changing the display of set content after the response operation satisfies a set condition. The terminal device can thus perform operations without affecting the normal playback of the video data.

Description

Playing processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular to a playing processing method and apparatus, a terminal device, a server, a storage medium, and an operating system.
Background
With the development of terminal technology, televisions have become increasingly intelligent, and users can perform a variety of operations on a smart television, such as watching videos, playing games, and browsing web pages.
A user can watch video full-screen on a smart television. However, if prompt information appears while the video is playing, the user must either quit playback to view the information or use another device to obtain it, for example by scanning a code with a mobile phone. Such operations disrupt normal playback: the user either exits the video to view the information, or, having missed content while checking it on another device such as a phone, has to rewind and replay. In short, existing television interaction modes interfere with the normal playback of video.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a playing processing method that allows operations to be performed during normal playback.
Correspondingly, embodiments of the present application also provide a playing processing apparatus, a television device, a server, and an operating system, to ensure the implementation and application of the method.
To solve the above problem, an embodiment of the present application discloses a play processing method, including: during playback of video data, acquiring prompt information according to a trigger point corresponding to the video data; displaying the prompt information; and receiving a response operation corresponding to the prompt information, and changing the display of set content after the response operation satisfies a set condition.
An embodiment of the present application discloses a play processing method, including: setting a trigger point for video data in advance; sending prompt information corresponding to the trigger point during playback of the video data; determining whether a response operation corresponding to the prompt information satisfies a set condition; and changing the display of set content after the response operation satisfies the set condition.
An embodiment of the present application discloses a play processing apparatus, including: an interaction prompting module, configured to acquire prompt information according to a trigger point corresponding to video data during playback of the video data; a playing module, configured to display the prompt information; and an operation module, configured to receive a response operation corresponding to the prompt information and to change the display of set content after the response operation satisfies a set condition.
An embodiment of the present application discloses a play processing apparatus, including: a setting module, configured to set a trigger point for video data in advance; a prompt processing module, configured to send prompt information corresponding to the trigger point during playback of the video data; and a result processing module, configured to determine whether a response operation corresponding to the prompt information satisfies a set condition, and to change the display of set content after the response operation satisfies the set condition.
An embodiment of the present application discloses a terminal device, including: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform a method as described in one or more of the embodiments of the present application.
Embodiments of the present application disclose one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a terminal device to perform a method as described in one or more of the embodiments of the present application.
An embodiment of the present application discloses a server, including: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the server to perform a method as described in one or more of the embodiments of the present application.
Embodiments of the present application disclose one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a server to perform a method as described in one or more of the embodiments of the present application.
An embodiment of the present application discloses an operating system for a television terminal, including: a display unit, configured to play video data, display prompt information, and change the display of set content after a response operation satisfies a set condition; and a communication unit, configured to acquire the prompt information according to a trigger point corresponding to the video data during playback, and to receive the response operation corresponding to the prompt information.
Compared with the prior art, the embodiments of the present application have the following advantages:
In the embodiments of the present application, after the trigger point of the video data is reached, the server may provide prompt information corresponding to the trigger point to indicate an executable operation, so that the prompt information is displayed on the terminal device. The terminal device may then receive a response operation corresponding to the prompt information, and the display of set content is changed after the response operation satisfies a set condition. The terminal device can therefore perform the operation without affecting the normal playback of the video data; that is, the operation does not interrupt the video.
Drawings
FIG. 1A is a schematic diagram of an example of an interface display according to an embodiment of the present application;
FIG. 1B is a schematic diagram of another example of an interface display according to an embodiment of the present application;
FIG. 2 is a flowchart of the server-side steps of an embodiment of a playing processing method of the present application;
FIG. 3 is a flowchart of the terminal-side steps of an embodiment of a playing processing method of the present application;
FIG. 4 is a flowchart of the terminal-side steps of another embodiment of a playing processing method of the present application;
FIG. 5 is a schematic diagram of yet another example of an interface display according to an embodiment of the present application;
FIG. 6 is a flowchart of the steps of an embodiment of a method for setting trigger points according to the present application;
FIG. 7 is a flowchart of the steps of another embodiment of a server-side playing processing method of the present application;
FIG. 8 is a block diagram of an embodiment of a playback processing apparatus of the present application;
FIG. 9 is a block diagram of an alternative embodiment of a playback processing apparatus of the present application;
FIG. 10 is a block diagram of another embodiment of a playback processing apparatus of the present application;
FIG. 11 is a block diagram of another alternative embodiment of a playback processing apparatus of the present application;
FIG. 12 is a diagram of a hardware configuration of a device according to an embodiment of the present application;
FIG. 13 is a diagram of a hardware configuration of a device according to another embodiment of the present application;
FIG. 14 is a schematic diagram of an operating system according to an embodiment of the present application.
Detailed Description
To make the above objects, features, and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
In the embodiments of the present application, a terminal device refers to a device with multimedia functions, supporting audio, video, data, and related capabilities. Terminal devices in these embodiments may include a (smart) television, a mobile terminal, a personal computer, and the like, where mobile terminals include mobile phones, tablet computers, and wearable devices. The embodiments of the present application are applicable to various operating systems such as Android, iOS, YunOS, and TVOS.
A user can watch video data full-screen on a television device, and during playback the television device can display prompt information and interact with the user. For example, while video data is playing, a prompt such as "press the M key to remove the advertisement" is displayed, as shown in FIG. 1A. After the user issues an instruction via the M key, an interaction mode may be entered, such as the interface shown in FIG. 1B: the video data continues to play in the central first area, prompt information for the interaction mode is displayed in a second area, and the user can issue a response operation with a remote control device according to the prompt. The server can then determine whether the response operation satisfies a set condition and, once it does, change the display of set content on the terminal, for example canceling the display of the set content or shrinking its display window. This lets the user choose what to see and what not to see, such as eliminating advertisements or shrinking the window of promotional information. For example, objects played in the video data, such as advertisements or stars, can be captured via screenshot, and after the server analyzes a captured advertisement, the advertisement in the video can be changed. The user's operations therefore do not affect the normal playback of the video data.
Accordingly, in the embodiments of the present application, a trigger point can be set in the video data in advance. A trigger point is a node that triggers interaction; a device such as a television can use it to start an interaction with the user. Specifically, the image data containing a feature object is located in the video data, a time point is determined from the video frame corresponding to that image data, and a trigger point is set at that time point. One piece of video data may have one or more trigger points, set according to the same or different feature objects, so that one piece of video data may support one or more interactions. A feature object is a designated display object in the video data, including text, images, and the like, such as a passage of dialogue, lines, a star, or an advertisement.
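The frame-to-time-point mapping described above can be sketched as follows. This is an illustrative assumption, not taken from the embodiments: the data shapes and the `set_trigger_points` helper are hypothetical, and the detection of feature objects is assumed to happen upstream.

```python
from dataclasses import dataclass

@dataclass
class TriggerPoint:
    time_sec: float       # playback time point at which interaction is triggered
    feature_object: str   # designated display object (text, image, ad, star, ...)

def set_trigger_points(frame_hits, fps):
    # Map each video frame known to contain a feature object to a playback
    # time point, and place a trigger point at that time point.
    return [TriggerPoint(frame_index / fps, feature)
            for frame_index, feature in frame_hits]

# e.g. an embedded ad detected at frame 750 of a 25 fps video -> trigger at 30 s
points = set_trigger_points([(750, "brand_ad"), (3000, "star")], fps=25)
```

One piece of video data may thus carry several trigger points, each tied to the same or a different feature object.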
A terminal device such as a television can request video data and then play it. During playback, it may use the full-screen playing mode shown in FIG. 1A or the modal playing mode shown in FIG. 1B, and it can acquire prompt information according to a trigger point corresponding to the video data and display it, for example showing the video data and the prompt information together in a target interface. Response operations corresponding to the prompt information can then be received. Some response operations, such as taking a screenshot, are available in any playing mode; others, such as answering a question, require a non-full-screen playing mode. In one example, the terminal device displays the prompt information while playing in full-screen mode, for example after detecting the trigger point, and then enters a non-full-screen mode, such as the modal playing mode, in which the interaction is performed.
In an optional embodiment of the present application, after the playback time of the video data reaches the trigger point, a data request corresponding to the trigger point may be generated. Since different trigger points may correspond to different response operations, information such as the trigger parameters of the trigger point may be added to the data request before it is sent to the server. After receiving the data request, the server determines the corresponding prompt information according to the trigger point. The prompt information is related to the feature object corresponding to the trigger point and indicates an executable operation, for example prompting the user to take a screenshot or answer a question. The server sends the operation information for the response operation to the terminal device, such as a television, and can subsequently obtain the result of the response. During this process, the terminal device may continue to display the video data and other prompt information in the target interface, for example in any of the second areas of the interface shown in FIG. 1B.
According to the prompt information, the user can issue an instruction through the remote control device; the terminal device receives the corresponding response operation and feeds it back to the server, for example by sending interaction information. The server determines whether the response operation satisfies the preset condition, for example whether the required target object is present in the screenshot data, or whether the reply information is correct. After the response operation satisfies the set condition, the display of the set content is changed; for example, after an advertisement image has been captured, the advertisement in the video can be changed. The terminal device is then notified and changes the display of the set content accordingly. After the prompt information is displayed, the user can also obtain more detailed information through a trigger instruction issued by the remote control device; for example, when the prompt invites the user to answer a question, the corresponding question information can be obtained through the trigger instruction. The terminal device can carry out one or more interactions during video playback, so it can receive at least one instruction, each corresponding to an executable response operation, and the server can change the display of the set content based on the analysis results of one or more response operations.
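The server-side condition check described above can be sketched minimally as follows; the dictionary shapes and field names are illustrative assumptions.

```python
def satisfies_set_condition(response):
    # Judge whether a response operation meets the preset condition.
    if response["op"] == "screenshot":
        # e.g. the required target object must appear among the objects
        # recognised in the captured image data
        return response["target"] in response.get("recognised", [])
    if response["op"] == "reply":
        # e.g. the reply information must match the expected answer
        return response["text"].strip().lower() == response["answer"].strip().lower()
    return False

ok = satisfies_set_condition(
    {"op": "screenshot", "target": "brand_ad", "recognised": ["brand_ad", "star"]})
```

Only when the check passes does the server notify the terminal to change the display of the set content.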
The remote control device is any device capable of remotely controlling a terminal device such as a television, for example a mobile terminal such as a mobile phone, or a dedicated remote controller; it can communicate with the television terminal in one or more modes such as infrared, Bluetooth, and WIFI. Switching between the full-screen and non-full-screen playing modes, indicating response operations, and the like can be bound to a designated key, such as a fixed key on the remote controller; pressing the key issues a user indication, and different playing modes can be switched according to that indication.
In the embodiments of the present application, the non-full-screen playing mode can display other prompt information in the second areas besides the interaction prompt, such as star information, scenario information, user comments, and peripheral products. This prompt information is related to the currently played video data. For example, all or part of a video frame of the video data may be determined, such as by intercepting the video frame and determining the playback time point to intercept, in order to query the various associated objects related to the video and the video frame and obtain information corresponding to them: the introduction page of a star appearing in the frame, the merchandise page of clothing in the same style as the star's, the scenario introduction page of the video, the page of users' evaluations of the video, and the like.
As shown in FIG. 1B, the video data may be displayed in the center of the screen, with recommendation information and prompt information displayed around the video in second areas A-H. The information displayed around the video may belong to the same application type or to different application types. Each type of information can be regarded as an associated object of the video data, each associated object corresponds to an application type, and among at least two associated objects, at least one has an application type different from that of the currently played video data. The application type of the video data is the multimedia application type; recommended application types may include the multimedia application type, the web application type (e.g., a browser application or a news application), the business application type (e.g., a shopping application or a ticketing application), the game application type, and so on. For example, display interface A of an associated object shows star introduction information identified from the video frame; display interfaces B and C show information about other movies featuring the star; display interface D shows the interaction prompt information; display interface G shows information about TV shows featuring the star; display interface E shows evaluation information for the video data, such as the ratings and user reviews of the TV show or movie; and display interface F shows the merchandise page of items in the same style as the star's costume or peripheral products of the video.
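The second-area layout described above can be summarized in an illustrative table. The application-type labels below are assumptions; area H is mentioned in the description but its content is not detailed, so it is omitted here.

```python
# Illustrative layout table for the modal mode of FIG. 1B: video data in the
# central first area, associated objects in the second areas around it.
SECOND_AREAS = {
    "A": ("web",        "star introduction identified from the video frame"),
    "B": ("multimedia", "other movies featuring the star"),
    "C": ("multimedia", "other movies featuring the star"),
    "D": ("prompt",     "interaction prompt information"),
    "E": ("web",        "evaluation information for the video data"),
    "F": ("business",   "same-style merchandise page"),
    "G": ("multimedia", "TV shows featuring the star"),
}

VIDEO_APP_TYPE = "multimedia"

# The description requires that at least one associated object have an
# application type different from that of the currently played video data:
has_other_type = any(t != VIDEO_APP_TYPE for t, _ in SECOND_AREAS.values())
```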
In this way, the user can interact while watching the video and can view information about the various associated objects related to the video, which increases viewing interest and meets a variety of viewing needs. Besides interacting according to the prompt information, the user can click the display interface of an associated object to see its detailed recommended content: clicking evaluation information can display each user's reviews beside the video or jump to an evaluation web page, and clicking information about another movie featuring the star can jump to play that movie.
Based on the above scenario, the video-data-based interaction steps are discussed in detail below.
Referring to FIG. 2, a flowchart of the server-side steps of an embodiment of a playing processing method of the present application is shown. The method may include the following steps:
Step 202: set a trigger point for the video data in advance.
Step 204: send prompt information corresponding to the trigger point during playback of the video data.
Step 206: determine whether the response operation corresponding to the prompt information satisfies a set condition. If yes, proceed to step 208; if not, end the flow.
Step 208: change the display of the set content after the response operation satisfies the set condition.
The server can determine in advance the image data containing the feature object in the video data, determine a time point from the corresponding video frame, and set a trigger point at that time point. The feature object may be content originally present in the video data, such as a star in a video frame or an embedded advertisement, or may be an advertisement, text, or other content added to the video data afterwards; the embodiments of the present application do not limit this. When a device such as a television terminal needs to play the video data, the server sends the video data to it; during playback the television terminal sends a data request according to a trigger point, and the server receives the request. The server then obtains the trigger parameters from the data request. The trigger parameters correspond to the feature object, so the prompt information for that feature object can be obtained and sent to the terminal-side device such as the television terminal. The television terminal receives the corresponding response operation according to the prompt information, and the server determines whether the response operation satisfies the set condition, for example whether the required object is present in the captured image. Once the condition is satisfied, the display of the set content is changed: if the set content is an object inserted into the video data afterwards, it may be deleted; if it is original content of the video data, it may be blocked, for example by playing a mosaic over it.
The result can then be fed back to the terminal device, which changes the display of the set content accordingly, so that the user's response operation does not affect playback on the terminal.
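The server-side flow of steps 202 through 208 can be sketched as follows. The terminal stub, the callable names, and the data shapes are all illustrative assumptions standing in for the network protocol between server and television terminal.

```python
class StubTerminal:
    # Stand-in for a television terminal; records what the server sends it.
    def __init__(self, response):
        self.response = response
        self.log = []
    def show_prompt(self, prompt):
        self.log.append(("prompt", prompt))
    def get_response(self):
        return self.response
    def change_display(self, content):
        self.log.append(("changed", content))

def run_server_flow(triggers, playback_times, terminal, condition_met):
    # triggers: {time_point: (prompt_info, set_content)}, prepared in step 202.
    for t in playback_times:
        if t not in triggers:
            continue
        prompt, content = triggers[t]
        terminal.show_prompt(prompt)          # step 204: send prompt information
        response = terminal.get_response()    # response operation from terminal
        if condition_met(response):           # step 206: judge the set condition
            terminal.change_display(content)  # step 208: change the display

term = StubTerminal(response={"ad_marked": True})
run_server_flow({30.0: ("press M to remove the ad", "brand_ad")},
                [0.0, 30.0, 60.0], term,
                condition_met=lambda r: r.get("ad_marked", False))
```

When the condition is not met, the flow simply ends without changing the display, matching the "if not, end the flow" branch of step 206.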
Referring to FIG. 3, a flowchart of the terminal-side steps of an embodiment of a playing processing method of the present application is shown. The method may include the following steps:
Step 302: during playback of video data, acquire prompt information according to a trigger point corresponding to the video data.
Step 304: display the prompt information.
Step 306: receive a response operation corresponding to the prompt information, and change the display of the set content after the response operation satisfies the set condition.
A terminal-side device such as a television terminal can send a playing request to the server to acquire and play the required video data. The video data carries trigger points, which trigger the start of interaction. During playback, after the playback time reaches a trigger point, the terminal acquires the trigger parameters corresponding to that trigger point, generates a data request from them, and sends the request to the server to request the interaction content. The server obtains the prompt information according to the data request; after receiving it, the terminal displays the prompt information, which prompts the user to perform an operation and describes rewards that can be obtained. After the terminal receives the response operation corresponding to the prompt information, the server determines whether the response operation satisfies the set condition, for example whether the required object is present in the captured image, and then changes the display of the set content, for example by stopping the delivery of the set content to the terminal device or by changing how the set content is displayed within the video data. The display of target objects in the video, such as advertisements, can thereby be changed.
In summary, after the trigger point of the video data is reached, the server can provide prompt information corresponding to the trigger point to indicate an executable operation, so that the prompt information is displayed on the terminal device. The terminal device receives the response operation corresponding to the information, and the display of the set content is changed after the response operation satisfies the set condition. The terminal device can therefore perform the operation without affecting the normal playback of the video data, and the display of the set content can be changed through the operation, increasing the interest of watching the video.
In the embodiments of the present application, the user can perform various response operations with the remote control device and the television terminal, and different response operations can be performed for different operation types. The response operations include at least one of the following: a screen-capture indication operation, an acquisition indication operation, and a selection indication operation. The screen-capture indication operation intercepts the picture currently played on the screen, for example capturing the image data of the entire screen, or only the image data of the video within the interface. The acquisition indication operation obtains information input by the user, and the input information serves as the reply to the prompt information and the prompted interaction content. The selection indication operation selects an item from the prompt information displayed in the interface and uploads it as the reply information.
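The three operation types above can be sketched as a small dispatcher that turns a remote-control instruction into reply information; the payload fields are illustrative assumptions.

```python
def build_reply(op_type, payload):
    # Translate a remote-control instruction into reply information.
    if op_type == "screenshot":
        # screen-capture indication: intercept the currently played picture
        return {"op": "screenshot", "image": payload["frame"]}
    if op_type == "acquire":
        # acquisition indication: take the user-typed text as the reply
        return {"op": "reply", "text": payload["text"]}
    if op_type == "select":
        # selection indication: upload the option picked from the prompt
        return {"op": "reply", "text": payload["options"][payload["index"]]}
    raise ValueError("unknown operation type: " + op_type)

chosen = build_reply("select", {"options": ["Brand X", "Brand Y"], "index": 1})
```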
The interaction of the embodiment of the application can be applied to various scenes; advertisement interaction is taken as an example. Advertiser brand advertisements implanted in the video can be marked: an instruction is sent through a key of the remote control device, a screenshot is taken according to the instruction, and after the captured image data is confirmed to contain the advertisement image, the mark is confirmed as successful; after completing a certain number of marks, the user can receive rewards, such as a change to the advertisements in the video. For another example, question-and-answer content is set for advertiser brands appearing in the video data; the user can upload reply information through the remote control device and receive a reward after answering correctly. For another example, in the process of playing the video data, multiple-choice questions are set for advertiser brands in the video data, for example selecting the pictures in which the advertiser brand appears in the video; the user sends an instruction through the remote control device to choose an option, and a corresponding reward can be received when the option is correct.
Wherein changing the display of the set content comprises at least one of the following: canceling the display of the set content; shortening the display time of the set content; reducing the size of the display window corresponding to the set content; reducing the playing volume corresponding to the set content; reducing the resolution of the set content; and adding a mask to the set content. Canceling the display of the set content means, for example, deleting advertisements inserted in the middle and later parts of the video data. Shortening the display time means displaying the set content for less time, for example 10 seconds (s) instead of the normal 30 s, so that the display time of the advertisement data can be shortened and the viewing experience improved. Reducing the size of the display window corresponding to the set content means, for example, changing full-screen display into small-window display, where the size of the small window is smaller than the screen size and a size range can be set for the reduced window so as not to affect the playing of the video data; for example, in the target interface shown in fig. 1B, the video data is displayed in the first area and the set content is displayed in any second area. For set content with audio data, the playing volume corresponding to the set content can be reduced to a certain threshold or to 0, so that the playing sound of the set content is quiet and does not disturb the user. Reducing the resolution of the set content lowers its definition and also reduces the data volume. Adding a mask to the set content means shielding it with a mask such as a mosaic or a black screen; for example, an advertisement in the video data may be masked by a mosaic, a black screen, or the like.
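The display-change options listed above can be sketched as follows. This is an illustrative sketch, not code from the patent; all field names (`visible`, `duration_s`, and so on) and the specific values are assumptions.

```python
# Illustrative sketch: applying one of the listed display changes to a
# "set content" record. Field names and values are assumptions.

def change_display(content: dict, action: str) -> dict:
    """Return a copy of the set-content record with one display change applied."""
    c = dict(content)
    if action == "cancel":            # cancel the display entirely
        c["visible"] = False
    elif action == "shorten":         # e.g. 30 s normally, 10 s after shortening
        c["duration_s"] = min(c.get("duration_s", 30), 10)
    elif action == "shrink_window":   # full screen -> small window
        c["window"] = "small"
    elif action == "lower_volume":    # reduce toward a threshold, possibly 0
        c["volume"] = 0
    elif action == "lower_resolution":
        c["resolution"] = "low"
    elif action == "mask":            # mosaic, black screen, etc.
        c["mask"] = "mosaic"
    return c

ad = {"visible": True, "duration_s": 30, "volume": 80}
shortened = change_display(ad, "shorten")
```

The function returns a modified copy, which matches the indirect-change path where the server sends change data rather than altering the content stream itself.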
The server's changing of the display of the set content includes a direct change and an indirect change. In the direct change, the set content itself is changed by means such as canceling, masking, or reducing the resolution, and then the transmission of the set content is stopped, or the changed set content is transmitted to the terminal device, which displays it correspondingly. In the indirect change, the server notifies the terminal device of data such as time-reduction data, size-change data, and volume-change data, so that the terminal device changes the display of the set content according to that data.
Referring to fig. 4, a flowchart of steps at a terminal side in another embodiment of the playing processing method of the present application is shown, which may specifically include the following steps:
step 402, in the process of playing video data, judging whether a trigger point corresponding to the video data is reached according to playing time.
A television terminal or other device on the terminal side can send a playing request to the server side to play the required video data. The video data is provided with trigger points that can start interaction based on the video content, so that, in the process of acquiring and playing the video data, whether the current playing time has reached a trigger point can be judged in real time, that is, whether the playing time has reached a trigger point corresponding to the video data.
If yes, the trigger point corresponding to the video data has been reached, and step 404 is executed; if not, the trigger point has not been reached, and the process returns to step 402 to continue the judgment.
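The judgment of step 402 can be sketched as a comparison between the current playing time and the configured trigger points. This is a minimal sketch; the tolerance value is an assumption, not something the patent specifies.

```python
# Sketch of step 402: given the current playing time and the trigger points
# configured for the video, decide whether a trigger point is reached.

def reached_trigger(play_time_s: float, trigger_points_s: list, tol: float = 0.5):
    """Return the first trigger point within `tol` seconds of play_time_s, else None."""
    for t in trigger_points_s:
        if abs(play_time_s - t) <= tol:
            return t
    return None

triggers = [12.0, 95.0, 300.0]
hit = reached_trigger(95.2, triggers)   # near the 95.0 s trigger point
miss = reached_trigger(40.0, triggers)  # no trigger point nearby
```

In practice this check would run on every playback-position callback, matching the "judge in real time" wording above.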
Step 404, generating a data request according to the trigger parameter corresponding to the trigger point, and sending the data request.
After the playing time reaches a trigger point of the video data, a trigger parameter corresponding to the trigger point may be obtained. The parameter information may be set as required; for example, attribute information related to the trigger point may be set when the trigger point is created, and the trigger identifier of the trigger point may be used as the trigger parameter and added to the data request, which is then sent to the server to request the interactive content.
That is, the server can look up the attribute information corresponding to the trigger point based on the trigger identifier, such as the object identifier of the feature object, the interactive content, and the like, so as to obtain the prompt information and return it to the terminal side.
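Step 404 can be sketched as follows, carrying the trigger identifier as the trigger parameter in the data request. The request field names are illustrative assumptions.

```python
# Sketch of step 404: building the data request from the trigger parameter.
# Field names (video_id, trigger_id, type) are assumptions for illustration.

def build_data_request(video_id: str, trigger_id: str) -> dict:
    """Carry the trigger identifier so the server can look up attribute info."""
    return {
        "video_id": video_id,
        "trigger_id": trigger_id,
        "type": "interaction",
    }

req = build_data_request("v001", "trig-95")
```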
Step 406, receiving the prompt message.
And step 408, displaying the prompt message.
The television terminal can receive the prompt information returned by the server and then display it, for example by playing the video data and the interaction prompt information simultaneously in the target interface, so that the user can interact while continuing to watch the video, improving the enjoyment of viewing. In the embodiment of the application, the interaction prompt information and the video data can be laid out in various ways in the interactive interface; for example, the interaction prompt information can be displayed while the video data is played in full screen, or it can be displayed in other ways. In other words, while the video data is displayed, the screen may also display viewing-related prompt information, which may be a prompt about the video content or about other related content, such as the commodity corresponding to an advertisement captured on the screen, shopping coupons, viewing coupons, and the like.
One display mode is as follows: the video data is played full screen in the target interface, and the interaction prompt information is displayed over it. The video data can be played in full-screen mode in the television terminal, that is, displayed across the whole screen, so the prompt information can be displayed in the target interface via a floating window, a playing component, or the like. According to the specific interactive content, the corresponding prompt information is displayed, for example "mark the picture where XX appears 3 times with the M key to skip advertisements for one day", or "the XX picture has been marked 2 times; mark it 1 more time to skip advertisements for one day", and the like.
Another display mode is as follows: the video data is played in a first area of the target interface, and the prompt information is displayed in a second area. The video data can also be played in the television terminal in a non-full-screen mode: a first area and at least one second area are provided in the target interface, the video data is played in the first area, and prompt information, including the prompt information above and other prompt information, is displayed in the second area.
The layout of the first area and the second areas in the target interface can be a default layout or can follow a display template. The display template is provided with a first area and at least one second area; the video data can be played in the first area of the interface, for example by a playing component, and each second area can also correspond to a playing component, so that information such as the display interface of an associated object can be presented in the second areas. The video can thus be switched, based on a designated key, from full-screen playing to playing in the first area or back again, and the playing component can control the size, the position, and so on of the video playing interface. One main interface may display two or more windows that play video data and display associated objects. The display interfaces of different associated objects can also be switched among the second areas; the switching can follow the user's instruction, the user's habits, and the like.
A display diagram of the non-full-screen playing mode is shown in fig. 1B, where the video data is displayed in a first area in the middle of the target interface; of course, the first area may also be placed at the four corners or other positions of the target interface, which is not limited in this embodiment. When the video data is played, second areas can be arranged at other positions of the target interface, each second area can carry one display component, and the display components are used to display the prompt information, with one display component corresponding to one prompt message.
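The first-area/second-area layout just described can be sketched with a simple dictionary representation; the positions and field names are made up for illustration and are not from the patent.

```python
# Hypothetical sketch of the non-full-screen layout of fig. 1B: one first area
# for the video, and one display component per second area, one prompt each.

def build_layout(prompts: list) -> dict:
    layout = {
        "first_area": {"content": "video", "pos": "center"},
        "second_areas": [],
    }
    for i, prompt in enumerate(prompts):
        # one display component per second area, one prompt per component
        layout["second_areas"].append({"component": i, "prompt": prompt})
    return layout

layout = build_layout(["mark the XX picture", "evaluation info"])
```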
And step 410, receiving a response operation corresponding to the prompt message.
Step 412, obtaining corresponding operation information according to the response operation, and sending the operation information.
After the prompt information is displayed in the interface, the user can execute a response operation according to it: an instruction can be sent by the remote control device, and the terminal device correspondingly receives the response operation corresponding to the prompt information. The instruction can be sent by a designated key of the remote control device or by other keys, which can be set as required. The terminal device can execute the corresponding response operation according to the instruction, such as screenshot marking or answer uploading. One prompt message can correspond to one or more response operations; for example, if the prompt information asks the user to mark advertisement A appearing in the video 5 times, 5 or more response operations can be received, and the image displayed on the screen is captured for each trigger indication.
The operation type is determined according to the prompt information, and the response operation corresponding to the operation type is executed; then the operation information corresponding to the response operation is acquired and sent. The corresponding operation type may be determined according to the prompt information, and the operation type may be carried in an attribute of the prompt information, such as a mark type, a question type, or a selection type. Thus, for different operation types, the response operation of the corresponding type can be executed after the trigger operation is received, and the operation information corresponding to the response operation can be acquired. The prompt information may indicate different keys for different operation types, or of course the same key; after the indication is received, the interaction type can be determined from it and the corresponding response operation executed.
The response operation includes at least one of: screen capture indicating operation, acquisition indicating operation and selection indicating operation. The step of obtaining corresponding operation information according to the response operation comprises at least one of the following steps:
and capturing the image data currently displayed for the video data according to the response operation, and using the captured image data as the operation information. That is, after the response operation is received, the image data currently played in the video data may be captured according to it; for example, time information may be determined from the response operation and the image data of the corresponding video frame acquired according to that time information, or of course the image data currently played on the screen may be captured through a screen capture operation. The image data is then used as the operation information, which may further include information such as an operation identifier to facilitate matching by the server.
And acquiring the uploaded reply information according to the response operation, and using the reply information as the operation information. Some prompt information is a question, for example "who is the spokesperson of the XX brand?". For such prompt information, the user can input an answer through the remote control device and upload it, so that, after the response operation is received, the uploaded reply information can be obtained from it and used as the operation information, which may also include information such as an operation identifier to facilitate matching by the server.
And selecting reply information according to the response operation, and using the reply information as the operation information. Some prompt information carries questions with selectable answers, that is, multiple-choice questions; a response operation carrying the chosen answer can be sent through the remote control device, so the operation information derived from the response operation may also include information such as an interaction identifier to facilitate matching by the server.
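The three branches above can be sketched together as one dispatch producing the operation information. The `capture_screen` stub stands in for a real screenshot interface and is an assumption, as are the field names.

```python
# Sketch of step 412: building operation information for each response type.
# capture_screen is a placeholder for a real screen-capture call (assumption).

def capture_screen():
    return b"<image-bytes>"   # stand-in for the intercepted image data

def operation_info(op_type: str, op_id: str, payload=None) -> dict:
    info = {"op_id": op_id}      # operation identifier for server matching
    if op_type == "screenshot":  # screen capture indicating operation
        info["image"] = capture_screen()
    elif op_type == "answer":    # reply info typed in via the remote control
        info["reply"] = payload
    elif op_type == "select":    # chosen option of a multiple-choice question
        info["choice"] = payload
    return info

shot = operation_info("screenshot", "op-1")
ans = operation_info("answer", "op-2", "celebrity name")
```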
And 414, changing the display of the setting content after the response operation meets the setting condition.
The server side can determine the corresponding setting condition according to the operation information, use the response information to judge whether the corresponding response operation satisfies the setting condition, change the display of the set content once the condition is satisfied, and feed the result back to the terminal device, so that the display of the set content in the terminal device can be changed. After the response operation satisfies the set condition, the server can also obtain corresponding result prompt information and feed it back to the terminal device, which displays it. The result prompt information is the prompt of the interaction result, such as reward information for completing the interaction, or other interaction requirements that still need to be fulfilled. Of course, when the response operation does not satisfy the set condition, a result of interaction failure can also be returned to the terminal side as the interaction result information, so as to prompt the user that the response operation was unsuccessful and to continue executing other response operations. For example, if the target object in the video data needs multiple interactions to be changed, result prompt information can be fed back before the required number of interactions is reached, prompting the user how many times the interaction has been executed and whether it succeeded.
Therefore, after receiving the result prompting message, the terminal side can display the result prompting message, so as to prompt the user of the result of the interaction, such as the reward obtained by successful interaction, other response operations that need to be executed, the result of unsuccessful interaction, and the like.
Wherein the result prompt information includes: user level promotion information, electronic ticket information, commodity acquisition information, advertisement exemption information, and other result information. After the interaction succeeds and conditions such as the required number of times are met, corresponding rewards can be obtained, such as a user level promotion, electronic coupons, commodities, or advertisement exemption, and the rewards can also be configured with corresponding time limits, for example one month of VIP membership, the validity period of the electronic coupons, the valid time for collecting the commodities, or the duration of advertisement exemption, where the electronic coupons include discount coupons, viewing coupons, and the like. The result prompt information may also include other result information, such as a failure result or an incompletion prompt.
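As a hedged illustration of the server-side condition check and result prompt, the sketch below counts successful advertisement marks toward a reward. The mark threshold, prompt strings, and the image-matching stub are all assumptions for the sketch.

```python
# Illustrative server-side check: the user must mark an advertisement
# REQUIRED_MARKS times before the reward is granted. All values assumed.

REQUIRED_MARKS = 3

def ad_appears_in(image: bytes) -> bool:
    return b"AD" in image          # stand-in for real image recognition

def handle_mark(state: dict, image: bytes) -> dict:
    """Count a successful mark; return result prompt information either way."""
    if ad_appears_in(image):
        state["marks"] = state.get("marks", 0) + 1
        if state["marks"] >= REQUIRED_MARKS:
            return {"success": True, "prompt": "ad-free for one day"}
        left = REQUIRED_MARKS - state["marks"]
        return {"success": True, "prompt": f"{left} more mark(s) needed"}
    return {"success": False, "prompt": "mark unsuccessful, please retry"}

state = {}
for _ in range(3):
    result = handle_mark(state, b"frame-with-AD")
```

Note how an intermediate result prompt is returned before the required count is reached, matching the "prompt the user how many times" behavior described above.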
In the embodiment of the application, in the non-full screen mode, the second area may display, in addition to the prompt information, other recommendation information, for example, information of an associated object related to playing a video. The associated object may be determined based on all or part of the video frame to which the video data corresponds. Each associated object can correspond to an application type, and the application type corresponding to at least one associated object in the at least two associated objects is different from the application type of the currently played video data; the information displayed by the display interface at least comprises: information related to the characteristic information of the video data, or an information overview related to the characteristic information of the video data. All or part of the video frames corresponding to the video data can be determined in various ways, such as capturing the video frames of the video data, and recording the playing time points needing to be captured.
The server side can determine at least two corresponding associated objects according to the video frame, where the associated objects include an object related to the video frame, an object related to the video data, and an object related to the prompt information, such as interactive task information corresponding to the prompt information. Each associated object corresponds to an application type, the application type corresponding to at least one associated object in the at least two associated objects is different from the application type of the currently played video data, and the application type is determined according to the application bearing the associated object information. For example, the associated object is video data and audio data, which are borne by the player application, and the corresponding application type is a multimedia application type; if the associated object is information such as news and evaluation, the information is borne by the browser or an application of a corresponding provider and can be used as a webpage application class; shopping information is carried by shopping applications, and can be used as business application classes and the like. The embodiment may acquire information related to the feature information of the video data corresponding to the associated object, generate a data acquisition result by using the associated object and the information related to the feature information, and return the data acquisition result to the television terminal.
The second area can adopt a display component to display a display interface of the associated object, and the information displayed by the display interface at least comprises: the information overview is used for summarizing the information related to the characteristic information, such as thumbnails, summary information, titles and the like of the information related to the characteristic information. Therefore, the user can see the video, the prompt information and other various related objects related to the video on the screen, the film watching effect is improved, and the user requirements are met.
In addition, after a trigger for an associated object is received, the content of the associated object can be determined according to the corresponding indication information and then displayed on the television terminal, for example switching to another video, displaying user evaluation information beside the video data, or jumping to the commodity detail page of a recommended commodity; specific interactive task content can also be displayed, thereby improving the interactivity of viewing.
Although there exist schemes that display a video separately from the application interface, such schemes usually display the video data in a floating window, so that the video data can be shown over the system interface or other application interfaces; however, the video window is unrelated to those interfaces and may block part of their information, affecting the user's ability to read it.
In the embodiment of the application, the indication can be triggered by a designated key arranged on the remote control device. Therefore, the prompt information can also prompt the user to click the XX key to view. The remote control device may include various devices such as a remote controller and a mobile terminal. The remote controller can communicate with the television terminal via infrared or the like; the designated key can be a key on the remote controller, and the user instruction is sent by clicking it. The mobile terminal can use an APP to communicate with the television terminal wirelessly, so a user interface for television control can be provided in the APP, a button on that interface can serve as the designated key, and triggering the button sends the user instruction.
The second areas may display various prompt information, including the prompt information above and other prompt information, via display components, and a display component may have an expanded mode and a collapsed mode. The expanded mode displays the complete content of the prompt information, while the collapsed mode displays preview content, such as pictures or preview text. In this embodiment, the associated object may include display information and description information, where the display information is visual material such as pictures in the feature-related information of the video data, and the description information is the content describing the associated object, such as a text introduction or a title; both can be shown in the expanded mode. An information overview related to the feature information of the video data may also be determined and displayed in the collapsed mode.
In an optional embodiment, when a display component is used to display the prompt information, the display mode of the component may be determined according to the focus position. If the focus is located on the display component, the component uses the expanded mode to display the complete prompt information, such as the full title and options, as well as the complete interaction requirement; if the focus is not located on the component, it displays the overview of the prompt information in the collapsed mode, for example only all or part of the title without options, or only a note that interaction is possible without the detailed interaction method. Here, focus refers to the area of attention, such as the position where the cursor is currently active or the position currently selected by touch. As shown in fig. 5, on the screen of the television terminal, when the focus is on the display component showing the prompt information, that component is in the expanded mode, and the description information and the like can be displayed in the dashed-box area together with the information overview of the prompt information. For example, if the prompt information is an interactive multiple-choice question, all or part of the question may be displayed in the collapsed mode, while the whole question and the specific options are displayed in the expanded mode. The other display components, on which the focus is not located, continue to display their associated objects in the collapsed mode.
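The focus rule described above can be sketched as follows; the component fields (`full`, `overview`) are assumptions for illustration.

```python
# Sketch of focus-driven display modes: the focused display component expands
# to show the complete prompt; all others collapse to an overview.

def render(components: list, focus_index: int) -> list:
    views = []
    for i, comp in enumerate(components):
        if i == focus_index:      # expanded mode: full title plus options
            views.append({"mode": "expanded", "text": comp["full"]})
        else:                     # collapsed mode: overview only
            views.append({"mode": "collapsed", "text": comp["overview"]})
    return views

comps = [
    {"full": "Q1: who endorses XX? A/B/C", "overview": "Q1..."},
    {"full": "mark the XX picture 3 times", "overview": "mark..."},
]
views = render(comps, focus_index=0)
```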
Other prompt information, that is, prompt information corresponding to various different associated objects, such as evaluation information 2, multi-camera video data 3, screenshot sharing data 4, interaction task information 5, guess information 6, weather application information 7, video related object information 8, and the like, may also be displayed in other second areas in fig. 5.
In the embodiment of the application, various associated objects can be pushed to a user when a video is played, and the associated objects include at least one of the following categories: viewing enhancement type, chatting type, interaction type, service type and application type.
The associated object of the viewing enhancement class is an associated object for improving the viewing effect; it is related to the played video data, so it can raise the user's interest in viewing and meet various viewing needs. The associated object of the viewing enhancement class comprises at least one of the following: associated video data, associated audio data, evaluation information, video description information, multi-camera video data, and video related object information. The associated video data includes other video data associated with the video, for example behind-the-scenes footage and trailers of the video, other movies, TV series, and variety shows featuring stars of the video, and other works by the director. The associated audio data includes other audio data associated with the video, such as the opening theme, ending theme, episode music, and background music of the video. The evaluation information includes evaluation data of the video, such as scores and comments from users on various review and video websites, comments by professional reviewers, and review data posted by users on social websites. The video description information includes description information of the video data, such as cast and crew information, plot introductions, episode synopses, and update/completion status. The multi-camera video data comprises video data shot from multiple camera angles, such as different camera positions in a live broadcast, or in large-scale performances such as concerts and galas.
For example, video data shot by camera positions 1-9 can be displayed on the screen, and the user can select video data from one or more angles. Levels can also be set for the video data of different camera positions, so that users of different levels can select different positions; for example, ordinary users can select positions 1-6, while VIP users can select all positions. In addition, the viewing enhancement information may further include video related object information, such as data on the video's peripheral products, or pages for clothing in the same style as worn by the stars in the drama.
The associated object of the chat class refers to an object related to chat communication executed in the process of playing the video, and comprises at least one of the following objects: chat content data, screenshot sharing data and video sharing data. The chat content data comprises chat data sent by a user through various instant messaging modes, such as chat data sent through an instant messaging APP. The screenshot sharing data comprises data shared by the video screenshot, for example, screenshot sharing indication information, and accordingly the corresponding triggered recommended content is screenshot sharing information, such as information of a link address of a screenshot data storage position, a two-dimensional code and the like. The video sharing data includes sharing information of the video data and/or the associated video data, such as video sharing indication information, so that the user can share one or more videos with the associated friend user.
The associated object of the interactive class refers to an object interacted with through various interactive modes, and it comprises at least one of the following: guessing information, voice barrage data, interactive object information, and interactive task information. The guessing information includes guesses about the content played in the video data, such as guessing the result of a broadcast singing competition, or guessing on broadcast matches such as football and basketball; the guessing options can be displayed by a display component, as can various other guessing information such as robot voice guessing. The voice barrage data comprises barrage data input by voice: voice data can be received, converted into text content, and displayed over the video data as a barrage. The interactive object information includes information on the business objects of the interaction, such as red envelope information, movie ticket benefits, and VIP memberships. The interactive task information includes information on interactive tasks, for example a task of capturing a specified object in the video: an advertisement picture may appear in the video at varying times, and the task is to capture X screenshots containing the advertisement picture.
The associated object of the application class refers to an object related to an associated application, and it comprises at least one of the following: sports application information, game application information, timing prompt information, weather application information, and news application information. The sports application information includes information from various exercise applications, so that the user can exercise while watching a video, for example running while watching; the exercise application can record information such as the step count and heart rate of the run. The game application information includes information of game applications, which may be related to the video, such as a game adapted from the video or the original game on which the video adaptation is based, or other games. The timing prompt information includes various time prompts, which can be alarm clocks or other times set by the user, or general reminders, such as prompting the user to eat, to go to sleep, or to send an email at a certain time. The weather application information refers to weather-related prompts, such as the weather at the located position acquired by a weather application, weather warnings, and dressing indexes. The news application information includes push information of current affairs, such as today's headlines, hot news, and entertainment gossip.
The video related object information, the interactive object and the like can comprise various operation information such as advertisements, shopping information and purchase links, so that various objects can be associated with video frames to recommend various kinds of information. Thus, when watching a video, a user can enter a "watch while doing X" mode through a designated key, viewing the video while simultaneously viewing information of various associated objects, such as various push information, so as to meet the user's various requirements, where X stands for the various operations executed or the various kinds of information displayed according to the user's requirements.
The foregoing mainly discusses the interaction and display steps on the television terminal side. Setting a trigger point for video data in advance includes: determining a time point corresponding to a video frame where a characteristic object is located in the video data; and setting a corresponding trigger point at that time point. Determining the time point corresponding to the video frame where a feature object is located includes: identifying the image data corresponding to each video frame in the video data and determining a characteristic object; and determining the image data where the characteristic object is located and the time point of the video frame corresponding to that image data. The steps of setting the trigger point at the server side are as follows:
referring to fig. 6, a flowchart illustrating steps of an embodiment of a method for setting a trigger point according to the present application is shown, which may specifically include the following steps:
step 602, identifying image data corresponding to each video frame in the video data, and determining a feature object.
Step 604, determining image data where the feature object is located and a time point of a video frame corresponding to the image data.
Step 606, setting a corresponding trigger point at the time point.
Trigger points can be set in the video data in advance, so that the interaction can be triggered conveniently in the playing process. The trigger point is related to a feature object displayed in a video frame corresponding to the video data, so that a time point corresponding to the video frame where the feature object is located in the video data can be determined, and then the corresponding trigger point is set at the time point.
In this embodiment, the feature object is one of contents displayed by video data, so that image data corresponding to each video frame in the video data can be identified, the video frame of the image data where the feature object is located is determined by the identification, the time point of the video frame is determined, and then the trigger point is set at the time point. Of course, the feature object may also be added to the video data, and then the video frame corresponding to the image data to which the feature object is added and the time point corresponding to the video frame may be recorded after the feature object is added, so that the subsequent addition of the trigger point is facilitated.
In the embodiment of the present application, a plurality of feature objects may exist in one piece of video data, and the same or different feature objects may be displayed in different image data. Both the feature objects identified in the video data and the feature objects inserted into the video data may therefore be recorded, while trigger points are set as required. That is, a trigger point may be set for all or only part of the feature objects identified in the video data; for example, for the same feature object, the trigger point may be set when the feature object is displayed for the first time, and the other time points at which the feature object is displayed may simply be recorded, which facilitates determining the response operation in subsequent interactions. Alternatively, the video data may be divided into several time periods, with only one trigger point set for a feature object in each time period.
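The trigger point setting of steps 602 to 606 can be sketched as follows. This is a minimal illustration, not the patent's implementation: `detect_feature_objects` is a hypothetical stand-in for the video image recognition step (simulated here with a prebuilt mapping), and the frame rate and object names are illustrative assumptions.

```python
# Sketch of steps 602-606: scan video frames, find feature objects,
# and set a trigger point at the time of each object's first appearance.

FPS = 25  # assumed frame rate

def detect_feature_objects(frame_index):
    # Hypothetical recognition result: frame index -> objects recognized.
    simulated = {50: ["XX-brand beverage"], 120: ["star A"], 300: ["XX-brand beverage"]}
    return simulated.get(frame_index, [])

def set_trigger_points(total_frames):
    trigger_points = {}   # feature object -> time point of first appearance
    appearances = {}      # feature object -> all time points (recorded for later matching)
    for i in range(total_frames):
        for obj in detect_feature_objects(i):   # step 602: identify feature objects
            t = i / FPS                          # step 604: time point of the video frame
            appearances.setdefault(obj, []).append(t)
            if obj not in trigger_points:        # trigger only on first display
                trigger_points[obj] = t          # step 606: set the trigger point
    return trigger_points, appearances

triggers, seen = set_trigger_points(500)
```

The later appearances of the same feature object are kept in `appearances` rather than triggering again, mirroring the first-display rule described above.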
In the embodiment of the application, various characteristic objects such as related scenes, commodities, stars (characters) and brands can be identified by utilizing a video image recognition and analysis technology based on the video content displayed by the television terminal.
The method is applied to advertisement interaction, and resources such as scenes, commodities, stars and brands can be packaged and provided for advertisers. For example, a beach scene package may be provided to advertisers associated with that scene, a star resource package may be provided to advertisers associated with that star's endorsements, and so on. In addition, information such as estimated traffic and interactive advertising modes can be provided to the advertiser to facilitate selection.
Advertisers can be divided into two categories: those who have already placed advertisements in the related movies and television shows, and those who have not. For advertisers who have not implanted advertisements, the episodes and the relevant time points can be identified through the video image recognition and analysis technology, and advertisements can be placed into the relevant scenes to form post-implanted advertisements, where the implanted advertisements include but are not limited to pictures (static and/or dynamic), text copy and the like. Then, according to the time points or even trigger points corresponding to an advertiser's advertisements, when an episode carrying that advertiser's placements is played on the television terminal, the user is reminded to participate in the relevant interactive advertisement.
Referring to fig. 7, a flowchart illustrating steps of another embodiment of a server side of a playing processing method according to the present application is shown, which may specifically include the following steps:
step 702, sending the video data.
Step 704, a data request is received.
Step 706, determining a corresponding trigger point according to the data request, and determining prompt information of a corresponding feature object according to the trigger point.
Step 708, sending the prompt message of the feature object.
When the terminal side requests to play the video, the server side can obtain the requested video data and return the video data. Then, in the process of playing the video, the terminal side sends a data request to the server side according to the trigger point, after receiving the data request, the server side can acquire a trigger parameter, such as a trigger identifier, from the data request, so that a feature object corresponding to the trigger point can be determined according to the trigger parameter, then prompt information corresponding to the feature object, such as various information including interactive content, an operation mode and the like, is acquired, and then the prompt information is sent to the terminal side.
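The server-side handling of steps 704 to 708 can be sketched as follows. This is a minimal sketch under illustrative assumptions: the trigger table, trigger identifiers and prompt strings are invented for the example, not taken from the patent.

```python
# Sketch of steps 704-708: the server maps a trigger identifier carried
# in the data request to its feature object and prompt information.

TRIGGER_TABLE = {
    "trig-001": {
        "feature": "XX-brand beverage",
        "prompt": "Mark the XX-brand picture 3 times with the M key to skip ads for a day",
    },
}

def handle_data_request(request):
    trigger_id = request["trigger_id"]      # trigger parameter acquired from the request
    entry = TRIGGER_TABLE.get(trigger_id)
    if entry is None:
        return {"status": "unknown trigger"}
    # Prompt information carries the interactive content and operation mode,
    # and is sent back to the terminal side (step 708).
    return {"status": "ok", "feature": entry["feature"], "prompt": entry["prompt"]}

resp = handle_data_request({"trigger_id": "trig-001"})
```

The terminal then displays `resp["prompt"]` during playback, as described in the terminal-side embodiments.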
Step 710, receiving operation information of response operation corresponding to the prompt information.
Step 712, determining a corresponding setting condition according to the prompt information, and determining whether the corresponding response operation satisfies the setting condition by using the operation information.
After receiving the prompt information, the terminal side can determine operation information according to the received response operation and then send the operation information to the server side, and after receiving the operation information, the server side can determine corresponding prompt information according to the operation information so as to determine the set condition of the type corresponding to the prompt information, and then judges whether the corresponding response operation meets the set condition of the type by adopting the operation information.
If yes, go to step 714; if not, go to step 716.
Wherein, the corresponding setting conditions are different according to different types. In an optional embodiment, the determining whether the corresponding response operation satisfies the set condition by using the operation information includes: acquiring intercepted image data from the operation information, and identifying the image data; and judging whether the target characteristic object exists in the identification result. The intercepted image data can be obtained from the operation information, then the intercepted image data is identified, whether a target characteristic object can be identified or not is determined, the target characteristic object is a characteristic object corresponding to the trigger point, the target characteristic object is confirmed to meet the set condition after being identified, and otherwise, the set condition is not met. For example, if the interactive content is a beverage marked with brand XX, the screenshot can identify whether the beverage marked with brand XX exists in the image, so as to determine whether the marking is successful.
In another optional embodiment, determining whether the corresponding response operation satisfies the set condition by using the operation information includes: acquiring reply information from the operation information, and judging whether the reply information meets the set condition. The reply information can be obtained from the operation information, and it is judged whether the reply information matches the correct answer; if so, the set condition is confirmed to be met, otherwise it is not. For example, it is checked whether the reply is the correct answer B, or whether the reply corresponds to the name of the spokesperson of the XX brand.
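The two set-condition checks described above — the screenshot check and the reply check — can be sketched as follows. This is an illustrative sketch only: `recognize_objects` is a hypothetical stand-in for image recognition, and the object and answer values are invented.

```python
# Sketch of the two set-condition checks: does the intercepted image
# contain the target feature object, or does the reply match the answer?

def recognize_objects(image_data):
    # Hypothetical recognizer stand-in: pretend the image payload is a
    # comma-separated list of recognized object names.
    return image_data.split(",") if image_data else []

def check_screenshot(operation_info, target_object):
    # Condition met if the target feature object (the one corresponding
    # to the trigger point) is found in the intercepted image.
    objects = recognize_objects(operation_info.get("image", ""))
    return target_object in objects

def check_reply(operation_info, correct_answer):
    # Condition met if the reply information matches the correct answer.
    return operation_info.get("reply") == correct_answer

ok1 = check_screenshot({"image": "XX-brand beverage,table"}, "XX-brand beverage")
ok2 = check_reply({"reply": "B"}, "B")
```

A real deployment would replace `recognize_objects` with the server's image recognition service; the condition dispatch itself stays the same.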
In step 714, the display of the setting content is changed.
In one example, the feature object corresponding to the trigger point in the video data may be determined as a set content, and the feature object in the video data is changed, that is, a part of the feature object in the video data is used as a set content, for example, a feature object such as an advertisement, so that the feature object in the video data may be changed after a set condition is met, for example, an advertisement implanted in a later period in the video data is eliminated, or a mosaic is added to an advertisement implanted when the video data is shot, and the like.
In another example, the number of times that the response operation satisfies a set condition is counted; and changing the set content from the video data when the times reach a time threshold value. That is, after determining that the response operation satisfies the interaction condition, the number of times that the setting condition is satisfied may be recorded, that is, the number of times that the setting condition is satisfied is correspondingly recorded in one video data playing process, then, after the number of times reaches a number threshold, the setting content is changed from the video data, for example, the feature object as the setting content, including the object inserted later and/or the original content of the video, is deleted from the video data, and then the video data from which the feature object is eliminated is sent to the terminal device. Therefore, the advertisement can be removed through operations such as screenshot, question answering and the like.
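The counting logic above can be sketched as follows. This is a minimal sketch under stated assumptions: the threshold of 3 and the per-frame `"ad"` field are illustrative, not the patent's data model.

```python
# Sketch: count successful responses within one playback session and
# change the set content (remove the implanted advertisement) once the
# count reaches the times threshold.

TIMES_THRESHOLD = 3  # assumed threshold

class AdRemovalSession:
    def __init__(self, frames):
        self.frames = frames        # frame dicts, some carrying an "ad" entry
        self.success_count = 0
        self.ad_removed = False

    def on_condition_met(self):
        self.success_count += 1
        if self.success_count >= TIMES_THRESHOLD and not self.ad_removed:
            for frame in self.frames:      # change the set content:
                frame.pop("ad", None)      # eliminate the implanted advertisement
            self.ad_removed = True

session = AdRemovalSession([{"ad": "XX"}, {}, {"ad": "XX"}])
for _ in range(3):
    session.on_condition_met()
```

After the threshold is reached, the server would send the ad-free video data back to the terminal device, as described above.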
And step 716, generating result prompt information.
Step 718, sending the result prompt message.
After confirming that the set condition is not satisfied, result prompt information corresponding to the interaction failure can be generated to prompt the user that the interaction was unsuccessful, and the result prompt information can be sent.
In addition, after confirming that the set condition is satisfied, result prompt information may also be generated; this information may indicate the success of the operation or the number of times it has been performed, so as to inform the user. After confirming that the set condition is met, result prompt information of the feature object corresponding to the trigger point is determined and sent. Here, result prompt information corresponding to a successful interaction can be generated. After the interaction succeeds and conditions such as the required number of times are met, a corresponding reward can be obtained, such as a user level upgrade, electronic coupons, commodities or advertisement exemption; the reward can also be configured with a corresponding time limit, for example the user is rewarded with one month of VIP membership, or the validity period of electronic coupons, the valid time for collecting commodities, or the duration of advertisement exemption is determined, where the electronic coupons include discount coupons, viewing coupons and the like.
Whether the prompt information needs to have other interaction conditions and the like can be judged, for example, the times, time and the like of successful response operation are obtained, and the corresponding other interaction conditions which need to be met are obtained and added into the result prompt information to prompt the user. Certainly, after all the conditions are met, the reward information corresponding to the interaction can be acquired as result prompt information.
The server can identify resources such as scenes, commodities, stars, brands and the like based on the video data, so that various interactions are provided; and the video data can be played to a trigger point and then prompt information is provided for the user, so that after a trigger instruction is received and response operation is executed, whether the response operation is successful or not can be determined according to the operation information, corresponding interaction result information is returned, and the user can conveniently execute various interactions.
In the embodiment of the application, the server may have a timeline task module, which can generate a time-based resource library for each film or show; the resource library may include resources such as related scenes, commodities, stars and brands. The resources can be identified and acquired after the video data is obtained, or identified in real time during the first playback of the video data, with the resource information, such as specific names, identifiers and times, cached so that real-time recognition is not required during subsequent playbacks. During video playing, the TV terminal can then execute the playing interaction processing according to the trigger points and the like set in the video data, and display it on the TV terminal. In this way, a corresponding timeline and resource library can be formed for each video, and the trigger points required for interaction are set based on the timeline and resource library, realizing interaction during video playing.
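The timeline task module described above can be sketched as a small cache keyed by video and time. All entries, the lookup window, and the method names are illustrative assumptions, not the patent's API.

```python
# Sketch of a time-based resource library: cache each identified
# resource with its name, category and time so later playbacks need
# no real-time recognition.

from collections import defaultdict

class TimelineResourceLibrary:
    def __init__(self):
        self.by_video = defaultdict(list)

    def add_resource(self, video_id, time_point, category, name):
        self.by_video[video_id].append(
            {"time": time_point, "category": category, "name": name})

    def resources_at(self, video_id, time_point, window=1.0):
        # Return cached resources near a given playback time point.
        return [r for r in self.by_video[video_id]
                if abs(r["time"] - time_point) <= window]

lib = TimelineResourceLibrary()
lib.add_resource("ep01", 2.0, "commodity", "XX-brand beverage")
lib.add_resource("ep01", 4.8, "star", "star A")
hits = lib.resources_at("ep01", 2.5)
```

On a second playback of `ep01`, the terminal can query this cache instead of re-running recognition, which is the point of caching the resource information.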
In the embodiment of the application, the server can also be provided with an account module, and the account module can record interaction completion conditions, such as advertisement task completion conditions and user advertisement preferences, in the interaction execution process, so that the capability of intelligently recommending advertisements is provided.
The server may determine the video frame corresponding to the played video data; for example, the video frame may be obtained directly from the data acquisition request, or determined according to the video and its playing time, or according to the video identifier and the characteristic value of the video frame. The video frame is then matched with tags, where the tags may be obtained by recognizing the video frame. A tag is tag data used for matching associated objects, and the embodiment of the present application may use various tags, for example category tags such as star, scenario or interaction classes, or detailed content tags such as a tag carrying a star's name, or a tag carrying the name of a TV series or a movie. In short, the keyword needed for matching can be determined through the tag, the database is then queried according to the keyword, i.e. the tag, and at least two associated objects are matched from the database. A data acquisition result is generated from the at least two associated objects and fed back to the television terminal for display. In this way, various associated objects can be provided to the television with the assistance of the server, meeting the users' requirements.
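The tag-to-associated-object matching just described can be sketched as follows. The in-memory data set, tag names and objects are all illustrative assumptions standing in for the database query.

```python
# Sketch: derive keywords from a video frame's tags, query a data set,
# and return the matched associated objects (at least two, per the text).

ASSOCIATED_OBJECTS = [
    {"tags": {"star A"},   "object": "star A profile"},
    {"tags": {"star A"},   "object": "star A filmography"},
    {"tags": {"scenario"}, "object": "episode synopsis"},
]

def match_associated_objects(frame_tags, min_results=2):
    # A tag is the keyword used to query the data set.
    matches = [entry["object"] for entry in ASSOCIATED_OBJECTS
               if entry["tags"] & set(frame_tags)]
    # The data acquisition result is generated only when at least two
    # associated objects are matched, as described above.
    return matches if len(matches) >= min_results else []

result = match_associated_objects(["star A"])
```

A real server would query its database (or an external service platform's interface) instead of the in-memory list; the matching contract is the same.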
For a video frame, the corresponding playing time point or characteristic value of the video frame may be obtained, and it is then determined, by the playing time point or characteristic value, whether the corresponding video frame already has identified characteristic information. When other users have already queried at that time point, or the time point has been analyzed in advance, the video data at that playing time point has characteristic information, and the characteristic information and its corresponding tag can be obtained directly from the database. If the video data at the playing time point has no characteristic information, image recognition can be performed on the video frame to obtain the characteristic information, where the characteristic information includes character characteristic information and content characteristic information. The character characteristic information includes various information about a video character, such as which star plays the character and who the character is in the drama; the content characteristic information includes information such as the scene corresponding to the plot and the scene location. Various image recognition methods can be adopted, for example face recognition to determine the character characteristic information, or preset key points to determine the content characteristic information. Characteristic information is thus determined according to the identified characters, plots and the like, and the database is then queried with the characteristic information as query terms to determine at least one matched associated object.
For the prompt parameter, the corresponding associated object may be determined according to the prompt parameter itself, or matched in combination with the video data, the playing time point and the like; for example, the application information corresponding to the prompt parameter is determined, or the operation information is matched in combination with data such as the video.
This embodiment may also determine user behavior information according to the browsing track collected under the user identifier within a preset time. For example, the user may register with the main server, so that user behavior information can be collected according to the user's account information; alternatively, user behavior information can be collected according to the user's IP address, terminal identifier and the like, and the tag is then matched according to the user behavior information.
In the embodiment of the application, the data set is used for determining the associated object, and the data set may be a data source such as a database, a data table, or the like, or an index information set of the data source, so that the data set may be queried through the tag and the feature information to determine the associated object. The data source may store content data or index data of various associated objects, and the data may be from a database of a platform where the main server is located, or from a network, so that the data may be obtained from other service platforms, and the data may be obtained through an interface provided by the service platform. The video data of the associated object, such as a video type, can be stored in a database of a platform corresponding to the main service, and can also be sourced from an external video website.
After the server returns the associated objects, the recommended contents of some associated objects are still provided by the main server, and the recommended contents of some associated objects may need other service servers. For the recommended content which needs to be provided by the main server, an acquisition request sent by the television terminal can be received, the recommended content is determined according to the acquisition request, response information is generated by adopting the recommended content, and then the response information is sent to the television terminal. For example, for operation information such as guessing, the corresponding interaction state and interaction result can be returned to the user of the television terminal as recommended content.
In summary, the embodiment of the application can separate scenes, stars, brands and commodities from images in real time based on a video analysis technology, so that more forms of interaction are provided, the images can be automatically identified, and the images can be determined without manual intervention of a user. The interaction has a strong association relationship with the current content of the video through a video analysis technology, for example, the interactive advertisement is a star introduction product advertisement in the video, a product advertisement appearing in the video, a product advertisement related to a scene in the video, and the like.
The user can switch between the full-screen playing mode (interface) and the non-full-screen playing mode (interface) through a designated key of the remote control device, and prompt information can be provided before switching. The user can choose whether to participate in the interaction, for example by entering a special playing state (the non-full-screen playing mode) with interactive advertisements. For the interactive advertisement form based on the smart television, the whole advertisement interaction process can be completed with the TV screen and the remote control device alone. Compared with the existing dual-screen interaction mode of a mobile phone plus a television, the interaction mode of the embodiment of the application depends only on the TV end, which can effectively reduce the user's operation cost without affecting the playing of the video or the user's viewing. That is, the interactive advertisement based on the video playing state does not interrupt the user's viewing process.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
On the basis of the foregoing embodiments, the present application further provides a play processing apparatus, which can be applied to terminal devices such as a television terminal.
Referring to fig. 8, a block diagram of a playing processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
the interactive prompt module 802 is configured to, during a process of playing video data, obtain prompt information according to a trigger point corresponding to the video data.
And the playing module 804 is configured to display the prompt message.
An operation module 806, configured to receive a response operation corresponding to the prompt information, and change the display of the setting content after the response operation meets a setting condition; the display of the setting content is changed after the number of times the response operation satisfies the setting condition exceeds a number threshold.
The setting content includes feature objects in the video data, such as advertisements.
Referring to fig. 9, a block diagram of an alternative embodiment of a playback processing apparatus according to the present application is shown, and specifically, the structure may include the following modules:
wherein, the interactive prompt module 802 includes: a trigger judgment submodule 8022, configured to judge whether a trigger point corresponding to the video data is reached according to the playing time; and after the trigger point corresponding to the video data is reached, sending a data request according to the trigger point. The obtaining submodule 8024 is configured to obtain the prompt information according to the data request.
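The trigger judgment of submodule 8022 can be sketched as follows: compare the current playing time against the preset trigger points and fire each one once. The trigger identifiers and times are illustrative assumptions.

```python
# Sketch of the trigger judgment submodule: judge from the playing time
# whether a trigger point corresponding to the video data is reached.

def check_trigger(playing_time, trigger_points, fired):
    """Return the trigger id whose data request should be sent, or None."""
    for trigger_id, t in trigger_points.items():
        if trigger_id not in fired and playing_time >= t:
            fired.add(trigger_id)   # fire each trigger point only once
            return trigger_id       # submodule 8022 then sends a data request
    return None

fired = set()
points = {"trig-001": 2.0, "trig-002": 4.8}
first = check_trigger(2.1, points, fired)    # trig-001 is reached
second = check_trigger(3.0, points, fired)   # nothing new reached yet
```

The obtaining submodule 8024 described above would then fetch the prompt information for the returned trigger id.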
The playing module 804 is configured to play the video data in full-screen mode in the target interface and display the interaction prompt information. The video data can be played in full-screen mode on the television terminal, i.e. displayed across the whole screen, and the prompt information can be displayed in the target interface via a floating window, a playing component or the like. The prompt information is displayed according to the specific interactive content, for example "Mark the picture with the XX logo 3 times using the M key to skip ads for a day", or "The XX picture has been marked 2 times; mark it 1 more time to skip ads for a day".
The playing module 804 is configured to play the video data in a first area of the target interface and display the prompt information in a second area. The video data can also be played in the television terminal in a non-full screen mode, one non-full screen mode can be provided with a first area and at least one second area in the target interface, the video data is played in the first area, and prompt information including the prompt information and other prompt information is displayed in the second area.
The operation module 806 is further configured to obtain corresponding operation information according to the response operation, and send the operation information. The response operation includes at least one of: a screen capture indicating operation, an acquisition indicating operation and a selection indicating operation. The operation module may determine the operation type corresponding to the response operation according to the prompt information; the operation type may be carried in the attributes of the prompt information, such as a mark type, an answer type or a selection type, so that different response operations can be executed and the corresponding operation information obtained according to the different operation types. The prompt information may also indicate different keys for the different operation types (or the same key for all), so that the operation type is determined and the response operation executed according to that indication.
The operation module 806 is configured to intercept image data displayed corresponding to the video data according to the response operation, and use the intercepted image data as operation information. That is, the operation module may intercept the image data currently being played in the video data according to the response operation: for example, time information may be determined from the response operation and the image data of the corresponding video frame acquired according to that time information, or the image currently on screen may be captured through a screen capture operation. The image data is then used as operation information, and the operation information may further include information such as an identifier to facilitate matching by the server.
The operation module 806 is configured to obtain the uploaded reply information according to the response operation, and use the reply information as operation information. Some prompt information takes the form of a question, e.g. "Who is the spokesperson of the XX brand?". For such prompt information, the user can submit an answer input through the remote control device; after receiving the response operation, the operation module acquires the uploaded reply information according to the response operation and uses it as operation information, which may also include information such as an interaction identifier to facilitate matching by the server.
The operation module 806 is configured to select reply information according to the response operation, and use the reply information as operation information. Some questions in the prompt information carry selectable answers, i.e. multiple-choice questions, which can be answered through the remote control device. The operation module determines the selected item as the reply information according to the response operation, and then uses the reply information as operation information, which may also include information such as an interaction identifier to facilitate matching by the server.
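The three ways the operation module produces operation information — screen capture (mark), uploaded reply (answer) and selected option (selection) — can be sketched as one dispatch on the operation type carried in the prompt's attributes. The type names and payload fields are illustrative assumptions.

```python
# Sketch of the operation dispatch: the operation type from the prompt
# information decides how the operation information is produced.

def build_operation_info(op_type, payload):
    if op_type == "mark":        # screen capture indicating operation
        return {"type": "mark", "image": payload["screenshot"]}
    if op_type == "answer":      # uploaded reply information
        return {"type": "answer", "reply": payload["text"]}
    if op_type == "selection":   # selected option as reply information
        return {"type": "selection", "reply": payload["choice"]}
    raise ValueError("unknown operation type: " + op_type)

info = build_operation_info("selection", {"choice": "B"})
```

In each case the resulting operation information would also carry the interaction identifier before being sent to the server, as the text notes.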
The playing module 804 is further configured to display the recommendation information of the associated object in another second area if the target interface has more than one second area, where the another second area is a second area where the prompt information is not displayed.
The prompt information comprises prompt information of an advertisement, and the video data of the target object includes video data in which the display of the corresponding advertisement has been changed.
The operation module 806 is further configured to receive corresponding result prompt information and display the result prompt information after the response operation meets a set condition.
The embodiment of the application also provides a playing processing device which can be applied to a server.
Referring to fig. 10, a block diagram of another embodiment of a playback processing apparatus according to the present application is shown, and specifically, the block diagram may include the following modules:
the setting module 1002 is configured to set a trigger point for video data in advance.
And a prompt processing module 1004, configured to send prompt information corresponding to the trigger point in the video data playing process.
A result processing module 1006, configured to determine whether a response operation corresponding to the prompt information meets a set condition; and changing the display of the setting content after the response operation satisfies the setting condition. The setting content includes a feature object in the video data.
Referring to fig. 11, a block diagram of a structure of another alternative embodiment of the playback processing apparatus of the present application is shown, which may specifically include the following modules:
wherein the setting module 1002 includes: the time point determining submodule 10022 is configured to determine a time point corresponding to a video frame where the feature object is located in the video data; the trigger point setting submodule 10024 is configured to set a corresponding trigger point at the time point.
The time point determining submodule 10022 is configured to identify image data corresponding to each video frame in the video data, and determine a feature object; and determining the image data where the characteristic object is located and the time point of the video frame corresponding to the image data.
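The two submodules above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: `detect_objects` is a hypothetical stand-in for whatever image-recognition step identifies feature objects in a frame, and frames are represented here simply as lists of object labels.

```python
# Hypothetical sketch of the time point determining submodule: scan each
# video frame, detect feature objects, and map the frame index to a
# playback time point at which a trigger point can then be set.

def find_trigger_points(frames, detect_objects, frame_rate=25.0):
    """Return [(time_point_seconds, feature_object), ...] for all detections."""
    trigger_points = []
    for index, frame in enumerate(frames):
        for feature_object in detect_objects(frame):
            time_point = index / frame_rate  # frame index -> seconds
            trigger_points.append((time_point, feature_object))
    return trigger_points

# Toy example: each "frame" is just a list of recognized object labels.
frames = [[], ["brand-XX drink"], [], ["star A"]]
points = find_trigger_points(frames, lambda frame: frame, frame_rate=1.0)
```

In a real pipeline the detector would be an image-recognition model and the frame rate would come from the video container metadata.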
The prompt processing module 1004 is configured to receive a data request during the video data playing process; determining a corresponding trigger point according to the data request, and determining prompt information of a corresponding characteristic object according to the trigger point; and sending the prompt message of the characteristic object.
The result processing module 1006 is configured to receive operation information of a response operation corresponding to the prompt information; determine a corresponding set condition according to the prompt information; and judge, using the operation information, whether the corresponding response operation meets the set condition. The response operation includes at least one of: a screen capture instruction operation, an acquisition instruction operation of input information, and a selection instruction operation.
The result processing module 1006 is configured to obtain the intercepted image data from the operation information, and identify whether a feature object exists in the intercepted image data. That is, the interaction judging module can obtain the intercepted image data from the operation information and then recognize it, determining whether the target feature object (the feature object corresponding to the trigger point) can be identified. If it can be identified, the set condition is confirmed to be met; otherwise, the set condition is not met. For example, if the interactive content is a beverage of brand XX, the screenshot can be analyzed to identify whether a beverage of brand XX exists in the image, so as to determine whether the operation succeeded.
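The screenshot check just described can be sketched like this. The sketch is hypothetical: `recognize` stands in for the image-recognition service that lists feature objects visible in a captured image, and the operation information is modeled as a plain dictionary.

```python
# Hypothetical sketch: the set condition is met only if the target feature
# object (the one bound to the trigger point) is recognized in the
# intercepted image data carried by the operation information.

def screenshot_meets_condition(operation_info, target_object, recognize):
    """True if the captured image contains the target feature object."""
    captured = operation_info.get("captured_image")
    if captured is None:
        return False  # no screenshot in the operation information
    return target_object in recognize(captured)

# Toy recognizer: a lookup table from image id to recognized objects.
op = {"captured_image": "frame-0042"}
detections = {"frame-0042": {"brand-XX drink", "star A"}}
ok = screenshot_meets_condition(op, "brand-XX drink",
                                lambda img: detections.get(img, set()))
```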
The result processing module 1006 is configured to obtain reply information from the operation information, and judge whether the reply information is correct. That is, the interaction judging module can obtain the reply information from the operation information and then judge whether it matches the correct answer; if so, the set condition is confirmed to be met, otherwise it is not. For example, it may check whether the reply information is the correct option B, or whether it matches the name of the spokesperson of brand XX.
Wherein the feature object comprises at least one of: scene objects, product objects, character objects, brand objects.
The result processing module 1006 is configured to determine the number of times that the response operation satisfies a set condition; and change the target object in the video data when the number of times reaches a times threshold.
The result processing module 1006 is configured to determine that the feature object corresponding to the trigger point in the video data is the set content, and change the feature object in the video data.
The result processing module 1006 is configured to count the number of times that the response operation meets a set condition; and change the set content in the video data when the number of times reaches a times threshold.
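The counting behavior above can be sketched as a small helper. This is a hypothetical illustration of the threshold logic only; the class name and method are invented for the sketch.

```python
# Hypothetical sketch: count successful response operations and signal when
# the times threshold is reached, at which point the set content in the
# video data would be changed (e.g. the advertisement removed or shortened).

class SuccessCounter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def record_success(self):
        """Record one successful response; return True once the threshold is reached."""
        self.count += 1
        return self.count >= self.threshold

counter = SuccessCounter(threshold=3)
results = [counter.record_success() for _ in range(3)]  # third call trips the threshold
```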
The result processing module 1006 is further configured to determine result prompt information of the feature object corresponding to the trigger point after the response operation meets a set condition, and send the result prompt information.
It can also be judged whether the result prompt information needs to carry other conditions; for example, the number of times and the time of successful response operations can be obtained, and the remaining conditions still to be met can be determined and added to the result prompt information to prompt the user. Of course, after all the conditions are met, corresponding reward information can be obtained and used as the result prompt information. After it is confirmed that the set condition is not met, result prompt information corresponding to the failure can be generated to prompt the user that the operation was unsuccessful.
Therefore, the server can identify resources such as scenes, commodities, stars and brands based on the video data, and thereby provide various interactions. It can also provide prompt information to the user once the video data plays to a trigger point, so that after a response operation is received, whether the response operation succeeded can be determined according to the operation information and the corresponding interaction result information returned, which is convenient for the user to perform various interactions.
In the embodiment of the application, the server may have a timeline task module. The timeline task module may generate a time-based resource library for each film, and the resource library may include resources such as related scenes, commodities, stars and brands. These resources can be identified and acquired after the video data is obtained, or identified in real time the first time the video data is played, with the resource information (such as names, identifiers and time points) cached so that real-time recognition is not required during subsequent playback. During video playback, the TV terminal can then execute playback interaction processing according to the trigger points set in the video data and display the results on the TV terminal. In this way, a corresponding timeline and resource library can be formed for each video, and the trigger points required for interaction can be set based on the timeline and the resource library, realizing interaction during video playback.
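A per-video timeline of this kind can be sketched as a sorted mapping from trigger time points to resources, looked up against the current playback time. This is an assumed minimal data structure, not the patented module; the tolerance window is an invented parameter for the sketch.

```python
import bisect

# Hypothetical sketch of a timeline resource library: trigger time points
# kept sorted, each mapped to a resource (scene, commodity, star, brand).

class VideoTimeline:
    def __init__(self):
        self._times = []      # sorted trigger time points (seconds)
        self._resources = []  # resource aligned with each time point

    def add_trigger(self, time_point, resource):
        i = bisect.bisect(self._times, time_point)
        self._times.insert(i, time_point)
        self._resources.insert(i, resource)

    def resource_at(self, play_time, tolerance=0.5):
        """Return the resource whose trigger point is within tolerance of play_time."""
        i = bisect.bisect_left(self._times, play_time - tolerance)
        if i < len(self._times) and abs(self._times[i] - play_time) <= tolerance:
            return self._resources[i]
        return None  # no trigger point near this playback position

tl = VideoTimeline()
tl.add_trigger(12.0, "brand-XX drink advert")
tl.add_trigger(45.5, "star A introduction")
hit = tl.resource_at(12.3)   # within the tolerance window of the 12.0s trigger
miss = tl.resource_at(30.0)  # no trigger nearby
```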
In the embodiment of the application, the server can also be provided with an account module, and the account module can record interaction completion conditions, such as advertisement task completion conditions and user advertisement preferences, in the interaction execution process, so that the capability of intelligently recommending advertisements is provided.
In summary, based on a video analysis technology, the embodiment of the application can identify scenes, stars, brands and commodities in the images in real time, thereby providing more forms of interaction; the images are identified automatically, without manual intervention by the user. Through the video analysis technology, the interaction has a strong association with the current content of the video; for example, the interactive advertisement may be an advertisement in which a star in the video introduces a product, an advertisement for a product appearing in the video, an advertisement for a product related to a scene in the video, and the like.
The user can switch between the full-screen playing mode (interface) and the non-full-screen playing mode (interface) through a designated key of the remote control device, and prompt information can be provided before switching, so the user can choose whether to interact, for example by entering a special playing state (the non-full-screen playing mode) with interactive advertisements. For this interactive advertisement form based on the smart television, the whole advertisement interaction process can be completed using only the TV screen and the remote control device. Compared with the existing dual-screen interaction mode of a mobile phone and a television, the embodiment of the application completes the interaction relying on the TV side alone, which can effectively reduce the user's operation cost without affecting the playback of the video or the user's viewing. That is, the interactive advertisement based on the video playing state does not interrupt the user's viewing of the film.
The present application further provides a non-volatile readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to a terminal device, the one or more modules may cause the terminal device to execute instructions (instructions) of method steps in the present application.
Fig. 12 is a schematic hardware structure diagram of an apparatus according to an embodiment of the present application. As shown in fig. 12, the device may include a terminal device such as a television terminal, a mobile terminal, etc., and may also include a server device such as a server, etc., which includes an input device 120, a processor 121, an output device 122, a memory 123, and at least one communication bus 124. The communication bus 124 is used to implement communication connections between the elements. The memory 123 may include a high-speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, where the memory 123 may store various programs for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 121 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 121 is coupled to the input device 120 and the output device 122 through a wired or wireless connection.
Optionally, the input device 120 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 122 may include a display, a sound, or other output device.
In this embodiment, the processor of the device includes a module for executing the functions of the modules of the data processing apparatus in each device, and specific functions and technical effects are as described in the above embodiments, and are not described herein again.
Fig. 13 is a schematic hardware structure diagram of an apparatus according to another embodiment of the present application. FIG. 13 is a specific embodiment of the implementation of FIG. 12. As shown in fig. 13, the apparatus of the present embodiment includes a processor 131 and a memory 132.
The processor 131 executes the computer program codes stored in the memory 132 to implement the playback processing methods of fig. 1 to 8 in the above embodiments.
The memory 132 is configured to store various types of data to support operation at the device. Examples of such data include instructions for any application or method operating on the device, such as messages, pictures, videos, and so forth. The memory 132 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 131 is provided in the processing component 130. The apparatus may further include: a communication component 133, a power component 134, a multimedia component 135, an audio component 136, an input/output interface 137 and/or a sensor component 138. The specific components included in the device are set according to actual requirements, which is not limited in this embodiment.
The processing component 130 generally controls the overall operation of the device. The processing component 130 may include one or more processors 131 to execute instructions to perform all or a portion of the steps of the methods of fig. 1-8 described above. Further, the processing component 130 may include one or more modules that facilitate interaction between the processing component 130 and other components. For example, the processing component 130 may include a multimedia module to facilitate interaction between the multimedia component 135 and the processing component 130.
The power supply component 134 provides power to the various components of the device. The power components 134 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for a device.
The multimedia component 135 includes a display screen that provides an output interface between the device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 136 is configured to output and/or input audio signals. For example, the audio component 136 includes a Microphone (MIC) configured to receive an external audio signal when the device is in an operational mode, such as a speech recognition mode. The received audio signals may further be stored in the memory 132 or transmitted via the communication component 133. In some embodiments, audio assembly 136 also includes a speaker for outputting audio signals.
The input/output interface 137 provides an interface between the processing component 130 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 138 includes one or more sensors for providing various aspects of status assessment for the device. For example, the sensor assembly 138 may detect the open/closed status of the device, the relative positioning of the assemblies, the presence or absence of user contact with the device. The sensor assembly 138 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the device. In some embodiments, the sensor assembly 138 may also include a camera or the like.
The communication component 133 is configured to facilitate wired or wireless communication between the device and other devices. The device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the device may include a SIM card slot therein for inserting a SIM card so that the device can log onto a GPRS network to establish communication with the server via the internet.
From the above, the communication component 133, the audio component 136, the input/output interface 137, and the sensor component 138 referred to in the embodiment of fig. 13 can be implemented as the input device in the embodiment of fig. 12.
A television terminal in this embodiment includes: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the terminal device to perform one or more play processing methods at the terminal side as in embodiments of the present invention.
In one embodiment, a server includes one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the server to perform one or more play processing methods on the server side as described in embodiments of the present invention.
An embodiment of the present application further provides an operating system for a television terminal, and as shown in fig. 14, the operating system of the terminal device includes: a display unit 1402 and a communication unit 1404.
A display unit 1402 that plays video data; displaying the video data and the prompt information in a target interface; and changing the display of the setting contents after the response operation satisfies the setting condition.
A communication unit 1404, configured to obtain, during a process of playing video data, prompt information according to a trigger point corresponding to the video data; and receiving a response operation corresponding to the prompt message.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The foregoing describes in detail a playing processing method and apparatus, a terminal device, a server, a storage medium, and an operating system provided by the present application, and specific examples are applied in the present application to explain the principles and embodiments of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (34)

1. A playback processing method, comprising:
in the process of playing video data, acquiring prompt information according to a trigger point corresponding to the video data, wherein the trigger point is set according to a time point corresponding to image data where a characteristic object is located;
displaying the prompt information while maintaining the video data playing, and receiving a response operation corresponding to the prompt information; and
after the response operation meets a set condition, changing the display of set content while keeping the video data playing, wherein the set content comprises a characteristic object in the video data;
the display of the change setting content includes:
and determining the characteristic object corresponding to the trigger point in the video data as set content, and changing the characteristic object in the video data.
2. The method according to claim 1, wherein the display of the set content is changed after the number of times the response operation satisfies the set condition exceeds a times threshold.
3. The method of claim 1, wherein obtaining the prompt information according to the trigger point corresponding to the video data comprises:
judging whether a trigger point corresponding to the video data is reached according to the playing time;
after the trigger point corresponding to the video data is reached, sending a data request according to the trigger point;
and acquiring prompt information according to the data request.
4. The method of claim 1, wherein displaying the video data and interactive prompt information in a target interface comprises:
and displaying the video data in the target interface in a full screen mode, and displaying the interaction prompt information.
5. The method of claim 1, wherein displaying the video data and the prompt information in a target interface comprises:
and displaying the video data in a first area in the target interface, and displaying the prompt message in a second area.
6. The method of claim 1, wherein after receiving the response operation corresponding to the prompt message, further comprising:
and acquiring corresponding operation information according to the response operation, and sending the operation information.
7. The method of any of claims 1-6, wherein the response operation comprises at least one of: screen capture instruction operation, acquisition instruction operation of input information, and selection instruction operation.
8. The method according to claim 6, wherein the obtaining of the corresponding operation information according to the response operation includes at least one of:
intercepting image data correspondingly displayed by the video data according to the response operation, and taking the intercepted image data as operation information;
acquiring input reply information according to the response operation, and taking the reply information as operation information;
and selecting reply information according to the response operation, and taking the reply information as operation information.
9. The method of claim 5, further comprising:
if the target interface has more than one second area, displaying the recommendation information of the associated object in other second areas, wherein the other second areas are the second areas which do not display the prompt information.
10. The method according to any one of claims 1 to 6, wherein the prompt information includes prompt information of an advertisement, and the set content comprises: advertisement data displayed in the video.
11. The method of claim 1, further comprising:
and receiving corresponding result prompt information after the response operation meets the set condition, and displaying the result prompt information.
12. The method of claim 1, wherein the changing the display of the setting content further comprises at least one of:
canceling the display of the set content;
shortening the display time of the set content;
reducing the size of the display window corresponding to the set content;
reducing the playing volume corresponding to the set content;
reducing the resolution of the set content;
adding a mask to the set content.
13. The method according to claim 1, wherein the response operation is triggered by a designated key provided on a remote control device including at least a remote controller.
14. The method according to claim 1, applied in a television terminal device.
15. A playback processing method, comprising:
determining a time point corresponding to a video frame where the characteristic object is located in the video data;
setting a corresponding trigger point at the time point;
sending prompt information corresponding to the trigger point in the video data playing process;
judging whether the response operation corresponding to the prompt message meets the set condition while keeping the video data playing; and
after the response operation meets a set condition, changing the display of set content while keeping the video data playing, wherein the set content comprises a characteristic object in the video data;
the display of the change setting content includes:
and determining the characteristic object corresponding to the trigger point in the video data as set content, and changing the characteristic object in the video data.
16. The method according to claim 15, wherein the determining, in the video data, a time point corresponding to a video frame where a feature object is located comprises:
identifying image data corresponding to each video frame in the video data, and determining a characteristic object;
and determining the image data where the characteristic object is located and the time point of the video frame corresponding to the image data.
17. The method according to claim 15, wherein sending a prompt message corresponding to the trigger point during the playing of the video data comprises:
receiving a data request in the video data playing process;
determining a corresponding trigger point according to the data request, and determining prompt information of a corresponding characteristic object according to the trigger point;
and sending the prompt message of the characteristic object.
18. The method of claim 15, wherein determining whether the response operation corresponding to the prompt message satisfies a set condition comprises:
receiving operation information of response operation corresponding to the prompt information;
and determining corresponding set conditions according to the prompt information, and judging whether corresponding response operation meets the set conditions or not by adopting the operation information.
19. The method of claim 18, wherein the response operation comprises at least one of: screen capture instruction operation, acquisition instruction operation of input information, and selection instruction operation.
20. The method of claim 19, wherein determining whether the corresponding response operation satisfies the set condition using the operation information comprises at least one of:
acquiring intercepted image data from the operation information, and identifying whether a characteristic object exists in the intercepted image data;
and acquiring reply information from the operation information, and judging whether the reply information is correct or not.
21. The method of claim 16 or 20, wherein the feature objects comprise at least one of: scene objects, product objects, character objects, brand objects.
22. The method of claim 15, wherein the changing the display of the setting content further comprises:
counting the times that the response operation meets a set condition;
and changing the set content in the video data when the times reach a time threshold value.
23. The method of claim 16, wherein after the response operation satisfies a set condition, the method further comprises:
and determining result prompt information of the characteristic object corresponding to the trigger point, and sending the result prompt information.
24. The method of claim 15, wherein the prompt information comprises prompt information of an advertisement, and wherein the set content comprises: advertisement data displayed in the video.
25. The method of claim 15, wherein the changing the display of the setting content further comprises at least one of:
canceling the display of the set content;
shortening the display time of the set content;
reducing the size of the display window corresponding to the set content;
reducing the playing volume corresponding to the set content;
reducing the resolution of the set content;
adding a mask to the set content.
26. A playback processing apparatus, comprising:
the interactive prompting module is used for acquiring prompting information according to a trigger point corresponding to the video data in the process of playing the video data, wherein the trigger point is set according to a time point corresponding to the image data of the characteristic object;
the playing module is used for displaying the prompt information while keeping the video data playing;
the operation module is used for receiving response operation corresponding to the prompt message while keeping video data playing, and changing the display of set content while keeping video data playing after the response operation meets set conditions, wherein the set content comprises a characteristic object in the video data;
the operation module is configured to determine that a feature object corresponding to a trigger point in the video data is a set content, and change the feature object in the video data.
27. The apparatus according to claim 26, wherein the display of the set content is changed after the number of times the response operation satisfies the set condition exceeds a times threshold.
29. A playback processing apparatus, comprising:
a setting module, configured to determine a time point corresponding to a video frame in which a feature object is located in video data, and to set a corresponding trigger point at the time point;
a prompt processing module, configured to send prompt information corresponding to the trigger point during playback of the video data;
a result processing module, configured to judge, while the video data keeps playing, whether a response operation corresponding to the prompt information satisfies a set condition, and, after the response operation satisfies the set condition, to change the display of set content while keeping the video data playing, the set content comprising a feature object in the video data;
wherein the result processing module is further configured to determine the feature object corresponding to the trigger point in the video data as the set content, and to change the feature object in the video data.
30. The apparatus of claim 28,
wherein the result processing module is configured to count the number of times that the response operation satisfies the set condition, and to change the set content in the video data when the count reaches a times threshold.
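The counting behaviour described in this claim can be sketched as a small state machine. This is an illustrative reading only; the class name, method names, and threshold are assumptions not found in the patent.

```python
class ResultProcessor:
    """Hypothetical result-processing module: counts response operations that
    satisfy the set condition and changes the set content once the count
    reaches a times threshold, all while playback continues."""

    def __init__(self, times_threshold):
        self.times_threshold = times_threshold
        self.satisfied_count = 0
        self.content_changed = False

    def on_response(self, satisfies_condition):
        # Called for each response operation; playback is never paused.
        if satisfies_condition:
            self.satisfied_count += 1
        if self.satisfied_count >= self.times_threshold:
            self.content_changed = True   # e.g. replace or remove the feature object
        return self.content_changed

p = ResultProcessor(times_threshold=3)
results = [p.on_response(ok) for ok in (True, False, True, True)]
```

Only the third satisfying response flips `content_changed`, matching the "count reaches a threshold" condition rather than a per-response change.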
30. A terminal device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the method of any of claims 1-14.
31. A computer-readable storage medium having stored thereon instructions, which, when executed by one or more processors, cause a terminal device to perform the method of any one of claims 1-14.
32. A server, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the server to perform the method of any of claims 15-25.
33. A computer-readable storage medium having stored thereon instructions, which when executed by one or more processors, cause a server to perform the method of any one of claims 15-25.
34. An operating system for a television terminal, comprising:
a display unit, configured to play video data; to display prompt information while keeping the video data playing; and, after a response operation satisfies a set condition, to determine the feature object corresponding to the trigger point in the video data as set content while keeping the video data playing, and to change the feature object in the video data;
a communication unit, configured to acquire the prompt information according to a trigger point corresponding to the video data during playback of the video data, and to receive the response operation corresponding to the prompt information while keeping the video data playing, wherein the trigger point is set according to a time point corresponding to image data of the feature object.
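Taken together, the display unit and communication unit of claim 34 cooperate roughly as in the following schematic playback loop. The claim defines no concrete API, so the function and callback names here are purely illustrative assumptions.

```python
def play_with_trigger(frames, trigger_points, fetch_prompt, get_response):
    """Schematic loop: at each trigger point the communication unit fetches
    the prompt, the display unit shows it alongside the video, and a
    satisfying response changes the feature object, without pausing playback."""
    events = []
    for t in frames:
        events.append(("play", t))                 # display unit keeps playing
        if t in trigger_points:
            prompt = fetch_prompt(t)               # communication unit: fetch prompt
            events.append(("prompt", prompt))      # shown while video continues
            if get_response(prompt):               # response satisfies the condition
                events.append(("change", t))       # change the feature object
    return events

ev = play_with_trigger(
    frames=range(5),
    trigger_points={2},
    fetch_prompt=lambda t: f"prompt@{t}",
    get_response=lambda p: True,
)
```

The key property is that every frame still produces a `play` event: prompting and changing the set content are interleaved with playback, never a substitute for it.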
CN201710656522.0A 2017-08-03 2017-08-03 Playing processing method, device, equipment and storage medium Active CN109391834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710656522.0A CN109391834B (en) 2017-08-03 2017-08-03 Playing processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710656522.0A CN109391834B (en) 2017-08-03 2017-08-03 Playing processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109391834A CN109391834A (en) 2019-02-26
CN109391834B CN109391834B (en) 2021-08-31

Family

ID=65412288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710656522.0A Active CN109391834B (en) 2017-08-03 2017-08-03 Playing processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109391834B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035324A (en) * 2019-04-16 2019-07-19 北京达佳互联信息技术有限公司 Information interacting method, device, terminal and storage medium
CN110225367A (en) * 2019-06-27 2019-09-10 北京奇艺世纪科技有限公司 It has been shown that, recognition methods and the device of object information in a kind of video
CN112714349B (en) * 2019-10-24 2023-06-27 阿里巴巴集团控股有限公司 Data processing method, commodity display method and video playing method
CN110830813B (en) * 2019-10-31 2020-11-06 北京达佳互联信息技术有限公司 Video switching method and device, electronic equipment and storage medium
WO2021102606A1 (en) * 2019-11-25 2021-06-03 吉安市井冈山开发区金庐陵经济发展有限公司 Apparatus for processing selection information
CN112887777B (en) * 2019-11-29 2022-12-23 阿里巴巴集团控股有限公司 Interactive prompting method and device for interactive video, electronic equipment and storage medium
CN110889076B (en) * 2019-11-29 2021-04-13 北京达佳互联信息技术有限公司 Comment information publishing method, device, client, server, system and medium
CN113286181A (en) * 2020-02-20 2021-08-20 阿里巴巴集团控股有限公司 Data display method and device
CN112533048B (en) * 2020-11-05 2022-11-11 北京达佳互联信息技术有限公司 Video playing method, device and equipment
CN115086734A (en) * 2021-03-12 2022-09-20 北京字节跳动网络技术有限公司 Information display method, device, equipment and medium based on video
CN113392625B (en) * 2021-06-25 2023-08-11 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for determining annotation information
CN113299135A (en) * 2021-07-26 2021-08-24 北京易真学思教育科技有限公司 Question interaction method and device, electronic equipment and storage medium
CN114189542A (en) * 2021-11-23 2022-03-15 阿里巴巴(中国)有限公司 Interaction control method and device
CN115271891B (en) * 2022-09-29 2022-12-30 深圳市人马互动科技有限公司 Product recommendation method based on interactive novel and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103414940A (en) * 2013-08-02 2013-11-27 南京邮电大学 System and method for playing interactive internet video advertisements
CN104168491A (en) * 2013-05-17 2014-11-26 腾讯科技(北京)有限公司 Information processing method and device in video playing processes
CN104216990A (en) * 2014-09-09 2014-12-17 科大讯飞股份有限公司 Method and system for playing video advertisement
CN104754419A (en) * 2015-03-13 2015-07-01 腾讯科技(北京)有限公司 Video-based interaction method and device
CN105847998A (en) * 2016-03-28 2016-08-10 乐视控股(北京)有限公司 Video playing method, playing terminal, and media server
CN106534941A (en) * 2016-10-31 2017-03-22 腾讯科技(深圳)有限公司 Method and device for realizing video interaction

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080152300A1 (en) * 2006-12-22 2008-06-26 Guideworks, Llc Systems and methods for inserting advertisements during commercial skip
CN103997691B (en) * 2014-06-02 2016-01-13 合一网络技术(北京)有限公司 The method and system of video interactive
CN106101846B (en) * 2016-08-15 2020-02-04 腾讯科技(深圳)有限公司 Information processing method and device, and terminal


Also Published As

Publication number Publication date
CN109391834A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN109391834B (en) Playing processing method, device, equipment and storage medium
TWI744368B (en) Play processing method, device and equipment
US11741110B2 (en) Aiding discovery of program content by providing deeplinks into most interesting moments via social media
US10235025B2 (en) Various systems and methods for expressing an opinion
US10595071B2 (en) Media information delivery method and system, terminal, server, and storage medium
CN105701217B (en) Information processing method and server
CN102722517B (en) Enhanced information for viewer-selected video object
CN109118290B (en) Method, system, and computer-readable non-transitory storage medium
US9134875B2 (en) Enhancing public opinion gathering and dissemination
US9583148B2 (en) Systems and methods for providing electronic cues for time-based media
US20100086283A1 (en) Systems and methods for updating video content with linked tagging information
US20120260158A1 (en) Enhanced World Wide Web-Based Communications
CN103997691A (en) Method and system for video interaction
CN104205854A (en) Method and system for providing a display of social messages on a second screen which is synched to content on a first screen
CN108600818B (en) Method and device for displaying multimedia resources
US20130312049A1 (en) Authoring, archiving, and delivering time-based interactive tv content
US10440435B1 (en) Performing searches while viewing video content
US20150319509A1 (en) Modified search and advertisements for second screen devices
CN108401173B (en) Mobile live broadcast interactive terminal, method and computer readable storage medium
CN112073738B (en) Information processing method and device
CA3181874A1 (en) Aggregating media content using a server-based system
US8520018B1 (en) Media distribution system
CN111970563B (en) Video processing method and device and electronic equipment
KR101519035B1 (en) Smart display having icon loading part
CN114663115A (en) Order processing method and device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant