CN111107437A - Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium - Google Patents


Info

Publication number
CN111107437A
Authority
CN
China
Prior art keywords
movie
information
television
text content
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911400420.8A
Other languages
Chinese (zh)
Inventor
徐永泽
赖长明
薛凯文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN201911400420.8A priority Critical patent/CN111107437A/en
Publication of CN111107437A publication Critical patent/CN111107437A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an interaction method, a system, a display terminal and a readable storage medium for movie and television after-viewing feelings, wherein the method comprises the following steps: if voice interaction information of a target user about the movie or television is detected, converting the voice interaction information into text content; inputting the text content into a preset interaction model; and acquiring data information associated with the movie or television based on the preset interaction model, and responding to the text content according to the data information. Because the target user's voice interaction information about the movie is converted into text content, and the data information associated with the movie is acquired by combining the text content with the preset interaction model to respond to the text content, the user can interact on the basis of the movie-associated data information while watching, achieving the technical effect of sharing after-viewing feelings in a timely manner. The interaction requirements of the user can thus be satisfied, and the user experience is improved.

Description

Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium
Technical Field
The invention relates to the technical field of movie and television playing, in particular to an interaction method and system for movie and television after-view feeling, a display terminal and a readable storage medium.
Background
With the rapid development of science and technology, a television is no longer a simple display terminal: with the introduction of network technology it has become a network television, i.e. an Internet television, providing users with a wide variety of entertainment information.
At present, a network television can output corresponding prompt information according to a user's voice control instruction. However, people often watch movies, dramas and other programs on a large screen alone. Many users therefore have no opportunity to share their after-viewing feelings with others or to discuss the video content while watching, and so often miss part of the satisfaction of viewing. As a result, it is difficult to satisfy the interaction requirements of the user when watching movies and television, which in turn affects the user experience.
Disclosure of Invention
The invention mainly aims to provide an interaction method, an interaction system, a display terminal and a readable storage medium for movie and television after-viewing feelings, so as to solve the technical problem in the prior art that the interaction requirements of a user watching movies and television are difficult to satisfy, which affects the user experience.
In order to achieve the above object, the present invention provides an interaction method for the after-view feeling of a movie, comprising:
if the voice interaction information of the target user to the movie is detected, converting the voice interaction information into text content;
inputting the text content into a preset interaction model;
and acquiring data information associated with the film and the television based on the preset interaction model, and responding the text content according to the data information.
Further, before the step of converting the voice interaction information into text content when the voice interaction information of the target user to the movie is detected, the method includes:
and after the movie and television playing is finished or paused, if an operation instruction for triggering interaction is detected, starting to detect whether the voice interaction information of the target user to the movie and television exists.
Further, the step of starting to detect whether the voice interaction information of the target user to the movie exists includes:
starting to collect environmental voice data;
judging whether the environment voice data is matched with preset voiceprint information or not;
if the environment voice data is matched with the preset voiceprint information, the voice interaction information of the target user to the film and the television exists;
and if the environment voice data is not matched with the preset voiceprint information, the voice interaction information of the target user to the film and television does not exist.
Further, the step of feeding back the response information to the target user includes:
and feeding back the response information to the target user through text display and/or voice broadcast.
Further, the data information associated with the movies comprises movie label information, movie content information and movie public opinion information;
prestoring the movie label information, the movie content information and the movie public opinion information which are associated with the movie;
extracting keywords of the text content, and determining the intention of the target user according to the keywords;
and acquiring response information matched with the intention of the target user from the movie label information, the movie content information and the movie public opinion information based on the preset interaction model, and feeding back the response information to the target user.
Further, according to the currently played movie, recording the name, duration and type of the current movie to serve as the movie label information;
according to the currently played movie, recording the movie content information of the current movie in a plurality of record entries respectively, wherein each record entry comprises a character, a time period, a scene and an event;
according to the currently played movie, recording historical scoring and evaluation contents of other users on the current movie to serve as the movie public opinion information.
Further, the step of inputting the text content into a preset interaction model includes:
acquiring sample text content and data information which is used for responding to the sample text content and is associated with the film and television as a training set;
inputting the training set into a deep neural network for training so as to construct the preset interaction model;
and if the text content is detected, inputting the text content into the preset interaction model.
The invention also provides an interactive system for the after-view feeling of the film and television, which comprises:
the detection module is used for converting the voice interaction information into text content if the voice interaction information of the target user to the movie is detected;
the input module is used for inputting the text content to a preset interaction model;
and the acquisition module is used for acquiring data information associated with the film and the television based on the preset interaction model and responding the text content according to the data information.
The present invention also provides a display terminal, comprising: a memory, a processor, and an interaction program for movie and television after-viewing feelings stored on the memory and executable on the processor, wherein the interaction program, when executed by the processor, implements the steps of the above interaction method for movie and television after-viewing feelings.
The invention also provides a readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the steps of the interaction method for the after-view feeling of the film and the television.
According to the interaction method for movie and television after-viewing feelings provided by the embodiment of the invention, if voice interaction information of a target user about the movie or television is detected, the voice interaction information is converted into text content; the text content is input into a preset interaction model; and data information associated with the movie or television is acquired based on the preset interaction model, and the text content is responded to according to the data information. Because the target user's voice interaction information about the movie is converted into text content, and the movie-associated data information is acquired by combining the text content with the preset interaction model to respond to the text content, the user can interact on the basis of the movie-associated data information while watching, achieving the technical effect of sharing after-viewing feelings in a timely manner. The interaction requirements of the user can thus be satisfied, and the user experience is improved.
Drawings
Fig. 1 is a schematic structural diagram of the display terminal of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for interaction of a movie and television after viewing;
FIG. 3 is a schematic diagram illustrating the interaction of the interaction method for post-viewing feeling of film and television according to the present invention;
FIG. 4 is a schematic diagram of a frame structure of an embodiment of the interactive system for post-viewing of movies and videos.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a display terminal of a hardware operating environment according to an embodiment of the present invention.
The display terminal of the embodiment of the invention can be a PC, and can also be a display terminal device with a display function, such as a smart phone, a tablet computer, a smart television, a portable computer and the like.
As shown in fig. 1, the display terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the display terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors may include, for example, light sensors, motion sensors and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or backlight when the display terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when the device is stationary; it can be used for applications that recognize the attitude of the display terminal (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping). Of course, the display terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the display terminal configuration shown in fig. 1 is not intended to be limiting of display terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and an interactive program for a movie viewing afterview.
In the display terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; the processor 1001 may be configured to call the interactive program for the movie viewing experience stored in the memory 1005, and perform the following operations:
if the voice interaction information of the target user to the movie is detected, converting the voice interaction information into text content;
inputting the text content into a preset interaction model;
and acquiring data information associated with the film and the television based on the preset interaction model, and responding the text content according to the data information.
Further, the processor 1001 may call the interactive program for the after-view feeling of the movie stored in the memory 1005, and further perform the following operations: and after the movie and television playing is finished or paused, if an operation instruction for triggering interaction is detected, starting to detect whether the voice interaction information of the target user to the movie and television exists.
Further, starting to collect the environmental voice data;
judging whether the environment voice data is matched with preset voiceprint information or not;
if the environment voice data is matched with the preset voiceprint information, the voice interaction information of the target user to the film and the television exists;
and if the environment voice data is not matched with the preset voiceprint information, the voice interaction information of the target user to the film and television does not exist.
Further, the movie label information, the movie content information and the movie public opinion information which are associated with the movie are prestored;
extracting keywords of the text content, and determining the intention of the target user according to the keywords;
and acquiring response information matched with the intention of the target user from the movie label information, the movie content information and the movie public opinion information based on the preset interaction model, and feeding back the response information to the target user.
Further, the response information is fed back to the target user through text display and/or voice broadcast.
Further, according to the currently played movie, recording the name, duration and type of the current movie to serve as the movie label information;
according to the currently played movie, recording the movie content information of the current movie in a plurality of record entries respectively, wherein each record entry comprises a character, a time period, a scene and an event;
according to the currently played movie, recording historical scoring and evaluation contents of other users on the current movie to serve as the movie public opinion information.
Further, sample text content and data information which is used for responding to the sample text content and is associated with the film and television are obtained to serve as a training set;
inputting the training set into a deep neural network for training so as to construct the preset interaction model;
and if the text content is detected, inputting the text content into the preset interaction model.
Referring to fig. 2, based on the hardware structure of the above display terminal, various embodiments of the method of the present invention are provided.
The invention provides an interaction method for the after-view feeling of a movie, which is applied to a display terminal, and in a first embodiment of the interaction method, referring to fig. 2, the method comprises the following steps:
step S10, if the voice interaction information of the target user to the film and television is detected, the voice interaction information is converted into text content;
When the display terminal determines that voice interaction information of the target user about the movie is detected, it converts the voice interaction information into text content. The display terminal can be a smart television, or a display terminal device with a display function such as a smart phone, a tablet computer or a portable computer. In this embodiment, the voice interaction information may be query information related to the movie or television. For example, the voice of the target user obtained by the display terminal is "I want to know who the main actor of the TV series 'Returning Bagger' is"; that is, this query is the voice interaction information. The voice interaction information may be converted into text content through speech recognition technology; it should be noted that speech recognition is prior art for those skilled in the art and is not described here in detail.
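The conversion step above can be sketched as a thin interface over a speech recognition backend. Since the patent leaves the recognition engine unspecified, the sketch below substitutes an illustrative `FakeRecognizer` placeholder; all names here are assumptions, not the patent's implementation.

```python
class FakeRecognizer:
    """Placeholder ASR engine: maps known audio clip ids to transcripts.

    A real implementation would wrap an on-device or cloud speech
    recognition engine; this stub only shows the interface shape.
    """
    def __init__(self, transcripts):
        self._transcripts = transcripts  # clip id -> transcript text

    def transcribe(self, audio_clip_id):
        return self._transcripts.get(audio_clip_id)


def voice_to_text(recognizer, audio_clip_id):
    """Convert detected voice interaction information into text content.

    Returns None when the clip cannot be recognized, so the caller can
    skip querying the preset interaction model.
    """
    text = recognizer.transcribe(audio_clip_id)
    return text.strip() if text else None


recognizer = FakeRecognizer({"clip-1": " who is the main actor of this TV series? "})
print(voice_to_text(recognizer, "clip-1"))  # recognized query text
print(voice_to_text(recognizer, "clip-2"))  # unrecognized -> None
```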
Step S20, inputting the text content into a preset interaction model;
The display terminal inputs the converted text content into a preset interaction model. The preset interaction model is constructed in advance by collecting a large amount of sample text content, together with the movie-associated data information used to respond to that content, and training on these pairs. In this embodiment, upon determining the target user's voice interaction information about the movie, the display terminal converts the voice interaction information into text content, inputs the text content into the preset interaction model, and, based on the preset interaction model, outputs the movie-associated data information used to respond to the text content.
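The patent trains the preset interaction model as a deep neural network on (sample text, associated movie data) pairs. As a stand-in for that network, the sketch below uses a trivial token-overlap matcher purely so the train/respond interface can be shown end to end; the matcher and all names are illustrative, not the patent's method.

```python
def tokenize(text):
    """Lowercase whitespace tokenization; a real system would do better."""
    return set(text.lower().split())


class InteractionModel:
    """Toy stand-in for the trained deep-neural-network interaction model."""
    def __init__(self):
        self._samples = []  # list of (token set, response data)

    def train(self, training_set):
        """training_set: iterable of (sample_text, response_data) pairs."""
        for sample_text, response in training_set:
            self._samples.append((tokenize(sample_text), response))

    def respond(self, text_content):
        """Return the response whose sample text best overlaps the query."""
        query = tokenize(text_content)
        best = max(self._samples, key=lambda s: len(s[0] & query), default=None)
        return best[1] if best else None


model = InteractionModel()
model.train([
    ("who is the main actor", "label info: main actor lookup"),
    ("what happened in this scene", "content info: scene lookup"),
])
print(model.respond("tell me who the main actor is"))  # -> "label info: main actor lookup"
```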
And step S30, acquiring data information associated with the film and television based on the preset interaction model, and responding the text content according to the data information.
The display terminal acquires the voice interaction information of the target user about the movie, inputs the text content into the preset interaction model, acquires the movie-associated data information according to the preset interaction model, and responds to the text content according to that data information. In this embodiment, the data information associated with the movie includes, but is not limited to, movie label information, movie content information and movie public opinion information. Further, the display terminal can extract the movie label information from the basic information of the movie. The movie content information can be obtained from a structured video analyzer or from manual labeling. The movie public opinion information can be obtained from public opinion data published on network movie platforms, or gathered from mainstream news media, posts, forums, apps, official accounts and the like based on data crawler technology. It should be noted that the display terminal may also obtain the movie-associated data information through other channels, which are not specifically limited here. The movie label information includes, but is not limited to, basic information such as the name, duration, genre, main actors and director of the movie. The movie content information includes, but is not limited to, characters, time periods, scenes, events and the like. The movie public opinion information includes, but is not limited to, the historical scores and evaluation content of other users for the current movie. When the display terminal acquires the voice interaction information of the target user about the movie, it converts the voice interaction information into text content, inputs the text content into the preset interaction model, acquires the movie content information of the movie based on the preset interaction model, and responds to the target user according to that movie content information.
For example, when the voice interaction information of the target user about the movie is "I want to know what happens in this movie between the 30th and 35th minutes", the display terminal responds to the target user with the movie content information of the current movie, i.e. "30 to 35 minutes: the protagonist (Liu Dehua) runs through the airport lounge".
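A time-range query like the one above can be answered by scanning the movie content record entries (character, time period, scene, event). The record layout below is an illustrative guess at such entries, not a schema prescribed by the patent.

```python
from dataclasses import dataclass


@dataclass
class ContentEntry:
    """One movie content record entry: character, time period, scene, event."""
    characters: str
    start_min: int
    end_min: int
    scene: str
    event: str


entries = [
    ContentEntry("protagonist", 30, 35, "airport lounge", "runs through the lounge"),
    ContentEntry("antagonist", 35, 40, "parking garage", "makes a phone call"),
]


def describe_span(entries, start, end):
    """Collect the events whose time period overlaps [start, end] minutes."""
    hits = [e for e in entries if e.start_min < end and e.end_min > start]
    return "; ".join(f"{e.start_min}-{e.end_min} min: {e.characters} "
                     f"{e.event} ({e.scene})" for e in hits)


print(describe_span(entries, 30, 35))
# -> 30-35 min: protagonist runs through the lounge (airport lounge)
```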
In this embodiment, the display terminal inputs the text content into the preset interaction model by acquiring the voice interaction information of the target user for the movie and the television and converting the voice interaction information into the text content, and the preset interaction model acquires the data information associated with the movie and the television and responds to the text content according to the data information. Therefore, voice interaction information of the target user to the movie is converted into text content, data information associated with the movie is acquired by combining the text content and the preset interaction model, and the text content is responded, so that the user can interact based on the data information associated with the movie when watching the movie, the technical effect of timely sharing the feeling after watching is achieved, the interaction requirement of the user can be met, and the user experience is improved.
Further, in step S10 of the above-mentioned first embodiment, before the step of converting the voice interaction information into text content when the voice interaction information of the target user to the movie is detected, the method includes:
step S11, after the movie playing is finished or paused, if an operation instruction for triggering interaction is detected, it starts to detect whether there is voice interaction information of the target user for the movie.
After the movie has finished playing or is paused, if a trigger operation instruction is detected, the terminal detects whether there is voice interaction information of the target user about the movie and, if so, converts it into text content. In this embodiment, the target user may click a virtual end or pause button on the interface of the display terminal, press a physical end or pause button on a remote controller controlling the display terminal, or pause and end playback on the display terminal by voice control. Likewise, the operation instruction for triggering interaction may come from an interaction button on the interface of the display terminal, a physical interaction button on the remote controller, or a voice command for triggering interaction given to the display terminal. For example, after playback of the movie ends, when the display terminal receives the voice-controlled operation instruction for triggering interaction, it starts to detect whether voice interaction information of the target user exists. In this way, the target user can trigger the interactive operation instruction in different ways according to his or her own needs, giving the target user a better experience.
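Step S11 therefore gates voice detection on two conditions: playback has ended or is paused, and an interaction-trigger instruction has been received. A minimal sketch of that gate (the state names are illustrative):

```python
def should_start_listening(playback_state, trigger_received):
    """Begin detecting voice interaction information only after the movie
    has finished or been paused AND the user has explicitly triggered
    interaction (screen button, remote button, or voice command)."""
    return playback_state in ("finished", "paused") and trigger_received


print(should_start_listening("playing", True))    # still playing -> False
print(should_start_listening("paused", False))    # no trigger yet -> False
print(should_start_listening("finished", True))   # both conditions met -> True
```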
Optionally, step S11 may specifically include:
step S110, starting to collect environmental voice data;
step S111, judging whether the environmental voice data is matched with preset voiceprint information;
step S112, if the environment voice data is matched with the preset voiceprint information, voice interaction information of the target user to the film and television exists;
step S113, if the environmental voice data does not match the preset voiceprint information, the voice interaction information of the target user to the movie does not exist.
When detecting an operation instruction for triggering interaction, the display terminal starts to collect environmental voice data and judges whether the environmental voice data matches preset voiceprint information. If it matches, voice interaction information of the target user about the movie exists; if not, such voice interaction information does not exist. The preset voiceprint information can be voiceprint information set by a user in advance, and only environmental voice data matching the preset voiceprint information is treated as the target user's voice interaction information about the movie. For example, when there are two users, A and B, and user A's voiceprint information is set as the preset voiceprint information, then even if user B triggers the interactive operation instruction and the speech of both users is detected, only user A's speech about the movie is treated and executed as voice interaction information. In this way, interference from other users or from noise can be excluded, and misoperation of the display terminal is avoided.
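One plausible realization of matching environmental voice data against preset voiceprint information is to compare fixed-length voiceprint embeddings with cosine similarity against a threshold. The embeddings and threshold below are made-up illustrative values; the patent does not specify a matching algorithm.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def matches_target_user(sample_embedding, preset_embedding, threshold=0.9):
    """Treat ambient speech as the target user's voice interaction
    information only if its voiceprint is close enough to the preset one."""
    return cosine_similarity(sample_embedding, preset_embedding) >= threshold


preset = [0.9, 0.1, 0.4]  # user A's enrolled voiceprint (illustrative)
print(matches_target_user([0.88, 0.12, 0.41], preset))  # user A speaking -> True
print(matches_target_user([0.1, 0.9, 0.2], preset))     # user B speaking -> False
```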
Further, the step S30 of the first embodiment, obtaining data information associated with the movie based on the preset interaction model, and responding to the text content according to the data information, includes:
step S31, prestoring the film label information, the film content information and the film public opinion information which are associated with the film;
step S32, extracting keywords of the text content, and determining the target user intention according to the keywords;
step S33, obtaining response information matching the intention of the target user from the movie label information, the movie content information, and the movie public opinion information based on the preset interaction model, and feeding back the response information to the target user.
In this embodiment, the display terminal stores in advance the movie label information, movie content information and movie public opinion information associated with the movie. If voice interaction information of the target user about the movie is detected, the terminal converts the voice interaction information into text content and extracts the keywords of the text content together with their synonyms, wherein a keyword may also be a single word. It then determines the intention of the target user according to the keywords or their synonyms, acquires response information matching that intention from the movie label information, the movie content information and the movie public opinion information based on the preset interaction model, and feeds the response information back to the target user. Specifically, keywords about basic information (such as main actors, director, duration or genre) indicate that the target user wants the label information associated with the movie; keywords about characters, time periods, scenes, events and the like indicate the movie content information; and keywords about evaluation content, scores and the like indicate the movie public opinion information associated with the movie.
For example, the display terminal detects that the voice interaction information of the user is "I want to know the evaluation content of the movie 'Painted Skin'", converts the voice interaction information into text content, and extracts the keywords in the text content: "Painted Skin", "movie", "evaluation content", and the like. Optionally, the extracted keywords are further recombined into a complete short phrase, "the evaluation content of the movie 'Painted Skin'", which expresses the intention of the target user, namely to acquire the public opinion information associated with the movie. Response information matching this intention is then acquired, for example "the movie 'Painted Skin' is made very well and the plot is also good".
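The keyword-to-intent lookup described above can be sketched as follows. This is a minimal illustrative stand-in, not the patent's implementation: the rule table, the knowledge store, and names such as `extract_keywords` and `answer` are all assumptions made for the example.

```python
# Map keywords to the three categories of stored movie information.
INTENT_RULES = {
    # keywords pointing at movie content information
    "character": "content", "scene": "content", "event": "content",
    "time period": "content",
    # keywords pointing at movie public opinion information
    "evaluation content": "public_opinion", "score": "public_opinion",
    # keywords pointing at movie label information
    "duration": "label", "type": "label",
}

# Tiny stand-in for the pre-stored movie information.
KNOWLEDGE = {
    ("Painted Skin", "public_opinion"):
        "The film is made very well and the plot is also good.",
}

def extract_keywords(text):
    """Naive keyword extraction: keep phrases found in the rule table."""
    return [kw for kw in INTENT_RULES if kw in text]

def answer(text, title):
    """Determine the intent from the keywords and look up a matching response."""
    for kw in extract_keywords(text):
        intent = INTENT_RULES[kw]
        response = KNOWLEDGE.get((title, intent))
        if response:
            return response
    return "Sorry, no matching information was found."

print(answer("I want to know the evaluation content of Painted Skin",
             "Painted Skin"))
```

A real system would replace the substring matching with the synonym-aware keyword extraction and the trained interaction model described in the embodiment.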
In some embodiments, the display terminal may further collect the voice interaction information of a plurality of target users about movies and input it into a deep neural network, which outputs an attention value for each target user. The movie label information, movie content information and movie public opinion information associated with movies can then be recommended directly according to the attention value of the target user, so that movie information better matching the target user's interests is recommended and the target user's experience is improved.
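The shape of that attention-value computation can be illustrated with a single linear layer over a bag-of-words vector. This is only a schematic stand-in for the deep neural network the embodiment describes; the vocabulary, the weights and the bias are made-up placeholders, not learned parameters.

```python
import math

VOCAB = ["character", "scene", "evaluation", "score"]
WEIGHTS = [0.8, 0.5, 1.2, 1.0]   # placeholder "learned" weights
BIAS = -0.5

def attention_value(utterances):
    """Score a user's interaction texts into an attention value in (0, 1)."""
    counts = [sum(u.count(w) for u in utterances) for w in VOCAB]
    z = sum(w * c for w, c in zip(WEIGHTS, counts)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes to (0, 1)

score = attention_value(["what is the evaluation of this movie",
                         "show me the score"])
# the value can then rank which movie information to recommend first
```

In the patented scheme the mapping from utterances to the attention value would be learned end to end by the deep neural network rather than fixed by hand.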
Optionally, step S33 may specifically include:
step S331, the response information is fed back to the target user through text display and/or voice playing.
In this embodiment, the target user may select the feedback mode on the interface of the display terminal, or select it by voice. The terminal can feed the response information back to the target user through text display, convert the response information into speech and feed it back through voice playing, or use text display and voice playing at the same time. The target user can therefore choose a feedback mode according to his or her own needs, which improves the target user's experience.
Optionally, step S31 may specifically include:
step S311, recording the name, duration and type of the current movie according to the currently played movie to be used as the tag information of the movie;
step S312, according to the currently played movie, recording the movie content information of the current movie by a plurality of recording bars respectively, wherein the recording bars comprise characters, time periods, scenes and events;
and step 313, recording historical scoring and evaluation contents of other users on the current movie according to the currently played movie to serve as public opinion information of the movie.
The display terminal can record the name, duration and type of the current movie, according to the currently played movie, as the movie label information. It can also divide the currently played movie into a plurality of record bars and record the movie content information of the current movie in these record bars, where each record bar includes a character, a time period, a scene and an event of the currently played movie. For example, for the currently played movie "Painted Skin", one record bar may read "from minute 10 to minute 15, Zhao Wei rides a horse down the street". The terminal can further record the historical scores and evaluation content of other users for the current movie, according to the currently played movie, as the movie public opinion information. The scope of "other users" can be defined separately: it may cover all viewers on the video platform used by the target user, extend to users of other platforms, be limited to users whose evaluations are of higher quality, or be limited to users whose attributes are close to those of the target user, i.e., recommendation can be performed according to the similarity between user attributes and the attributes of the target user.
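One way to model the record bars described above is a small data structure holding the character, time period, scene and event of a segment of the current movie. The field names and the lookup helper are assumptions for illustration, not the patent's storage format.

```python
from dataclasses import dataclass

@dataclass
class RecordBar:
    character: str
    time_period: tuple  # (start_minute, end_minute)
    scene: str
    event: str

# Record bars for the currently played movie, e.g. "Painted Skin".
bars = [
    RecordBar("Zhao Wei", (10, 15), "street", "rides a horse down the street"),
]

def bars_for_minute(bars, minute):
    """Return the record bars whose time period covers the given minute."""
    return [b for b in bars if b.time_period[0] <= minute <= b.time_period[1]]

segment = bars_for_minute(bars, 12)  # the bar covering minutes 10-15
```

A content question such as "what happens at minute 12" could then be answered by retrieving the matching bar and reading its scene and event fields.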
Further, in the first embodiment described above, in step S20, the step of inputting the text content into the preset interaction model includes:
step S21, obtaining sample text content and data information associated with film and television for responding to the sample text content as a training set;
step S22, inputting the training set into a deep neural network for training to construct the preset interaction model;
step S23, if the text content is detected, inputting the text content into the preset interaction model.
The display terminal can obtain a large amount of sample text content, together with the movie label information, movie content information and movie public opinion information associated with the movie and used to respond to that sample text content, as a training set; input the training set into a deep neural network for training so as to construct the preset interaction model; and, if the text content is detected, input the text content into the preset interaction model. For example, if the sample text content is "we want to know who the director of 'The Left Ear' is", the movie label information used to respond to it is "the director of 'The Left Ear' is Su Youpeng (Alec Su)", and the pair can be used as part of the training set.
In addition, the chat content between the target user and the display terminal can also be used as a training set and input into the preset interaction model so as to update it. Training on the target user's own chat content makes the preset interaction model more robust and improves its accuracy.
In this embodiment, the target user may play the current movie with the video player of the display terminal. The terminal records, according to the currently played movie, the movie label information, movie content information and movie public opinion information associated with the current movie, and uses sample text content together with the associated movie information used to respond to it as a training set to construct the preset interaction model. When the movie playing is finished or paused and an operation instruction for triggering interaction is detected, the terminal converts the voice interaction information into text content and inputs it into the preset interaction model; it then acquires the movie label information, movie content information and movie public opinion information associated with the movie based on the preset interaction model, and responds to the text content according to this information.
To assist understanding of the technical solution of this embodiment, refer to fig. 3, which is an interaction diagram of the interaction method for the movie and television after-viewing feeling.
In addition, referring to fig. 4, an embodiment of the present invention further provides an interaction system for the movie and television after-viewing feeling, where the system includes:
and the detection module is used for converting the voice interaction information into text content if the voice interaction information of the target user to the movie is detected.
And the input module is used for inputting the text content to a preset interaction model.
And the acquisition module is used for acquiring data information associated with the film and the television based on the preset interaction model and responding the text content according to the data information.
Further, the detection module is further configured to, after the movie playing is finished or the movie playing is paused, start detecting whether the voice interaction information of the target user to the movie exists if an operation instruction for triggering interaction is detected.
Further, the detection module includes:
and the acquisition unit is used for acquiring environmental voice data.
And the judging unit is used for judging whether the environment voice data is matched with preset voiceprint information.
And the first matching unit is used for determining that the voice interaction information of the target user to the movie exists if the environment voice data is matched with the preset voiceprint information.
And the second matching unit is used for determining that the voice interaction information of the target user to the movie does not exist if the environment voice data is not matched with the preset voiceprint information.
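The voiceprint check performed by these units can be illustrated with a cosine-similarity comparison between an embedding of the ambient audio and the target user's enrolled voiceprint. The embeddings and the threshold are assumptions; a real system would obtain both from an acoustic model rather than hand-written vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_target_user(ambient_embedding, enrolled_voiceprint, threshold=0.8):
    """Match the ambient voice data against the preset voiceprint information."""
    return cosine(ambient_embedding, enrolled_voiceprint) >= threshold

# A close match is accepted as the target user's voice interaction...
matched = is_target_user([1.0, 0.1, 0.0], [0.9, 0.2, 0.0])
# ...while a dissimilar embedding is rejected.
rejected = is_target_user([0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
```

When the match succeeds, the detection module treats the audio as voice interaction information from the target user; otherwise it is ignored.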
Further, the acquisition module includes:
and the pre-storing unit is used for pre-storing the film label information, the film content information and the film public opinion information which are associated with the film.
And the extraction unit is used for extracting the keywords of the text content and determining the target user intention according to the keywords.
And the feedback unit is used for acquiring response information matched with the intention of the target user from the movie label information, the movie content information and the movie public opinion information based on a preset interaction model, and feeding back the response information to the target user.
Further, the feedback unit is further used for feeding back the response information to the target user through text display and/or voice broadcast.
Further, the pre-storage unit is further configured to record, according to the currently played movie, a name, a duration, and a type of the current movie as the movie label information.
Further, the pre-storage unit is further configured to record, according to the currently played movie, the movie content information of the current movie with a plurality of record bars, respectively, where the record bars include a person, a time period, a scene, and an event.
Further, the pre-storage unit is further configured to record, according to the currently played movie, historical scoring and evaluation contents of the current movie by other users as the movie public opinion information.
Further, the input module includes:
and the acquisition unit is used for acquiring sample text content and data information which is used for responding to the sample text content and is associated with the film and television as a training set.
And the construction unit is used for inputting the training set into a deep neural network for training so as to construct the preset interaction model.
And the input unit is used for inputting the text content into the preset interaction model if the text content is detected.
In addition, an embodiment of the present invention further provides a readable storage medium (i.e., a computer-readable memory), where the readable storage medium stores a movie and television after-viewing interaction program, and when executed by a processor, the movie and television after-viewing interaction program implements the following operations:
if the voice interaction information of the target user to the movie is detected, converting the voice interaction information into text content;
inputting the text content into a preset interaction model;
and acquiring data information associated with the film and the television based on the preset interaction model, and responding the text content according to the data information.
Further, when executed by the processor, the movie and television after-viewing interaction program also implements the following operations: after the movie and television playing is finished or paused, if an operation instruction for triggering interaction is detected, starting to detect whether the voice interaction information of the target user to the movie and television exists.
Further, starting to collect the environmental voice data;
judging whether the environment voice data is matched with preset voiceprint information or not;
if the environment voice data is matched with the preset voiceprint information, the voice interaction information of the target user to the film and the television exists;
and if the environment voice data is not matched with the preset voiceprint information, the voice interaction information of the target user to the film and television does not exist.
Further, the movie label information, the movie content information and the movie public opinion information which are associated with the movie are prestored;
extracting keywords of the text content, and determining the intention of the target user according to the keywords;
and acquiring response information matched with the intention of the target user from the movie label information, the movie content information and the movie public opinion information based on the preset interaction model, and feeding back the response information to the target user.
Further, the response information is fed back to the target user through text display and/or voice broadcast.
Further, according to the currently played movie, recording the name, duration and type of the current movie to serve as the movie label information;
according to the currently played movie, recording the movie content information of the current movie by a plurality of recording bars respectively, wherein the recording bars comprise characters, time periods, scenes and events;
according to the currently played movie, recording historical scoring and evaluation contents of other users on the current movie to serve as the movie public opinion information.
Further, sample text content and data information which is used for responding to the sample text content and is associated with the film and television are obtained to serve as a training set;
inputting the training set into a deep neural network for training so as to construct the preset interaction model;
and if the text content is detected, inputting the text content into the preset interaction model.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a display terminal device (e.g., a mobile phone, a computer, a server, a smart television, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An interaction method for the movie and television after-viewing feeling, characterized by comprising the following steps:
if the voice interaction information of the target user to the movie is detected, converting the voice interaction information into text content;
inputting the text content into a preset interaction model;
and acquiring data information associated with the film and the television based on the preset interaction model, and responding the text content according to the data information.
2. The interaction method for the movie and television after-viewing feeling as claimed in claim 1, wherein before the step of converting the voice interaction information into text content if the voice interaction information of the target user to the movie and television is detected, the method comprises:
and after the movie and television playing is finished or paused, if an operation instruction for triggering interaction is detected, starting to detect whether the voice interaction information of the target user to the movie and television exists.
3. The interaction method for the movie and television after-viewing feeling as claimed in claim 2, wherein the step of starting to detect whether the voice interaction information of the target user to the movie and television exists comprises:
starting to collect environmental voice data;
judging whether the environment voice data is matched with preset voiceprint information or not;
if the environment voice data is matched with the preset voiceprint information, the voice interaction information of the target user to the film and the television exists;
and if the environment voice data is not matched with the preset voiceprint information, the voice interaction information of the target user to the film and television does not exist.
4. The interaction method for the movie and television after-viewing feeling as claimed in claim 1, wherein the data information associated with the movie and television comprises movie label information, movie content information and movie public opinion information;
the method comprises the following steps of acquiring data information associated with the film and the television based on the preset interaction model, and responding the text content according to the data information, wherein the steps comprise:
prestoring the movie label information, the movie content information and the movie public opinion information which are associated with the movie;
extracting keywords of the text content, and determining the intention of the target user according to the keywords;
and acquiring response information matched with the intention of the target user from the movie label information, the movie content information and the movie public opinion information based on the preset interaction model, and feeding back the response information to the target user.
5. The interaction method for the movie and television after-viewing feeling as claimed in claim 4, wherein the step of feeding back the response information to the target user comprises:
and feeding back the response information to the target user through text display and/or voice broadcast.
6. The interaction method for the movie and television after-viewing feeling as claimed in claim 4, wherein the step of pre-storing the movie label information, the movie content information and the movie public opinion information which are associated with the movie comprises:
recording the name, duration and type of the current movie according to the currently played movie to serve as the tag information of the movie;
according to the currently played movie, recording the movie content information of the current movie by a plurality of recording bars respectively, wherein the recording bars comprise characters, time periods, scenes and events;
according to the currently played movie, recording historical scoring and evaluation contents of other users on the current movie to serve as the movie public opinion information.
7. The interaction method for the movie and television after-viewing feeling as claimed in claim 1, wherein the step of inputting the text content into a preset interaction model comprises:
acquiring sample text content and data information which is used for responding to the sample text content and is associated with the film and television as a training set;
inputting the training set into a deep neural network for training so as to construct the preset interaction model;
and if the text content is detected, inputting the text content into the preset interaction model.
8. An interaction system for the movie and television after-viewing feeling, the system comprising:
the detection module is used for converting the voice interaction information into text content if the voice interaction information of the target user to the movie is detected;
the input module is used for inputting the text content to a preset interaction model;
and the acquisition module is used for acquiring data information associated with the film and the television based on the preset interaction model and responding the text content according to the data information.
9. A display terminal, characterized in that the display terminal comprises: a memory, a processor, and a movie and television after-viewing interaction program stored on the memory and operable on the processor, wherein the movie and television after-viewing interaction program, when executed by the processor, implements the steps of the interaction method for the movie and television after-viewing feeling according to any one of claims 1 to 7.
10. A readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the interaction method for the movie and television after-viewing feeling according to any one of claims 1 to 7.
CN201911400420.8A 2019-12-27 2019-12-27 Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium Pending CN111107437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400420.8A CN111107437A (en) 2019-12-27 2019-12-27 Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium

Publications (1)

Publication Number Publication Date
CN111107437A true CN111107437A (en) 2020-05-05

Family

ID=70425211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400420.8A Pending CN111107437A (en) 2019-12-27 2019-12-27 Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN111107437A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010262413A (en) * 2009-04-30 2010-11-18 Nippon Hoso Kyokai <Nhk> Voice information extraction device
CN107368547A (en) * 2017-06-28 2017-11-21 西安交通大学 A kind of intelligent medical automatic question-answering method based on deep learning
CN107666536A (en) * 2016-07-29 2018-02-06 北京搜狗科技发展有限公司 A kind of method and apparatus for finding terminal, a kind of device for being used to find terminal
CN109697245A (en) * 2018-12-05 2019-04-30 百度在线网络技术(北京)有限公司 Voice search method and device based on video web page

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237401A (en) * 2021-12-28 2022-03-25 广州卓远虚拟现实科技有限公司 Seamless linking method and system for multiple virtual scenes
CN114237401B (en) * 2021-12-28 2024-06-07 广州卓远虚拟现实科技有限公司 Seamless linking method and system for multiple virtual scenes

Similar Documents

Publication Publication Date Title
TWI744368B (en) Play processing method, device and equipment
CN104010220B (en) Content service provides method and apparatus
WO2020000973A1 (en) Information access method, client, information access apparatus, terminal, server, and storage medium
CN109688475B (en) Video playing skipping method and system and computer readable storage medium
GB2589997A (en) Video access method, client, video access apparatus, terminal, server, and storage medium
CN111464844A (en) Screen projection display method and display equipment
CN110087124A (en) Long-range control method, terminal device and the smart television of smart television
CN104113786A (en) Information acquisition method and device
CN110213661A (en) Control method, smart television and the computer readable storage medium of full video
US9866913B1 (en) Binary TV
CN113573155A (en) Voice bullet screen implementation method and device, intelligent device and readable storage medium
CN112788268A (en) Information pushing method based on video recording, smart television and storage medium
US20150347597A1 (en) Apparatus and method for providing information
CN112135170A (en) Display device, server and video recommendation method
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
CN108401173B (en) Mobile live broadcast interactive terminal, method and computer readable storage medium
US11997341B2 (en) Display apparatus and method for person recognition and presentation
CN111274449B (en) Video playing method, device, electronic equipment and storage medium
EP3896985A1 (en) Reception device and control method
WO2022037224A1 (en) Display device and volume control method
CN111107437A (en) Interaction method and system for movie and television after-viewing feeling, display terminal and readable storage medium
EP3026925B1 (en) Image display apparatus and information providing method thereof
KR102217490B1 (en) Method and apparatus for searching broadcasting image
KR102220198B1 (en) Display device and operating method thereof
CN112883144A (en) Information interaction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200505