CN116132745A - Video interaction method and device for multiple terminals
- Publication number
- CN116132745A (application number CN202310406638.4A)
- Authority
- CN
- China
- Prior art keywords
- interaction
- video
- terminal
- label
- candidate
- Prior art date
- Legal status
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
Abstract
The application provides a video interaction method and device for multiple terminals, comprising the following steps: responding to an interaction message of a target terminal for a first video, creating a video interaction scene, wherein the video interaction scene is used for showing the first video; querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior including a direct interaction behavior, which indicates that the account of a candidate terminal is executing an interaction operation on the first video; matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result; and selecting an interaction terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interaction terminal to the video interaction scene. The method solves the problem that a user cannot be precisely matched with partners for an interaction behavior, which degrades the user's experience of an application's interaction features.
Description
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method and an apparatus for video interaction between multiple terminals.
Background
With the development of multimedia technology and changing user demands, audio-visual and social applications emerge one after another, and software functions become increasingly diverse. To improve user stickiness and enhance the product experience, application developers provide users with various types of interaction scenes, such as video bullet-screen comments, comment-section interactions, and video columns.
With current technology, when a user performs an interactive behavior, the user cannot be precisely matched with suitable partners for that behavior, which degrades the experience of using an application's interaction features.
At present, a video interaction method and device for multiple terminals are needed to solve the problems of the related art.
Disclosure of Invention
The application provides a video interaction method and device for multiple terminals, which solve the problem that a user cannot be accurately matched with partners for an interaction behavior, which degrades the user's experience of an application's interaction features. By adopting the method, the user's requirements for the current interaction behavior can be matched more accurately.
The first aspect of the present application provides a video interaction method for multiple terminals, the method comprising: responding to an interaction message of a target terminal for a first video, creating a video interaction scene, wherein the video interaction scene is used for showing the first video; querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior including a direct interaction behavior, which indicates that the account of a candidate terminal is executing an interaction operation on the first video; matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result; and selecting an interaction terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interaction terminal to the video interaction scene.
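For illustration only, these four steps can be sketched as server-side logic in Python; every identifier below (InteractionScene, query_candidates, load_history, match, preset_tag_fitness, account_of) is an assumption made for this sketch rather than part of the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class InteractionScene:
    video_id: str                      # the first video shown in the scene
    member_accounts: list = field(default_factory=list)

def handle_interaction_message(server, target_terminal, video_id):
    # Step 1: create a video interaction scene for the first video
    scene = InteractionScene(video_id=video_id)
    # Step 2: query historical interaction data of candidate terminals whose
    # accounts have an interaction behavior with the first video
    candidates = server.query_candidates(video_id)
    histories = {c: server.load_history(c) for c in candidates}
    target_history = server.load_history(target_terminal)
    # Step 3: match the target terminal's history against each candidate's
    matching = {c: server.match(target_history, h) for c, h in histories.items()}
    # Step 4: select interaction terminals and add their accounts to the scene
    for terminal, fitness in matching.items():
        if fitness >= server.preset_tag_fitness:
            scene.member_accounts.append(server.account_of(terminal))
    return scene
```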
By adopting the method, on the basis of providing the user with an interaction scene for a specific video, the historical interaction data of the target terminal is matched with the historical interaction data of the plurality of candidate terminals, so that the user can be matched with interaction objects better suited to the current interaction scene and the user's own interaction requirements.
Optionally, the plurality of candidate terminals include a first candidate terminal, and matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically includes: acquiring a target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal; acquiring a plurality of tag groups showing the account preferences of the plurality of candidate terminals based on their historical interaction data, wherein the plurality of tag groups include a first tag group showing the account preference of the first candidate terminal; and matching the tag fitness of the target tag group against each of the plurality of tag groups to obtain a matching result.
By adopting the method, tag groups showing the account preferences of the plurality of candidate terminals and a target tag group showing the preference of the target terminal are obtained, and the tag fitness between the target tag group and each of the tag groups is then computed, so that the matching result is more accurate.
Optionally, according to the matching result, selecting an interaction terminal from the plurality of candidate terminals, specifically including: when the tag fitness of the first tag group is greater than or equal to the preset tag fitness, the first candidate terminal is confirmed to be an interactive terminal.
By adopting the method, a candidate terminal whose tag fitness is greater than or equal to the preset tag fitness is selected from the plurality of candidate terminals, matching a suitable interaction terminal for the user.
Optionally, the interaction behavior further includes an indirect interaction behavior, where the indirect interaction behavior indicates that the account of a candidate terminal is executing an interaction operation on a video related to the first video; matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result then specifically includes:
when, in the matching result, the tag fitness of every one of the plurality of tag groups is smaller than the preset tag fitness, querying a plurality of alternative terminals having an indirect interaction behavior with respect to the first video, wherein the plurality of alternative terminals include a first alternative terminal; acquiring a plurality of second tag groups showing the account preferences of the plurality of alternative terminals based on their historical interaction data, wherein the plurality of second tag groups include a second tag group, the second tag group showing the account preference of the first alternative terminal; and when the tag fitness between the second tag group and the target tag group is greater than or equal to the preset tag fitness, confirming the first alternative terminal as the interaction terminal.
By adopting the method, when the candidate terminals having direct interaction behaviors cannot meet the interaction requirement of the user's current interaction scene, other terminals having indirect interaction behaviors are queried, and interaction terminals meeting that requirement are screened out. This avoids the situation where no suitable interaction terminal can be matched for the user from the plurality of candidate terminals.
Optionally, obtaining a target tag group showing the preference of the target terminal based on the historical data of the target terminal specifically includes: obtaining, by keyword screening, a plurality of tags and the coincidence degrees of the tags from the historical data of the target terminal; acquiring the tag types of the first video and the coincidence degrees of the tag types of the first video; and correcting the coincidence degrees of the plurality of tags based on the coincidence degrees of the tag types of the first video to obtain a plurality of corrected tag coincidence degrees, and combining the plurality of tags with the corrected coincidence degrees to obtain the target tag group.
By adopting the method, after the plurality of tags of the target terminal and their coincidence degrees are obtained, the coincidence degrees are corrected according to the coincidence degrees of the tag types of the first video, so that the subsequently obtained target tag group better matches the user's interaction requirement in the current interaction scene.
Optionally, the coincidence degrees of the plurality of tags are corrected based on the coincidence degrees of the tag types of the first video to obtain a plurality of corrected tag coincidence degrees, the corrected tag coincidence degrees being determined according to the following formula.

The mapping between the coincidence degree of a tag before and after correction is:

$$y'_j=\begin{cases}y_j\,(1+x_i), & b_j=a_i\\ y_j, & b_j\neq a_i\ \text{for every } i\end{cases}$$

Here $b_j=a_i$ denotes the same tag, in which case $y'_j$ is greater than $y_j$; $x_i$ and $y_j$ take values in $(0,1]$; $b_j$ is the $j$-th tag among the plurality of tags of the target terminal; $a_i$ is the $i$-th tag type of the first video; $y_j$ is the coincidence degree of the $j$-th tag; $x_i$ is the coincidence degree of the $i$-th tag type of the first video; and $y'_j$ is the corrected coincidence degree of the $j$-th tag.
By adopting the above method, the application obtains the corrected coincidence degree $y'_j$ of the $j$-th tag among the plurality of tags.
Optionally, matching the tag fitness of the target tag group against the plurality of tag groups specifically includes: obtaining, by keyword screening, a plurality of tags of the first candidate terminal and the coincidence degrees of the tags; and acquiring the cosine similarity between the coincidence degrees of the plurality of tags of the first candidate terminal and the plurality of corrected coincidence degrees, wherein the cosine similarity is the tag fitness.
By adopting the method, the tag fitness is obtained as the cosine similarity between the coincidence degrees of the tags of the first candidate terminal and the corrected coincidence degrees.
A second aspect of the present application provides a video interaction device for multiple terminals, the device comprising: a scene creation unit, a data query unit, a matching unit, and an interaction unit;
the scene creation unit is used for responding to the interaction message of the target terminal aiming at the first video, and creating a video interaction scene which is used for displaying the first video;
the data query unit is used for querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior including a direct interaction behavior, which indicates that the account of a candidate terminal is executing an interaction operation on the first video;
the matching unit is used for matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result;
and the interaction unit is used for selecting the interaction terminal from the plurality of candidate terminals according to the matching result so as to add the account of the interaction terminal into the video interaction scene.
A third aspect of the present application provides an electronic device comprising a processor, a memory, a user interface and a network interface, the memory being used to store instructions, the user interface and the network interface being used to communicate with other devices, and the processor being used to execute the instructions stored in the memory so as to cause the electronic device to perform any one of the methods above.
A fourth aspect of the present application provides a computer readable storage medium storing instructions that, when executed, perform a method of any one of the above.
Compared with the related art, the beneficial effects of the application are as follows:
1. On the basis of providing the user with an interaction scene for a specific video, the user can be matched, by matching the historical interaction data of the target terminal with that of a plurality of candidate terminals, with interaction objects better suited to the current interaction scene and the user's own interaction requirements;
2. Tag groups showing the account preferences of the plurality of candidate terminals and a target tag group showing the preference of the target terminal are obtained, and the tag fitness between the target tag group and each tag group is then computed, making the matching result more accurate;
3. When the candidate terminals having direct interaction behaviors cannot meet the interaction requirement of the user's current interaction scene, other terminals having indirect interaction behaviors are queried, and interaction terminals meeting that requirement are screened out, avoiding the situation where no suitable interaction terminal can be matched from the plurality of candidate terminals;
4. After the plurality of tags of the target terminal and their coincidence degrees are obtained, the coincidence degrees are corrected according to the coincidence degrees of the tag types of the first video, so that the resulting target tag group better matches the user's interaction requirement in the current interaction scene.
Drawings
Fig. 1 is a first flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
Fig. 2 is a schematic view of a scenario of a video interaction method of multiple terminals according to an embodiment of the present application;
Fig. 3 is a second flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
Fig. 4 is a third flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
Fig. 5 is a fourth flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a multi-terminal video interaction device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 11. a scene creation unit; 12. a data query unit; 13. a matching unit; 14. an interaction unit; 15. an alternative matching unit; 700. an electronic device; 701. a processor; 702. a communication bus; 703. a user interface; 704. a network interface; 705. a memory.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "exemplary," "such as" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "illustrative," "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "illustratively," "such as" or "for example," etc., is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a alone, B alone, and both A and B. In addition, unless otherwise indicated, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The method in the embodiments of the application can be applied to a server to solve the problem in the related art that, when a user performs an interactive behavior on a video, the user cannot be precisely matched with partners for that behavior, which degrades the experience of using an application's interaction features.
As shown in fig. 1, the present application provides a flowchart of a video interaction method for multiple terminals, which is applied to a server and includes steps S11-S14.
S11, responding to the interaction message of the target terminal aiming at the first video, and creating a video interaction scene which is used for displaying the first video.
In this embodiment of the present application, when a user views a first video in a target terminal, a specific operation may be performed to enable the target terminal to send an interaction message for the first video to a server, and the server creates a video interaction scene in the target terminal after receiving the interaction message, as shown in fig. 2.
S12, querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior including a direct interaction behavior, which indicates that the account of a candidate terminal is executing an interaction operation on the first video.
In this embodiment, a candidate terminal is the operating terminal of a user account that is showing the first video at the same time. An interaction behavior exists between the account of the candidate terminal and the first video; in this embodiment, the interaction behavior includes a direct interaction behavior, which indicates that the account of the candidate terminal is executing an interaction operation on the first video at the time the target terminal sends the interaction message for the first video to the server. In practical applications, the interaction behavior of the candidate terminal's account on the first video may not be fully synchronized with the interaction message. Therefore, this embodiment preliminarily gauges how well a candidate account's direct interaction behavior matches the target terminal's current interaction requirement by comparing the time coincidence rate between the interaction operation and the interaction message.
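A minimal sketch of such a time-coincidence check, assuming interaction operations are logged as (start, end) timestamp intervals and assuming an illustrative window length; neither assumption comes from the embodiment:

```python
def time_coincidence_rate(op_intervals, msg_time, window=60.0):
    """Fraction of a window around the interaction message that is covered
    by the candidate account's interaction operations on the first video."""
    lo, hi = msg_time - window, msg_time + window
    covered = 0.0
    for start, end in op_intervals:
        # length of the overlap between this operation and the window
        covered += max(0.0, min(end, hi) - max(start, lo))
    return covered / (hi - lo)

# Example: operations covering 90 s of a 120 s window give a rate of 0.75.
rate = time_coincidence_rate([(10.0, 100.0)], msg_time=70.0, window=60.0)
```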
S13, matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result.
In this embodiment, the historical interaction data of the plurality of candidate terminals refers to records of the interaction behaviors that a candidate terminal performed on historically watched videos during a historical period. To meet the interaction requirement of the target terminal's account and obtain more accurate interaction objects, only historically watched videos on which interaction behaviors exist are selected from the candidate terminals during object matching, and the video types preferred by the candidate terminals when performing interaction behaviors are analyzed. Therefore, in this embodiment, videos on which only a simple viewing action was performed are not taken as historical interaction data for the subsequent judgment. Through such judgment and screening, a more accurate matching result can be obtained.
In one possible implementation, the plurality of candidate terminals includes a first candidate terminal; in step S13, the historical interaction data of the target terminal and the historical interaction data of the plurality of candidate terminals are matched to obtain a matching result, which specifically includes steps S131 to S133 shown in fig. 3.
S131, acquiring a target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal.
In this embodiment, a target tag group showing the preference of the target terminal is obtained from the historical interaction data of the target terminal. The historical interaction data of the target terminal is judged and screened in the same way as that of the candidate terminals: only historically watched videos for which the account of the target terminal has an interaction behavior are taken as part of the target terminal's historical interaction data. The specific way of obtaining the target tag group is described below.
In one possible implementation, as shown in fig. 4, the target tag group showing the preference of the target terminal is obtained based on the historical interaction data of the target terminal, and specifically includes steps S131a-S131c.
S131a, obtaining, by keyword screening, a plurality of tags and the coincidence degrees of the tags from the historical interaction data of the target terminal.
Specifically, for a historical video on which the target terminal has an interaction behavior, all tags of the historical interactive video are acquired based on viewers' custom tag evaluations and the video platform's automatic tag pushing; the higher the association between a tag and the historical interactive video, the higher the tag's coincidence degree. Before the tag types of the first video and their coincidence degrees are actually acquired, tags with too low a coincidence degree can be screened out by setting a coincidence degree threshold for the tags.
In one possible implementation, the coincidence degree threshold of the tags is adjusted according to the satisfaction of the target terminal's account with the matching result: when many candidate terminals are obtained in the matching result, the threshold can be raised so that fewer candidate terminals meet the matching requirement; when few candidate terminals are obtained, the threshold can be lowered so that more candidate terminals meet the matching requirement.
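The threshold screening and its adjustment might look as follows; the step size and the target range of candidate counts are assumed values for illustration only:

```python
def screen_tags(tags, threshold):
    """Keep only tags whose coincidence degree reaches the threshold.
    tags: dict mapping tag name -> coincidence degree in (0, 1]."""
    return {tag: degree for tag, degree in tags.items() if degree >= threshold}

def adjust_threshold(threshold, num_matched, target_range=(3, 10), step=0.05):
    """Raise the threshold when too many candidate terminals match, lower it
    when too few match, so the matching requirement stays satisfiable."""
    low, high = target_range
    if num_matched > high:
        return min(1.0, threshold + step)
    if num_matched < low:
        return max(0.0, threshold - step)
    return threshold
```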
S131b, acquiring the tag types of the first video and the coincidence degrees of the tag types of the first video.

Specifically, in this embodiment, the $i$-th tag type of the first video is denoted $a_i$, and its coincidence degree is denoted $x_i$.
S131c, correcting the coincidence degrees of the plurality of tags based on the coincidence degrees of the tag types of the first video to obtain a plurality of corrected tag coincidence degrees, and combining the plurality of tags with the corrected coincidence degrees to obtain the target tag group.
In one possible implementation, the coincidence degrees of the plurality of tags are corrected based on the coincidence degrees of the tag types of the first video to obtain a plurality of corrected tag coincidence degrees, determined according to the following formula.

The mapping between the coincidence degree of a tag before and after correction is:

$$y'_j=\begin{cases}y_j\,(1+x_i), & b_j=a_i\\ y_j, & b_j\neq a_i\ \text{for every } i\end{cases}$$

Here $b_j=a_i$ denotes the same tag, in which case $y'_j$ is greater than $y_j$; $x_i$ and $y_j$ take values in $(0,1]$; $b_j$ is the $j$-th tag among the plurality of tags of the target terminal; $a_i$ is the $i$-th tag type of the first video; $y_j$ is the coincidence degree of the $j$-th tag; $x_i$ is the coincidence degree of the $i$-th tag type of the first video; and $y'_j$ is the corrected coincidence degree of the $j$-th tag.

In the above formula, the tag types of the target terminal's historical interactive videos are richer than those of the first video and include the first video's tag types. Therefore, when the coincidence degrees of the plurality of tags are corrected based on the coincidence degrees of the first video's tag types, only the tags that appear among the first video's tag types are corrected; for the remaining tags, i.e. the case $b_j\neq a_i$ in the formula, the coincidence degree is kept unchanged.
The formula for the corrected coincidence degree $y'_j$ of the $j$-th tag is obtained through the following process.

The process is described below with specific data from this embodiment. For the first video, its tag types $a_i$ and their coincidence degrees $x_i$ are obtained. At the same time, a plurality of tags $b_j$ and their coincidence degrees $y_j$ are obtained from the historical interaction data of the target terminal. In this embodiment, 10 tags actually exist in the historical interaction data of the target terminal, and the coincidence degree threshold of the tags is set to 0.4, so tags with a coincidence degree below 0.4 are removed. The corrected coincidence degree of each remaining tag is then computed according to the formula for $y'_j$ above, yielding the final corrected coincidence degrees of the plurality of tags.

Since a corrected coincidence degree can be greater than 1, normalization processing is required for the corrected coincidence degrees, using a fixed reference value to obtain the normalization result. The normalization process itself is not described in detail in this embodiment.
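Under the formula as reconstructed above, the correction and normalization steps can be sketched as follows; taking the maximum corrected value as the normalization reference is an assumption of this sketch, since the embodiment leaves the reference value open:

```python
def correct_coincidence(terminal_tags, video_tags):
    """terminal_tags: {b_j: y_j} from the target terminal's history.
    video_tags: {a_i: x_i} for the tag types of the first video.
    Returns the corrected coincidence degrees {b_j: y'_j}."""
    corrected = {}
    for tag, y in terminal_tags.items():
        if tag in video_tags:      # b_j = a_i: boost by the video-type degree
            corrected[tag] = y * (1.0 + video_tags[tag])
        else:                      # no matching a_i: keep y_j unchanged
            corrected[tag] = y
    return corrected

def normalize(corrected):
    """Scale the corrected degrees back into (0, 1]; using the maximum as
    the reference value is an assumption, not taken from the embodiment."""
    reference = max(corrected.values())
    return {tag: value / reference for tag, value in corrected.items()}
```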
S132, acquiring a plurality of tag groups showing the account preferences of the plurality of candidate terminals based on their historical interaction data, wherein the plurality of tag groups include a first tag group showing the account preference of the first candidate terminal.
S133, matching the tag fitness of the target tag group against each of the plurality of tag groups to obtain a matching result.
In one possible implementation, matching the tag fitness of the target tag group against the plurality of tag groups specifically includes steps S133a-S133b.
S133a, obtaining, by keyword screening, a plurality of tags of the first candidate terminal and the coincidence degrees of the tags.
In this embodiment, the method for obtaining the plurality of tags of the first candidate terminal and their coincidence degrees may refer to the method for obtaining the plurality of tags of the target terminal and their coincidence degrees, and is not described in detail here. In an exemplary embodiment, the coincidence degrees of the plurality of tags of the first candidate terminal are obtained, and these coincidence degrees have already been normalized.
S133b, acquiring the cosine similarity between the coincidence degrees of the plurality of tags of the first candidate terminal and the plurality of corrected coincidence degrees, wherein the cosine similarity is taken as the tag fitness.
In this embodiment, the cosine similarity between the two groups of coincidence degrees is calculated to obtain the correlation of the two groups of data, giving a tag fitness of 0.9425.
In this embodiment, the tag fitness of the two sets of tag coincidence degrees may also be obtained in other manners, which is not specifically limited here.
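One ordinary way to compute this cosine similarity, shown for illustration; aligning the two tag vocabularies into one vector space is an assumption of this sketch:

```python
import math

def tag_fitness(candidate_degrees, corrected_degrees):
    """Cosine similarity between the candidate terminal's coincidence degrees
    and the target terminal's corrected coincidence degrees."""
    vocab = sorted(set(candidate_degrees) | set(corrected_degrees))
    u = [candidate_degrees.get(tag, 0.0) for tag in vocab]
    v = [corrected_degrees.get(tag, 0.0) for tag in vocab]
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```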
In a possible implementation manner, the interaction behavior further includes an indirect interaction behavior, where the indirect interaction behavior is used to indicate that the account number of the candidate terminal is performing an interaction operation with respect to the relevant video of the first video; after matching the historical interaction data of the target terminal and the historical interaction data of the plurality of candidate terminals to obtain a matching result in step S13, as shown in fig. 5, steps S151 to S154 are specifically included.
S151, determining that, in the matching result, the tag fitness of every one of the plurality of tag groups is smaller than the preset tag fitness.

Specifically, when every tag fitness obtained through the above method between the plurality of tag groups and the target tag group is smaller than the preset tag fitness, the candidate terminals having direct interaction behaviors on the first video are considered unable to meet the interaction requirement of the target terminal's account. Thus, to expand the scope for matching interaction objects, a plurality of alternative terminals having an indirect interaction behavior with respect to the first video are queried.
S152, inquiring a plurality of alternative terminals with indirect interaction behaviors aiming at the first video, wherein the plurality of alternative terminals comprise the first alternative terminal.
In this embodiment, an account with indirect interaction behavior is one that is currently watching another video related to the first video, or whose historically watched videos include videos related to the first video, without watching the first video itself. For example: assume the first video is the second season of a certain variety show, and the video watched by the account of the first alternative terminal in the historical period is the first season of that show; it can then be judged that an indirect interaction behavior exists between the account of the first alternative terminal and the first video. By providing alternative terminals having indirect interaction with the first video, the matching scope for the interaction requirement of the target terminal's account can be expanded.
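A toy version of this relatedness check, assuming relatedness is decided by series membership as in the variety-show example; the series_of lookup is hypothetical:

```python
def has_indirect_interaction(watched_history, first_video, series_of):
    """True if the account watched another video in the same series as the
    first video (e.g., season 1 when the first video is season 2)."""
    target_series = series_of(first_video)
    return any(video != first_video and series_of(video) == target_series
               for video in watched_history)
```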
S153, acquiring a plurality of second tag groups showing the account preferences of the plurality of alternative terminals based on their historical interaction data, wherein the plurality of second tag groups include a second tag group; the second tag group shows the account preference of the first alternative terminal.
In this embodiment, the historical interaction data of the candidate terminals and of the alternative terminals may be the same type of video viewing data.
And S154, when the label matching degree of the second label group and the target label group is larger than or equal to the preset label matching degree, the first alternative terminal is confirmed to be an interactive terminal.
And S14, selecting the interactive terminal from the plurality of candidate terminals according to the matching result so as to add the account of the interactive terminal into the video interactive scene.
In a possible implementation manner, according to the matching result, selecting an interactive terminal from a plurality of candidate terminals specifically includes: when the tag fitness of the first tag group is greater than or equal to the preset tag fitness, the first candidate terminal is confirmed to be an interactive terminal.
In this embodiment, for the plurality of terminals having one or more of the direct and indirect interaction behaviors, those satisfying the preset tag fitness are obtained, and interaction-behavior invitations are sent to them in order of tag fitness. If the account of such a terminal is itself performing a video interaction matching action, that terminal is pulled into the video interaction scene directly, without an invitation.
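An illustrative sketch of this selection step; the server interface (is_matching_now, send_invitation, account_of) is assumed for the sketch:

```python
def select_interaction_terminals(scene, matching, preset_fitness, server):
    """Invite qualifying terminals in descending order of tag fitness; pull
    accounts that are themselves matching right now straight into the scene."""
    qualified = [(t, f) for t, f in matching.items() if f >= preset_fitness]
    for terminal, _ in sorted(qualified, key=lambda pair: pair[1], reverse=True):
        if server.is_matching_now(terminal):
            scene.member_accounts.append(server.account_of(terminal))
        else:
            server.send_invitation(terminal, scene)
```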
In a possible implementation manner, an account corresponding to the target terminal can have management authority of the video interaction scene.
By adopting the method embodiment, the beneficial effects which can be achieved are as follows:
1. On the basis of providing the user with an interaction scene for a specific video, the user can be matched, by matching the historical interaction data of the target terminal with that of a plurality of candidate terminals, with interaction objects better suited to the current interaction scene and the user's own interaction requirements.

2. Tag groups showing the account preferences of the plurality of candidate terminals and a target tag group showing the preference of the target terminal are obtained, and the tag fitness between the target tag group and each tag group is then computed, making the matching result more accurate.

3. When the candidate terminals having direct interaction behaviors cannot meet the interaction requirement of the user's current interaction scene, other terminals having indirect interaction behaviors are queried, and interaction terminals meeting that requirement are screened out, avoiding the situation where no suitable interaction terminal can be matched from the plurality of candidate terminals.

4. After the plurality of tags of the target terminal and their coincidence degrees are obtained, the coincidence degrees are corrected according to the coincidence degrees of the tag types of the first video, so that the resulting target tag group better matches the user's interaction requirement in the current interaction scene.
The embodiment of the application provides a video interaction device of multiple terminals, as shown in fig. 6, the device includes: a scene creation unit 11, a data query unit 12, a matching unit 13, and an interaction unit 14;
the scene creation unit 11 is configured to create a video interaction scene for showing the first video in response to an interaction message of the target terminal for the first video.
The data query unit 12 is configured to query historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior including a direct interaction behavior, which indicates that the account of a candidate terminal is executing an interaction operation on the first video.
And a matching unit 13 for matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result.
And the interaction unit 14 is used for selecting the interaction terminal from the candidate terminals according to the matching result so as to add the account of the interaction terminal into the video interaction scene.
In a possible implementation manner, the plurality of candidate terminals include a first candidate terminal, and the matching unit 13 includes a target tag group acquisition module, a candidate terminal tag acquisition module, and a matching result acquisition module.
The target tag group acquisition module is used for acquiring the target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal.
The candidate terminal tag acquisition module is used for acquiring a plurality of tag groups showing the account preferences of the plurality of candidate terminals based on their historical interaction data, wherein the plurality of tag groups include a first tag group showing the account preference of the first candidate terminal.
The matching result acquisition module is used for matching the tag fitness of the target tag group against each of the plurality of tag groups to obtain a matching result.
In one possible implementation, the interaction unit 14 comprises an interaction terminal confirmation module.
And the interactive terminal confirmation module is used for confirming the first candidate terminal as the interactive terminal when the label matching degree of the first label group is larger than or equal to the preset label matching degree.
In a possible embodiment, the interaction behavior further comprises an indirect interaction behavior for indicating that the account number of the candidate terminal is performing an interaction operation with respect to the relevant video of the first video, after the matching unit 13, the apparatus further comprises an alternative matching unit 15.
An alternative matching unit 15, configured to: when, in the matching result, the tag fitness of every one of the plurality of tag groups is smaller than the preset tag fitness, query a plurality of alternative terminals having an indirect interaction behavior with respect to the first video, the plurality of alternative terminals including a first alternative terminal; acquire a plurality of second tag groups showing the account preferences of the plurality of alternative terminals based on their historical interaction data, the plurality of second tag groups including a second tag group that shows the account preference of the first alternative terminal; and when the tag fitness between the second tag group and the target tag group is greater than or equal to the preset tag fitness, confirm the first alternative terminal as the interaction terminal.
In one possible implementation, the target tag group acquisition module includes a first coincidence degree acquisition sub-module, a second coincidence degree acquisition sub-module, and a coincidence degree correction sub-module.

The first coincidence degree acquisition sub-module is used for obtaining, by keyword screening, a plurality of tags and the coincidence degrees of the tags from the historical data of the target terminal.

The second coincidence degree acquisition sub-module is used for acquiring the tag types of the first video and the coincidence degrees of the tag types of the first video.
The coincidence degree correction sub-module is used for correcting the coincidence degrees of the plurality of tags based on the coincidence degrees of the tag types of the first video to obtain a plurality of corrected tag coincidence degrees, and combining the plurality of tags with the corrected coincidence degrees to obtain the target tag group. The corrected tag coincidence degrees are determined according to the following formula.

The mapping between the coincidence degree of a tag before and after correction is:

$$y'_j=\begin{cases}y_j\,(1+x_i), & b_j=a_i\\ y_j, & b_j\neq a_i\ \text{for every } i\end{cases}$$

where $b_j$ is the $j$-th tag among the plurality of tags, $a_i$ is the $i$-th tag type of the first video, $y_j$ is the coincidence degree of the $j$-th tag, $x_i$ is the coincidence degree of the $i$-th tag type of the first video, and $y'_j$ is the corrected coincidence degree of the $j$-th tag.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
Referring to fig. 7, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 7, the electronic device 700 may include: at least one processor 701, at least one network interface 704, a user interface 703, a memory 705, at least one communication bus 702.
Wherein the communication bus 702 is used to enable connected communications between these components.
The user interface 703 may include a display screen (Display) and a camera (Camera); optionally, the user interface 703 may further include a standard wired interface and a wireless interface.
The network interface 704 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 701 may include one or more processing cores. The processor 701 connects various parts of the overall server using various interfaces and lines, and performs the various functions of the server and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 705 and by invoking data stored in the memory 705. Optionally, the processor 701 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 701 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 701 and may be implemented by a separate chip.
The memory 705 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 705 includes a non-transitory computer-readable storage medium. The memory 705 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 705 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like; the stored-data area may store the data involved in the above method embodiments. The memory 705 may optionally also be at least one storage device located remotely from the processor 701. As shown in fig. 7, the memory 705, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for video interaction of multiple terminals.
In the electronic device 700 shown in fig. 7, the user interface 703 is mainly used for providing an input interface for a user, and acquiring data input by the user; and processor 701 may be configured to invoke application programs for video interactions of multiple terminals stored in memory 705, which when executed by one or more processors, cause electronic device 700 to perform the methods as described in one or more of the embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
Claims (10)
1. A video interaction method for multiple terminals, which is applied to a server, the method comprising:
responding to an interaction message of a target terminal aiming at a first video, and creating a video interaction scene, wherein the video interaction scene is used for displaying the first video;
querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein an interaction behavior exists between the accounts of the candidate terminals and the first video, the interaction behavior comprising a direct interaction behavior, the direct interaction behavior being used for indicating that the account of a candidate terminal is executing an interaction operation on the first video;
matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result;
and selecting an interactive terminal from the candidate terminals according to the matching result so as to add the account of the interactive terminal into the video interactive scene.
2. The method according to claim 1, wherein the plurality of candidate terminals includes a first candidate terminal, and the matching the historical interaction data of the target terminal and the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically includes:
acquiring a target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal;
acquiring a plurality of tag groups showing the account preferences of the plurality of candidate terminals based on the historical interaction data of the plurality of candidate terminals, wherein the plurality of tag groups comprise a first tag group showing the account preference of the first candidate terminal;
and matching the tag fitness of the target tag group against each of the plurality of tag groups to obtain the matching result.
3. The method according to claim 2, wherein the selecting an interactive terminal from a plurality of candidate terminals according to the matching result specifically comprises:
when the tag fitness of the first tag group is greater than or equal to a preset tag fitness, confirming the first candidate terminal as the interaction terminal.
4. The method according to claim 2, wherein the interaction behaviors further comprise an indirect interaction behavior indicating that an account is performing an interaction operation on a video related to the first video, and matching the historical interaction data of the target terminal against the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically comprises:
when, in the matching result, the tag matching degree of each of the tag groups is smaller than the preset tag matching degree:
querying a plurality of alternative terminals having an indirect interaction behavior with respect to the first video, wherein the plurality of alternative terminals comprises a first alternative terminal;
acquiring, based on historical interaction data of the plurality of alternative terminals, a plurality of second tag groups showing account preferences of the alternative terminals, wherein the plurality of second tag groups comprises a second tag group showing an account preference of the first alternative terminal; and
confirming the first alternative terminal as the interactive terminal when the tag matching degree between the second tag group and the target tag group is greater than or equal to the preset tag matching degree.
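The fallback in claim 4 can be pictured as below, reusing `select_interactive_terminal` and `tag_matching_degree` from the sketch after claim 3; again, all names are illustrative.

```python
# Hedged sketch of claim 4's fallback: when no directly-interacting candidate
# reaches the preset tag matching degree, widen the search to alternative
# terminals whose accounts interacted with videos related to the first video.

def pick_terminal_with_fallback(target_group: dict[str, float],
                                direct_groups: dict[str, dict[str, float]],
                                indirect_groups: dict[str, dict[str, float]],
                                preset: float = 0.6) -> str | None:
    chosen = select_interactive_terminal(target_group, direct_groups, preset)
    if chosen is not None:
        return chosen
    # All direct tag matching degrees fell below the preset value, so match
    # against the second tag groups of the alternative terminals instead.
    return select_interactive_terminal(target_group, indirect_groups, preset)
```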
5. The method according to claim 2, wherein acquiring a target tag group showing an account preference of the target terminal based on the historical interaction data of the target terminal specifically comprises:
extracting, by keyword screening, a plurality of tags and their coincidence degrees from the historical interaction data of the target terminal;
acquiring the tag type of the first video and the coincidence degree of the tag type of the first video; and
correcting the coincidence degrees of the tags based on the coincidence degree of the tag type of the first video to obtain a plurality of corrected tag coincidence degrees, and combining the tags with the corrected tag coincidence degrees to obtain the target tag group.
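As a rough picture of the extraction step in claim 5, keyword screening can be approximated as counting keyword hits over the interaction history and normalizing to a coincidence degree. The keyword list and the frequency-based degree below are assumptions, not the patent's definitions.

```python
# Hedged sketch of claim 5's tag extraction via keyword screening.
from collections import Counter

def extract_tag_group(history_texts: list[str],
                      keywords: set[str]) -> dict[str, float]:
    # Count how often each screening keyword appears in the history.
    hits = Counter(word for text in history_texts
                   for word in text.lower().split() if word in keywords)
    total = sum(hits.values())
    # Normalize counts into coincidence degrees (an assumption).
    return {tag: count / total for tag, count in hits.items()} if total else {}
```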
6. The method according to claim 5, wherein correcting the coincidence degrees of the tags based on the coincidence degree of the tag type of the first video to obtain a plurality of corrected tag coincidence degrees is performed according to the following mapping relation:
The mapping relation between the coincidence degrees of the tags before and after correction is defined in terms of: $x_i$, the $i$-th tag of the target terminal; $y_j$, the $j$-th tag of the tag type of the first video; $a_i$, the coincidence degree of the $i$-th tag; $b_j$, the coincidence degree of the $j$-th tag of the tag type of the first video; and $a_i'$, the corrected coincidence degree of the $i$-th tag. (The formula itself appears only as an image in the published text.)
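Since only the variable definitions survive, the fragment below is offered purely as one plausible shape for such a correction, not as the formula actually filed: each tag's coincidence degree is scaled up by the coincidence degrees of matching tags in the first video's tag type.

```latex
% Illustrative reconstruction only; the filed formula is not recoverable
% from the published text. a_i: coincidence degree of the target terminal's
% i-th tag x_i; b_j: coincidence degree of the j-th tag y_j of the first
% video's tag type; a_i': the corrected coincidence degree.
\[
  a_i' \;=\; a_i \cdot \Bigl( 1 + \sum_{j \,:\, y_j = x_i} b_j \Bigr)
\]
```

Any correction of this general form, raising the weight of tags shared with the video's own tag type, would serve the screening purpose described in claim 5.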
7. The method according to claim 5, wherein matching the tag matching degree between the target tag group and the plurality of tag groups specifically comprises:
extracting, by keyword screening, a plurality of tags of the first candidate terminal and their coincidence degrees; and
acquiring a cosine similarity between the coincidence degrees of the tags of the first candidate terminal and the corrected coincidence degrees of the tags, the cosine similarity being the tag matching degree.
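Claim 7 names cosine similarity explicitly, so the computation can be sketched directly. Treating tags absent from one side as zero when aligning the vectors is an assumption; the claim does not specify the alignment.

```python
# Sketch of claim 7's tag matching degree as a cosine similarity between
# coincidence-degree vectors.
import math

def cosine_tag_matching(corrected: dict[str, float],
                        candidate: dict[str, float]) -> float:
    # Align both tag groups on the union of their tags (absent tags -> 0.0).
    tags = sorted(set(corrected) | set(candidate))
    a = [corrected.get(t, 0.0) for t in tags]
    b = [candidate.get(t, 0.0) for t in tags]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0
```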
8. A multi-terminal video interaction device, comprising a scene creation unit, a data query unit, a matching unit, and an interaction unit, wherein:
the scene creation unit is configured to create, in response to an interaction message of a target terminal for a first video, a video interaction scene for displaying the first video;
the data query unit is configured to query historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein accounts of the candidate terminals have interaction behaviors with the first video, the interaction behaviors comprising a direct interaction behavior indicating that an account of a candidate terminal is performing an interaction operation on the first video;
the matching unit is configured to match the historical interaction data of the target terminal against the historical interaction data of the plurality of candidate terminals to obtain a matching result; and
the interaction unit is configured to select an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add an account of the interactive terminal into the video interaction scene.
9. A computer readable storage medium storing instructions which, when executed, perform the method of any one of claims 1-8.
10. An electronic device (700) comprising a processor (701), a user interface (703), a network interface (704), and a memory (705), wherein the memory (705) is configured to store instructions, the user interface (703) and the network interface (704) are configured to communicate with other devices, and the processor (701) is configured to execute the instructions stored in the memory (705) to cause the electronic device (700) to perform the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310406638.4A CN116132745A (en) | 2023-04-17 | 2023-04-17 | Video interaction method and device for multiple terminals |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116132745A true CN116132745A (en) | 2023-05-16 |
Family
ID=86306649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310406638.4A Pending CN116132745A (en) | 2023-04-17 | 2023-04-17 | Video interaction method and device for multiple terminals |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116132745A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105022797A (en) * | 2015-06-30 | 2015-11-04 | 北京奇艺世纪科技有限公司 | Resource topic processing method and apparatus |
US20190182565A1 (en) * | 2017-12-13 | 2019-06-13 | Playable Pty Ltd | System and Method for Algorithmic Editing of Video Content |
CN109544396A (en) * | 2019-01-10 | 2019-03-29 | 腾讯科技(深圳)有限公司 | Account recommended method, device, server, terminal and storage medium |
CN110933456A (en) * | 2019-12-17 | 2020-03-27 | 北京爱奇艺科技有限公司 | Video-based interaction system, method and device and electronic equipment |
CN111767429A (en) * | 2020-06-29 | 2020-10-13 | 北京奇艺世纪科技有限公司 | Video recommendation method and device and electronic equipment |
CN114297475A (en) * | 2021-12-06 | 2022-04-08 | 新奥新智科技有限公司 | Object recommendation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108156507B (en) | Virtual article presenting method, device and storage medium | |
US20150289021A1 (en) | System and method for collecting viewer opinion information | |
CN111079529B (en) | Information prompting method and device, electronic equipment and storage medium | |
CN110297975B (en) | Recommendation strategy evaluation method and device, electronic equipment and storage medium | |
CN114938458B (en) | Object information display method and device, electronic equipment and storage medium | |
CN112511849A (en) | Game display method, device, equipment, system and storage medium | |
CN115657846A (en) | Interaction method and system based on VR digital content | |
CN109218817B (en) | Method and device for displaying virtual gift prompting message | |
CN111669622A (en) | Method and device for determining default play relationship of videos and electronic equipment | |
CN104954824A (en) | Method, device and system for setting video | |
CN117459662B (en) | Video playing method, video identifying method, video playing device, video playing equipment and storage medium | |
CN116132745A (en) | Video interaction method and device for multiple terminals | |
CN115983499A (en) | Box office prediction method and device, electronic equipment and storage medium | |
CN113515336B (en) | Live room joining method, creation method, device, equipment and storage medium | |
CN113553505A (en) | Video recommendation method and device and computing equipment | |
US10126821B2 (en) | Information processing method and information processing device | |
CN111885139A (en) | Content sharing method, device and system, mobile terminal and server | |
CN114257859A (en) | Video promotion data generation method and video promotion data display method | |
CN114115524B (en) | Interaction method of intelligent water cup, storage medium and electronic device | |
CN117010725B (en) | Personalized decision method, system and related device | |
CN117354570A (en) | Information display method, device, equipment and storage medium | |
CN111800651B (en) | Information processing method and information processing device | |
US11132239B2 (en) | Processing apparatus, processing system, and non-transitory computer readable medium | |
CN110585714B (en) | UGC element setting method, device and equipment based on block chain | |
CN114581146A (en) | Model evaluation method, model evaluation device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20230516 ||