CN116132745A - Video interaction method and device for multiple terminals


Info

Publication number
CN116132745A
Authority
CN
China
Prior art keywords
interaction
video
terminal
label
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310406638.4A
Other languages
Chinese (zh)
Inventor
寿哲男
章利军
俞伟柯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Panteng Technology Co ltd
Original Assignee
Beijing Panteng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Panteng Technology Co ltd
Priority to CN202310406638.4A
Publication of CN116132745A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video interaction method and device for multiple terminals. The method comprises the following steps: in response to an interaction message of a target terminal for a first video, creating a video interaction scene, where the video interaction scene is used for displaying the first video; querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, where interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors including a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video; matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result; and selecting an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal into the video interaction scene. The method solves the problem that a user cannot be precisely matched with an interaction object when performing an interaction, which leads to a poor experience when using the interaction function of an application product.

Description

Video interaction method and device for multiple terminals
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method and an apparatus for video interaction between multiple terminals.
Background
With the development of multimedia technology and changing user demands, video, audio, and social applications have emerged in an endless stream, and software functions have become increasingly diverse. In order to improve user stickiness and enhance the product experience, application developers provide users with various types of interaction scenes, such as video bullet-screen comments, comment-area interactions, or video columns.
In the current technology, when a user performs an interaction behavior, the user cannot be precisely matched with an object performing the same interaction behavior, so the user has a poor experience when using the interaction function of an application product.
At present, a video interaction method and device for multiple terminals are needed to solve the above problems of the related art.
Disclosure of Invention
The application provides a video interaction method and device for multiple terminals, which are used for solving the problem that a user cannot be precisely matched with an object performing an interaction behavior, resulting in a poor experience when using the interaction function of an application product. By adopting the method, the user's requirements for the current interaction behavior can be matched more accurately.
The first aspect of the present application provides a video interaction method for multiple terminals, the method comprising: in response to an interaction message of a target terminal for a first video, creating a video interaction scene, where the video interaction scene is used for displaying the first video; querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, where interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors including a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video; matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result; and selecting an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal into the video interaction scene.
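Illustratively, the first-aspect flow can be condensed into the following sketch (Python is used throughout for illustration; the data layout, the MATCH_THRESHOLD value, and the Jaccard-style placeholder scorer are assumptions of this sketch, not details of the application):

```python
from dataclasses import dataclass, field
from typing import Dict, List

MATCH_THRESHOLD = 0.8  # assumed stand-in for the preset tag matching degree

@dataclass
class InteractionScene:
    video_id: str
    members: List[str] = field(default_factory=list)  # accounts in the scene

def match_score(hist_a: Dict[str, float], hist_b: Dict[str, float]) -> float:
    # Placeholder for the tag-based matching refined below; here simply the
    # Jaccard overlap of the two accounts' tag sets.
    union = set(hist_a) | set(hist_b)
    return len(set(hist_a) & set(hist_b)) / len(union) if union else 0.0

def handle_interaction_message(target: str, video_id: str,
                               histories: Dict[str, Dict[str, float]],
                               now_interacting: Dict[str, List[str]]) -> InteractionScene:
    scene = InteractionScene(video_id, [target])                # create the scene
    candidates = [a for a in now_interacting.get(video_id, [])  # direct interaction
                  if a != target]
    scored = sorted(((match_score(histories.get(target, {}),
                                  histories.get(c, {})), c)
                     for c in candidates), reverse=True)        # match histories
    if scored and scored[0][0] >= MATCH_THRESHOLD:              # select and join
        scene.members.append(scored[0][1])
    return scene
```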
By adopting the method, on the basis of providing the interaction scene aiming at the specific video for the user, the user can be matched with the interaction object which is more suitable for the current interaction scene and the self interaction requirement by matching the historical interaction data of the target terminal and the historical interaction data of a plurality of candidate terminals.
Optionally, the plurality of candidate terminals include a first candidate terminal, and matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically includes: acquiring a target tag group showing the preferences of the target terminal based on the historical interaction data of the target terminal; acquiring a first tag group set showing account preferences of the plurality of candidate terminals based on the historical interaction data of the plurality of candidate terminals, where the first tag group set includes a first tag group used for showing the account preference of the first candidate terminal; and matching the tag matching degrees between the target tag group and the plurality of tag groups respectively, to obtain the matching result.
By adopting the method, the tag groups showing the account preferences of the plurality of candidate terminals and the target tag group showing the preferences of the target terminal are obtained, and the tag matching degrees between the target tag group and the plurality of tag groups are further obtained, so that the matching result is more accurate.
Optionally, selecting an interactive terminal from the plurality of candidate terminals according to the matching result specifically includes: when the tag matching degree of the first tag group is greater than or equal to a preset tag matching degree, confirming the first candidate terminal as the interactive terminal.
By adopting the method, a candidate terminal whose tag matching degree is greater than or equal to the preset tag matching degree is selected from the plurality of candidate terminals, thereby matching a suitable interactive terminal for the user.
Optionally, the interaction behaviors further include an indirect interaction behavior, where the indirect interaction behavior is used for indicating that the account of a candidate terminal is executing an interaction operation for a related video of the first video; and matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result further specifically includes the following steps:
when, in the matching result, the tag matching degree of each of the plurality of tag groups is smaller than the preset tag matching degree, querying a plurality of alternative terminals having an indirect interaction behavior for the first video, where the plurality of alternative terminals include a first alternative terminal; acquiring a second tag group set showing account preferences of the plurality of alternative terminals based on the historical interaction data of the plurality of alternative terminals, where the second tag group set includes a second tag group; the second tag group is used for showing the account preference of the first alternative terminal; and when the tag matching degree between the second tag group and the target tag group is greater than or equal to the preset tag matching degree, confirming the first alternative terminal as the interactive terminal.
By adopting the method, when the candidate terminals having a direct interaction behavior cannot meet the interaction requirements of the user's current interaction scene, other terminals having an indirect interaction behavior are queried, and interactive terminals meeting the interaction requirements of the user's current interaction scene are screened out. This avoids the situation in which the user cannot be matched with a suitable interactive terminal from the plurality of candidate terminals.
Optionally, acquiring the target tag group showing the preferences of the target terminal based on the historical data of the target terminal specifically includes: obtaining a plurality of tags and the overlap degrees of the plurality of tags from the historical data of the target terminal by keyword screening; acquiring the tag types of the first video and the overlap degrees of the tag types of the first video; and correcting the overlap degrees of the plurality of tags based on the overlap degrees of the tag types of the first video to obtain a plurality of corrected tag overlap degrees, and combining the plurality of tags with the plurality of corrected tag overlap degrees to obtain the target tag group.
By adopting the method, after the plurality of tags of the target terminal and their overlap degrees are obtained, the overlap degrees of the plurality of tags are corrected according to the overlap degrees of the tag types of the first video, so that the subsequently obtained target tag group better matches the user's interaction requirements in the current interaction scene.
Optionally, the overlap degrees of the plurality of tags are corrected based on the overlap degrees of the tag types of the first video to obtain a plurality of corrected tag overlap degrees, which are determined according to the following formula.

Before and after correction, the mapping relation between the overlap degrees of the tags is:

w_i′ = w_i × W_j, i = 1, 2, 3 ... n,

where b_i and a_j represent the same tag; n is greater than m; w_i and W_j take values in the range (0, 1]; b_i is the i-th tag among the plurality of tags, a_j is the j-th tag type of the first video, w_i is the overlap degree of the i-th tag, W_j is the overlap degree of the j-th tag type of the first video, and w_i′ is the corrected overlap degree of the i-th tag among the plurality of tags.

By adopting the method, the application obtains the corrected overlap degree w_i′ of the i-th tag among the plurality of tags.
Optionally, matching the tag matching degrees between the target tag group and the plurality of tag groups specifically includes: obtaining a plurality of tags of the first candidate terminal and the overlap degrees of the plurality of tags by keyword screening; and acquiring the cosine similarity between the overlap degrees of the plurality of tags of the first candidate terminal and the plurality of corrected tag overlap degrees, where the cosine similarity is taken as the tag matching degree.
By adopting the method, the tag matching degree is obtained as the cosine similarity between the overlap degrees of the tags of the first candidate terminal and the corrected tag overlap degrees.
A second aspect of the present application provides a video interaction device for multiple terminals, the device comprising: a scene creation unit, a data query unit, a matching unit and an interaction unit;
the scene creation unit is used for creating, in response to an interaction message of a target terminal for a first video, a video interaction scene used for displaying the first video;
the data query unit is used for querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, where interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors including a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video;
the matching unit is used for matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result;
and the interaction unit is used for selecting an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal into the video interaction scene.
A third aspect of the present application provides an electronic device comprising a processor, a memory, a user interface and a network interface, where the memory is used for storing instructions, the user interface and the network interface are used for communicating with other devices, and the processor is used for executing the instructions stored in the memory, so that the electronic device performs the method of any one of the above.
A fourth aspect of the present application provides a computer-readable storage medium storing instructions that, when executed, perform the method of any one of the above.
Compared with the related art, the beneficial effects of the application are as follows:
1. on the basis of providing the user with an interaction scene for a specific video, the user can be matched with an interaction object better suited to the current interaction scene and the user's own interaction requirements by matching the historical interaction data of the target terminal with the historical interaction data of a plurality of candidate terminals;
2. the tag groups showing the account preferences of the plurality of candidate terminals and the target tag group showing the preferences of the target terminal are obtained, and the tag matching degrees between the target tag group and the plurality of tag groups are further obtained, so that the matching result is more accurate;
3. when the candidate terminals having a direct interaction behavior cannot meet the interaction requirements of the user's current interaction scene, other terminals having an indirect interaction behavior are queried, and interactive terminals meeting those requirements are screened out, which avoids the situation in which the user cannot be matched with a suitable interactive terminal from the plurality of candidate terminals;
4. after the plurality of tags of the target terminal and their overlap degrees are obtained, the overlap degrees are corrected according to the overlap degrees of the tag types of the first video, so that the subsequently obtained target tag group better matches the user's interaction requirements in the current interaction scene.
Drawings
Fig. 1 is a first flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario of a video interaction method of multiple terminals according to an embodiment of the present application;
fig. 3 is a second flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
fig. 4 is a third flow diagram of a video interaction method of multiple terminals according to an embodiment of the present application;
fig. 5 is a fourth flowchart of a video interaction method of multiple terminals according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video interaction device for multiple terminals according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 11. a scene creation unit; 12. a data query unit; 13. a matching unit; 14. an interaction unit; 15. an alternative matching unit; 700. an electronic device; 701. a processor; 702. a communication bus; 703. a user interface; 704. a network interface; 705. a memory.
Detailed Description
In order to make the technical solutions in this specification better understood by those skilled in the art, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them.
In the description of the embodiments of the present application, words such as "exemplary", "such as" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary", "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, B alone, or both A and B. In addition, unless otherwise indicated, the term "plurality" means two or more; for example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. The terms "comprising", "including", "having" and variations thereof mean "including but not limited to", unless expressly specified otherwise.
The method in the embodiments of the present application can be applied to a server to solve the problem in the related art that, when a user performs an interaction behavior for a video, the user cannot be precisely matched with an object performing the interaction behavior, so the user has a poor experience when using the interaction function of an application product.
As shown in fig. 1, the present application provides a flowchart of a video interaction method for multiple terminals, which is applied to a server and includes steps S11-S14.
S11, in response to an interaction message of the target terminal for the first video, creating a video interaction scene, where the video interaction scene is used for displaying the first video.
In this embodiment of the present application, when a user views a first video in a target terminal, a specific operation may be performed to enable the target terminal to send an interaction message for the first video to a server, and the server creates a video interaction scene in the target terminal after receiving the interaction message, as shown in fig. 2.
S12, querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, where interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors including a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video.
In the embodiment of the present application, a candidate terminal is the operation terminal of a user account that is showing the first video at the same time. Interaction behaviors exist between the account of the candidate terminal and the first video; in this embodiment, the interaction behaviors include a direct interaction behavior, which indicates that the account of the candidate terminal is executing an interaction operation for the first video when the target terminal sends the interaction message for the first video to the server. Considering that, in practical applications, the interaction operations of the candidate terminal's account on the first video may not be completely synchronized with the interaction message, the embodiment of the present application preliminarily identifies how well the direct interaction behavior of the candidate terminal's account matches the current interaction requirement of the target terminal by comparing the time coincidence rate between the interaction operations and the interaction message; a sketch of such a check follows.
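The application names a "time coincidence rate" without fixing its formula; the following sketch shows one plausible definition under the assumption that operations and the message are represented as time intervals:

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end) timestamps, e.g. in seconds

def time_coincidence_rate(operations: List[Interval],
                          message_window: Interval) -> float:
    # Fraction of the interaction-message window that is covered by the
    # candidate account's interaction operations on the first video.
    m_start, m_end = message_window
    window = max(m_end - m_start, 1e-9)
    covered, cursor = 0.0, m_start
    for start, end in sorted(operations):
        lo, hi = max(start, cursor), min(end, m_end)
        if hi > lo:
            covered += hi - lo
            cursor = hi
    return min(covered / window, 1.0)

# Operations covering 6 of the 10 seconds around the message: rate 0.6
print(time_coincidence_rate([(2.0, 6.0), (7.0, 9.0)], (0.0, 10.0)))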
S13, matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result.
In the embodiment of the present application, the historical interaction data of the plurality of candidate terminals are data records of the candidate terminals' interaction behaviors on historically viewed videos within a historical period. In order to meet the interaction requirement of the target terminal's account and obtain a more accurate interaction object, only historically viewed videos on which interaction behaviors exist are selected from the candidate terminals during object matching, and the video types the candidate terminals prefer when performing interaction behaviors are analyzed. Therefore, in the embodiment of the present application, a video on which only a simple viewing action was performed is not taken as historical interaction data for the next judgment. Through this judgment and screening, a more accurate matching result can be obtained; a sketch of the screening step follows.
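A minimal sketch of this screening step (the record layout is hypothetical; the application does not define a storage schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ViewRecord:
    video_id: str
    interactions: List[str]  # e.g. ["comment", "bullet_screen", "like"]

def historical_interaction_data(records: List[ViewRecord]) -> List[ViewRecord]:
    # Videos on which only a simple viewing action was performed carry an
    # empty interaction list and are excluded.
    return [r for r in records if r.interactions]

history = [ViewRecord("v1", ["comment"]), ViewRecord("v2", [])]
print([r.video_id for r in historical_interaction_data(history)])  # ['v1']
```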
In one possible implementation, the plurality of candidate terminals includes a first candidate terminal; in step S13, the historical interaction data of the target terminal and the historical interaction data of the plurality of candidate terminals are matched to obtain a matching result, which specifically includes steps S131 to S133 shown in fig. 3.
S131, acquiring a target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal.
In the embodiment of the present application, a target tag group showing the preferences of the target terminal is acquired according to the historical interaction data of the target terminal. The historical interaction data of the target terminal are judged and screened in the same way as those of the candidate terminals: only historically viewed videos on which the account of the target terminal has interaction behaviors are taken as part of the target terminal's historical interaction data. The specific way of acquiring the target tag group is described below.
In one possible implementation, as shown in fig. 4, the target tag group showing the preference of the target terminal is obtained based on the historical interaction data of the target terminal, and specifically includes steps S131a-S131c.
S131a, obtaining a plurality of tags and the overlap degrees of the plurality of tags from the historical interaction data of the target terminal by keyword screening.
Specifically, for the historical videos on which the target terminal has interaction behaviors, all tags of these historical interaction videos are acquired based on viewers' custom tag evaluations and the automatic tag pushing of the video platform; the higher the degree of association between a tag and the historical interaction video, the higher the overlap degree of the tag. Before the tag types of the first video and their overlap degrees are actually acquired, tags whose overlap degree is too low can be filtered out by setting an overlap degree threshold for the tags.
In a possible implementation manner, the overlap degree threshold of the tags is adjusted according to the satisfaction of the target terminal's account with the matching result: when many candidate terminals are obtained in the matching result, the overlap degree threshold can be raised so that fewer candidate terminals meet the matching requirement; when few candidate terminals are obtained in the matching result, the overlap degree threshold can be lowered so that more candidate terminals meet the matching requirement. A sketch of such an adaptive threshold follows.
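The application states only the direction of this adjustment; the step size and target range in the sketch below are assumptions:

```python
def adjust_overlap_threshold(threshold: float, num_matched: int,
                             target_low: int = 3, target_high: int = 10,
                             step: float = 0.05) -> float:
    # Raise the bar when too many candidates match, lower it when too few do.
    if num_matched > target_high:
        threshold += step
    elif num_matched < target_low:
        threshold -= step
    # clamp to [0, 1] and round to avoid float drift in the sketch
    return round(min(max(threshold, 0.0), 1.0), 2)

print(adjust_overlap_threshold(0.4, num_matched=20))  # 0.45
print(adjust_overlap_threshold(0.4, num_matched=1))   # 0.35
```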
S131b, acquiring the tag types of the first video and the overlap degrees of the tag types of the first video.
Specifically, in the embodiment of the present application, the tag types of the first video are denoted a_j, j = 1, 2, 3 ... m, and the overlap degree of the j-th tag type of the first video is denoted W_j.
S131c, correcting the overlap degrees of the plurality of tags based on the overlap degrees of the tag types of the first video to obtain a plurality of corrected tag overlap degrees, and combining the plurality of tags with the plurality of corrected tag overlap degrees to obtain the target tag group.
In one possible implementation, the overlap degrees of the plurality of tags are corrected based on the overlap degrees of the tag types of the first video to obtain a plurality of corrected tag overlap degrees, determined according to the following formula.

Before and after correction, the mapping relation between the overlap degrees of the tags is:

w_i′ = w_i × W_j, i = 1, 2, 3 ... n,

where b_i and a_j represent the same tag; n is greater than m; w_i and W_j take values in the range (0, 1]; b_i is the i-th tag among the plurality of tags, a_j is the j-th tag type of the first video, w_i is the overlap degree of the i-th tag, W_j is the overlap degree of the j-th tag type of the first video, and w_i′ is the corrected overlap degree of the i-th tag among the plurality of tags.

In the above formula, since the tag types of the target terminal's historical interaction videos are richer than those of the first video, the tag types of the target terminal's historical interaction videos will include the tag types of the first video. Therefore, when the overlap degrees of the plurality of tags are corrected based on the overlap degrees of the tag types of the first video, only the tags that appear among the tag types of the first video are corrected; the overlap degrees of the remaining tags, i.e. the tags b_i for which no matching a_j exists, are kept unchanged (w_i′ = w_i). A sketch of this correction is given below.
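As an illustration, the correction can be expressed as follows (a minimal sketch; the dictionary layout keyed by tag name is an assumption):

```python
from typing import Dict

def correct_overlaps(tag_overlaps: Dict[str, float],
                     video_type_overlaps: Dict[str, float]) -> Dict[str, float]:
    # w_i' = w_i * W_j for tags that appear among the first video's tag
    # types; tags with no matching type keep their original overlap degree.
    return {tag: w * video_type_overlaps.get(tag, 1.0)
            for tag, w in tag_overlaps.items()}
```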
The following describes specific data in connection with the embodiment of the present application. For the first video, the tag types of the first video and the overlap degrees W_j of the tag types are obtained. At the same time, a plurality of tags and the overlap degrees w_i of the plurality of tags are obtained from the historical interaction data of the target terminal. In the historical interaction data of the target terminal in the embodiment of the present application, 10 tags actually exist, and the overlap degree threshold of the tags is set to 0.4, so the tags with an overlap degree below 0.4 are removed. According to the formula for the corrected overlap degree of the i-th tag among the plurality of tags,

w_i′ = w_i × W_j, i = 1, 2, 3 ... n,

the corrected overlap degrees of the tags obtained from the historical interaction data of the target terminal are respectively:

w_1′ = w_1 × 0.81; w_2′ = w_2 × 0.65; w_3′ = w_3 × 0.53; w_4′ = w_4 × 0.36; w_5′ = w_5 × 0.18,

where 0.81, 0.65, 0.53, 0.36 and 0.18 are the overlap degrees of the matching tag types of the first video. The corrected overlap degrees of the plurality of tags are then normalized, taking one of the corrected overlap degrees as the reference value; the normalization process itself is not described in this embodiment, and the final normalized result gives the target terminal's corrected tag overlap degrees.
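Putting the example together (the base overlap degrees below are placeholders, since the original values appeared only as images in the source, and sum-to-one normalization is one plausible reading of the normalization step):

```python
video_type_overlaps = {"b1": 0.81, "b2": 0.65, "b3": 0.53, "b4": 0.36, "b5": 0.18}
tag_overlaps = {"b1": 0.9, "b2": 0.8, "b3": 0.7, "b4": 0.6, "b5": 0.5}  # placeholders

corrected = {t: w * video_type_overlaps[t] for t, w in tag_overlaps.items()}
total = sum(corrected.values())
normalized = {t: round(v / total, 3) for t, v in corrected.items()}
print(normalized)  # the target tag group's corrected, normalized overlap degrees
```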
s132, acquiring a first tag group for displaying account preferences of a plurality of candidate terminals based on historical interaction data of the plurality of candidate terminals, wherein the first tag group comprises a first tag group for displaying the account preferences of the first candidate terminals.
S133, matching the tag matching degrees between the target tag group and the plurality of tag groups respectively, to obtain a matching result.
In one possible embodiment, matching the tag matching degrees between the target tag group and the plurality of tag groups specifically includes steps S133a-S133b.
S133a, obtaining a plurality of tags of the first candidate terminal and the overlap degrees of the plurality of tags by keyword screening.
In this embodiment of the present application, the method for obtaining the plurality of tags of the first candidate terminal and their overlap degrees may refer to the method for obtaining the plurality of tags of the target terminal and their overlap degrees, which is not described here in detail. Exemplarily, the overlap degrees of the plurality of tags of the first candidate terminal obtained in the embodiment of the present application have already been normalized.
S133b, acquiring the cosine similarity between the overlap degrees of the plurality of tags of the first candidate terminal and the plurality of corrected tag overlap degrees, where the cosine similarity is taken as the tag matching degree.
In the embodiment of the present application, the cosine similarity between the overlap degrees of the first candidate terminal's tags and the corrected overlap degrees of the target terminal's tags is calculated to obtain the correlation between the two groups of data; the tag matching degree obtained in this example is 0.9425.
In the embodiment of the present application, the tag matching degree of the two sets of tag overlap degrees may also be obtained in other manners, which is not specifically limited here.
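A sketch of the cosine-similarity computation of S133b (treating tags absent on one side as 0 is an assumption the application does not spell out):

```python
import math
from typing import Dict

def tag_matching_degree(candidate: Dict[str, float],
                        target_corrected: Dict[str, float]) -> float:
    # Cosine similarity between the two groups of overlap degrees.
    tags = sorted(set(candidate) | set(target_corrected))
    a = [candidate.get(t, 0.0) for t in tags]
    b = [target_corrected.get(t, 0.0) for t in tags]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0
```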
In a possible implementation manner, the interaction behaviors further include an indirect interaction behavior, where the indirect interaction behavior is used for indicating that the account of a candidate terminal is executing an interaction operation for a related video of the first video. After the historical interaction data of the target terminal and the historical interaction data of the plurality of candidate terminals are matched to obtain a matching result in step S13, steps S151 to S154, shown in fig. 5, are specifically included.
S151, determining that, in the matching result, the tag matching degree of each of the plurality of tag groups is smaller than the preset tag matching degree.
Specifically, when the tag matching degrees, obtained by the above method, between the plurality of tag groups and the target terminal's tags are all smaller than the preset tag matching degree, it can be considered that the candidate terminals having a direct interaction behavior on the first video cannot meet the interaction requirement of the target terminal's account. Therefore, to expand the scope of matching interaction objects, a plurality of alternative terminals having an indirect interaction behavior for the first video are queried.
S152, querying a plurality of alternative terminals having an indirect interaction behavior for the first video, where the plurality of alternative terminals include a first alternative terminal.
In the embodiment of the present application, an account with an indirect interaction behavior is one that is currently watching another video related to the first video, or whose historically viewed videos include videos related to the first video, without watching the first video itself. For example: assume the first video is the second season of a certain variety show, and the video watched by the account of the first alternative terminal in the historical period is the first season of that variety show; it can then be judged that an indirect interaction behavior exists between the account of the first alternative terminal and the first video. By introducing alternative terminals having an indirect interaction behavior with the first video, the matching scope for the interaction requirement of the target terminal's account can be expanded; a sketch follows.
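A sketch of such a relatedness check, using the seasons example (the series-metadata model is hypothetical):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Video:
    video_id: str
    series_id: Optional[str] = None  # e.g. the variety show an episode belongs to

def has_indirect_interaction(first: Video, watched: List[Video]) -> bool:
    # True if the account interacted with a related video (here: another
    # episode of the same series) rather than the first video itself.
    return any(v.series_id is not None
               and v.series_id == first.series_id
               and v.video_id != first.video_id
               for v in watched)

season2 = Video("show-s2", series_id="show")
print(has_indirect_interaction(season2, [Video("show-s1", "show")]))  # True
```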
S153, acquiring a second tag group set showing account preferences of the plurality of alternative terminals based on the historical interaction data of the plurality of alternative terminals, where the second tag group set includes a second tag group; the second tag group is used for showing the account preference of the first alternative terminal.
In the embodiment of the present application, the historical interaction data of the candidate terminals and of the alternative terminals may be the same type of video-viewing data.
S154, when the tag matching degree between the second tag group and the target tag group is greater than or equal to the preset tag matching degree, confirming the first alternative terminal as the interactive terminal.
S14, selecting the interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal into the video interaction scene.
In a possible implementation manner, selecting an interactive terminal from the plurality of candidate terminals according to the matching result specifically includes: when the tag matching degree of the first tag group is greater than or equal to the preset tag matching degree, confirming the first candidate terminal as the interactive terminal.
In the embodiment of the present application, for the plurality of candidate terminals having one or more of the direct interaction behavior or the indirect interaction behavior, the candidate terminals satisfying the preset tag matching degree are obtained, and interaction-behavior invitations are sent to these candidate terminals in descending order of tag matching degree. If the account of a candidate terminal is itself performing a video interaction matching action, the candidate terminal is directly pulled into the video interaction scene without an invitation; a sketch of this selection step follows.
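A sketch of this selection step (the invite/join callables and the "already matching" set are hypothetical stand-ins for server-side actions):

```python
from typing import Callable, Dict, List, Set

def select_interactive_terminals(scores: Dict[str, float], preset: float,
                                 already_matching: Set[str],
                                 invite: Callable[[str], None],
                                 join: Callable[[str], None]) -> List[str]:
    # Candidates meeting the preset tag matching degree, best match first.
    qualified = sorted((a for a, s in scores.items() if s >= preset),
                       key=lambda a: scores[a], reverse=True)
    for account in qualified:
        if account in already_matching:
            join(account)    # pulled into the scene directly, no invitation
        else:
            invite(account)  # invited in order of tag matching degree
    return qualified
```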
In a possible implementation manner, the account corresponding to the target terminal may have the management authority of the video interaction scene.
By adopting the above method embodiment, the achievable beneficial effects are as follows:
1. on the basis of providing the user with an interaction scene for a specific video, the user can be matched with an interaction object better suited to the current interaction scene and the user's own interaction requirements by matching the historical interaction data of the target terminal with the historical interaction data of a plurality of candidate terminals.
2. The tag groups showing the account preferences of the plurality of candidate terminals and the target tag group showing the preferences of the target terminal are obtained, and the tag matching degrees between the target tag group and the plurality of tag groups are further obtained, so that the matching result is more accurate.
3. When the candidate terminals having a direct interaction behavior cannot meet the interaction requirements of the user's current interaction scene, other terminals having an indirect interaction behavior are queried, and interactive terminals meeting those requirements are screened out. This avoids the situation in which the user cannot be matched with a suitable interactive terminal from the plurality of candidate terminals.
4. After the plurality of tags of the target terminal and their overlap degrees are obtained, the overlap degrees are corrected according to the overlap degrees of the tag types of the first video, so that the subsequently obtained target tag group better matches the user's interaction requirements in the current interaction scene.
The embodiment of the application provides a video interaction device of multiple terminals, as shown in fig. 6, the device includes: a scene creation unit 11, a data query unit 12, a matching unit 13, and an interaction unit 14;
the scene creation unit 11 is configured to create a video interaction scene for showing the first video in response to an interaction message of the target terminal for the first video.
The data query unit 12 is configured to query historical interaction data of a plurality of candidate terminals based on the video interaction scene, where interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors including a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video.
And a matching unit 13 for matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result.
And the interaction unit 14 is used for selecting the interaction terminal from the candidate terminals according to the matching result so as to add the account of the interaction terminal into the video interaction scene.
In a possible implementation manner, the plurality of candidate terminals include a first candidate terminal, and the matching unit 13 includes a target tag group acquisition module, a candidate terminal tag acquisition module, and a matching result acquisition module.
The target tag group acquisition module is used for acquiring the target tag group showing the preference of the target terminal based on the historical interaction data of the target terminal.
The candidate terminal tag acquisition module is used for acquiring a first tag group set showing account preferences of the plurality of candidate terminals based on the historical interaction data of the plurality of candidate terminals, where the first tag group set includes a first tag group used for showing the account preference of the first candidate terminal.
The matching result acquisition module is used for matching the tag matching degrees between the target tag group and the plurality of tag groups respectively, so as to obtain a matching result.
In one possible implementation, the interaction unit 14 comprises an interaction terminal confirmation module.
The interactive terminal confirmation module is used for confirming the first candidate terminal as the interactive terminal when the tag matching degree of the first tag group is greater than or equal to the preset tag matching degree.
In a possible embodiment, the interaction behaviors further include an indirect interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for a related video of the first video; after the matching unit 13, the device further comprises an alternative matching unit 15.
The alternative matching unit 15 is configured to: when, in the matching result, the tag matching degree of each of the plurality of tag groups is smaller than the preset tag matching degree, query a plurality of alternative terminals having an indirect interaction behavior for the first video, where the plurality of alternative terminals include a first alternative terminal; acquire a second tag group set showing account preferences of the plurality of alternative terminals based on the historical interaction data of the plurality of alternative terminals, where the second tag group set includes a second tag group; the second tag group is used for showing the account preference of the first alternative terminal; and when the tag matching degree between the second tag group and the target tag group is greater than or equal to the preset tag matching degree, confirm the first alternative terminal as the interactive terminal.
In one possible implementation, the target tag group acquisition module includes a first overlap degree acquisition submodule, a second overlap degree acquisition submodule, and an overlap degree correction submodule.
The first overlap degree acquisition submodule is used for obtaining a plurality of tags and the overlap degrees of the plurality of tags from the historical data of the target terminal by keyword screening.
The second overlap degree acquisition submodule is used for acquiring the tag types of the first video and the overlap degrees of the tag types of the first video.
The overlap degree correction submodule is used for correcting the overlap degrees of the plurality of tags based on the overlap degrees of the tag types of the first video to obtain a plurality of corrected tag overlap degrees, and combining the plurality of tags with the plurality of corrected tag overlap degrees to obtain the target tag group. The overlap degrees of the plurality of tags are corrected based on the overlap degrees of the tag types of the first video to obtain the plurality of corrected tag overlap degrees, determined according to the following formula.

Before and after correction, the mapping relation between the overlap degrees of the tags is:

w_i′ = w_i × W_j, i = 1, 2, 3 ... n,

where b_i and a_j represent the same tag; n is greater than m; w_i and W_j take values in the range (0, 1]; b_i is the i-th tag among the plurality of tags, a_j is the j-th tag type of the first video, w_i is the overlap degree of the i-th tag, W_j is the overlap degree of the j-th tag type of the first video, and w_i′ is the corrected overlap degree of the i-th tag among the plurality of tags.
It should be noted that: when the device provided in the above embodiment implements its functions, only the division into the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the device embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
Referring to fig. 7, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 7, the electronic device 700 may include: at least one processor 701, at least one network interface 704, a user interface 703, a memory 705, at least one communication bus 702.
Wherein the communication bus 702 is used to enable connected communications between these components.
The user interface 703 may include a display screen (Display) and a camera (Camera); optionally, the user interface 703 may further include a standard wired interface and a wireless interface.
The network interface 704 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 701 may include one or more processing cores. The processor 701 connects various parts of the overall server using various interfaces and lines, and performs various functions of the server and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 705 and invoking the data stored in the memory 705. Optionally, the processor 701 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 701 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used for handling wireless communication. It will be appreciated that the modem may also not be integrated into the processor 701 and may be implemented by a single chip.
The memory 705 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 705 includes a non-transitory computer-readable storage medium. The memory 705 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 705 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 705 may also be at least one storage device located remotely from the processor 701. As shown in fig. 7, the memory 705, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for video interaction of multiple terminals.
In the electronic device 700 shown in fig. 7, the user interface 703 is mainly used for providing an input interface for the user and acquiring the data input by the user, and the processor 701 may be configured to invoke the application program for video interaction of multiple terminals stored in the memory 705, which, when executed by one or more processors, causes the electronic device 700 to perform the method described in one or more of the above embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.

Claims (10)

1. A video interaction method for multiple terminals, applied to a server, the method comprising:
in response to an interaction message of a target terminal for a first video, creating a video interaction scene, wherein the video interaction scene is used for displaying the first video;
querying historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein interaction behaviors exist between the accounts of the candidate terminals and the first video, the interaction behaviors comprising a direct interaction behavior used for indicating that the account of a candidate terminal is executing an interaction operation for the first video;
matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result;
and selecting an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal into the video interaction scene.
2. The method according to claim 1, wherein the plurality of candidate terminals comprise a first candidate terminal, and matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically comprises:
acquiring a target tag group showing the preferences of the target terminal based on the historical interaction data of the target terminal;
acquiring a first tag group set showing account preferences of the plurality of candidate terminals based on the historical interaction data of the plurality of candidate terminals, wherein the first tag group set comprises a first tag group used for showing the account preference of the first candidate terminal;
and matching the tag matching degrees between the target tag group and the plurality of tag groups respectively, to obtain the matching result.
3. The method according to claim 2, wherein selecting an interactive terminal from the plurality of candidate terminals according to the matching result specifically comprises:
when the tag matching degree of the first tag group is greater than or equal to a preset tag matching degree, confirming the first candidate terminal as the interactive terminal.
4. The method of claim 2, wherein the interaction behaviors further comprise an indirect interaction behavior indicating that the account of a candidate terminal is performing an interaction operation on a video related to the first video; and matching the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result specifically comprises:
when, in the matching result, the tag matching degree of every one of the tag groups is smaller than the preset tag matching degree,
querying a plurality of alternative terminals having indirect interaction behaviors with the first video, the plurality of alternative terminals comprising a first alternative terminal;
acquiring a plurality of second tag groups showing the account preferences of the plurality of alternative terminals based on their historical interaction data, the plurality of second tag groups comprising a second tag group showing the account preferences of the first alternative terminal;
and confirming the first alternative terminal as the interactive terminal when the tag matching degree between the second tag group and the target tag group is greater than or equal to the preset tag matching degree.
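A minimal sketch of the fallback logic of claim 4, assuming precomputed tag matching degrees and an illustrative threshold; all names and values here are hypothetical.

    PRESET = 0.6  # preset tag matching degree (illustrative threshold)

    def best_above_threshold(matching_degrees, preset):
        # Return the terminal with the highest tag matching degree if that
        # degree meets the preset threshold, otherwise None.
        terminal, degree = max(matching_degrees.items(), key=lambda kv: kv[1])
        return terminal if degree >= preset else None

    # Tag matching degrees of candidate terminals with a direct interaction
    # behavior, and of alternative terminals with an indirect interaction
    # behavior (interaction with videos related to the first video).
    direct = {"candidate_1": 0.4, "candidate_2": 0.5}
    indirect = {"alternative_1": 0.7, "alternative_2": 0.3}

    # Every direct candidate falls below the preset degree, so the server
    # falls back to the alternative terminals.
    interactive = best_above_threshold(direct, PRESET) or best_above_threshold(indirect, PRESET)
    print(interactive)  # alternative_1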
5. The method according to claim 2, wherein acquiring a target tag group showing the preferences of the target terminal based on the historical interaction data of the target terminal specifically comprises:
obtaining a plurality of tags and their overlap degrees from the historical data of the target terminal by keyword screening;
acquiring the tag type of the first video and the overlap degrees of the tags of that tag type;
and correcting the overlap degrees of the plurality of tags based on the overlap degrees of the tag type of the first video to obtain a plurality of corrected tag overlap degrees, and combining the plurality of tags with the corrected tag overlap degrees to obtain the target tag group.
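A minimal sketch of claim 5's keyword screening and correction step, assuming record-frequency overlap degrees and the multiplicative correction given in claim 6 below; the history records, keywords, and video overlap degrees are illustrative values.

    from collections import Counter

    history = ["funny comedy clip", "comedy sketch", "sports comedy"]
    keywords = ["comedy", "sports"]

    # Keyword screening: a tag's overlap degree is the share of history
    # records that contain the keyword.
    counts = Counter(t for rec in history for t in keywords if t in rec)
    tag_overlap = {t: counts[t] / len(history) for t in keywords}
    # -> {'comedy': 1.0, 'sports': 0.333...}

    # Overlap degrees of the first video's tag type (assumed to be known).
    video_overlap = {"comedy": 0.9, "sports": 0.2}

    # Correction step: scale each tag's overlap degree by the matching
    # video tag's overlap degree, yielding the target tag group.
    target_tag_group = {t: tag_overlap[t] * video_overlap.get(t, 0.0)
                        for t in tag_overlap}
    print(target_tag_group)  # {'comedy': 0.9, 'sports': 0.0666...}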
6. The method of claim 5, wherein correcting the overlap degrees of the plurality of tags based on the overlap degrees of the tag type of the first video to obtain the plurality of corrected tag overlap degrees is determined according to the following formula:
the mapping relation between the overlap degrees of the tags before and after correction is:
b′_i = b_i × c_j, i = 1, 2, 3, ..., n;
wherein a_i is the i-th tag of the plurality of tags, t_j is the j-th tag of the tag type of the first video, and a_i and t_j represent the same label; n is the number of the tags; b_i is the overlap degree of the i-th tag, c_j is the overlap degree of the j-th tag of the tag type of the first video, and b′_i is the corrected overlap degree of the i-th tag; b_i, c_j, and b′_i each take values in the range [0, 1].
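As a worked numerical example of this mapping (assuming the multiplicative form above, since the original formula images are not fully recoverable): a tag with overlap degree b_i = 0.8 whose matching tag of the first video's tag type has overlap degree c_j = 0.5 is corrected to b′_i = 0.8 × 0.5 = 0.4.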
7. The method of claim 5, wherein matching the target tag group against the plurality of tag groups to obtain the tag matching degrees specifically comprises:
obtaining a plurality of tags of the first candidate terminal and their overlap degrees by keyword screening;
and acquiring the cosine similarity between the overlap degrees of the tags of the first candidate terminal and the corrected tag overlap degrees, the cosine similarity being the tag matching degree.
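A minimal sketch of claim 7's cosine-similarity matching, assuming the overlap degrees of both sides are aligned over a shared tag list; the vectors are illustrative values.

    import math

    def cosine_similarity(u, v):
        # Cosine similarity between two overlap-degree vectors.
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    tags = ["comedy", "sports", "news"]        # shared tag order
    candidate = [0.9, 0.1, 0.0]                # first candidate's overlap degrees
    target_corrected = [0.8, 0.2, 0.0]         # corrected overlap degrees
    print(round(cosine_similarity(candidate, target_corrected), 3))  # 0.991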
8. A video interaction device for multiple terminals, the device comprising: a scene creation unit, a data query unit, a matching unit, and an interaction unit;
the scene creation unit is configured to respond to an interaction message of a target terminal for a first video by creating a video interaction scene, the video interaction scene being used for displaying the first video;
the data query unit is configured to query historical interaction data of a plurality of candidate terminals based on the video interaction scene, wherein the accounts of the candidate terminals have interaction behaviors with the first video, the interaction behaviors comprising a direct interaction behavior indicating that the account of a candidate terminal is performing an interaction operation on the first video;
the matching unit is configured to match the historical interaction data of the target terminal with the historical interaction data of the plurality of candidate terminals to obtain a matching result;
and the interaction unit is configured to select an interactive terminal from the plurality of candidate terminals according to the matching result, so as to add the account of the interactive terminal to the video interaction scene.
9. A computer-readable storage medium storing instructions which, when executed, cause the method of any one of claims 1-7 to be performed.
10. An electronic device comprising a processor (701), a user interface (703), a network interface (704), and a memory (705), the memory (705) being configured to store instructions, the user interface (703) and the network interface (704) being configured to communicate with other devices, and the processor (701) being configured to execute the instructions stored in the memory (705) to cause the electronic device (700) to perform the method of any one of claims 1-7.
CN202310406638.4A 2023-04-17 2023-04-17 Video interaction method and device for multiple terminals Pending CN116132745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310406638.4A CN116132745A (en) 2023-04-17 2023-04-17 Video interaction method and device for multiple terminals

Publications (1)

Publication Number Publication Date
CN116132745A true CN116132745A (en) 2023-05-16

Family

ID=86306649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310406638.4A Pending CN116132745A (en) 2023-04-17 2023-04-17 Video interaction method and device for multiple terminals

Country Status (1)

Country Link
CN (1) CN116132745A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105022797A (en) * 2015-06-30 2015-11-04 北京奇艺世纪科技有限公司 Resource topic processing method and apparatus
CN109544396A (en) * 2019-01-10 2019-03-29 腾讯科技(深圳)有限公司 Account recommended method, device, server, terminal and storage medium
US20190182565A1 (en) * 2017-12-13 2019-06-13 Playable Pty Ltd System and Method for Algorithmic Editing of Video Content
CN110933456A (en) * 2019-12-17 2020-03-27 北京爱奇艺科技有限公司 Video-based interaction system, method and device and electronic equipment
CN111767429A (en) * 2020-06-29 2020-10-13 北京奇艺世纪科技有限公司 Video recommendation method and device and electronic equipment
CN114297475A (en) * 2021-12-06 2022-04-08 新奥新智科技有限公司 Object recommendation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108156507B (en) Virtual article presenting method, device and storage medium
US20150289021A1 (en) System and method for collecting viewer opinion information
CN111079529B (en) Information prompting method and device, electronic equipment and storage medium
CN110297975B (en) Recommendation strategy evaluation method and device, electronic equipment and storage medium
CN114938458B (en) Object information display method and device, electronic equipment and storage medium
CN112511849A (en) Game display method, device, equipment, system and storage medium
CN115657846A (en) Interaction method and system based on VR digital content
CN109218817B (en) Method and device for displaying virtual gift prompting message
CN111669622A (en) Method and device for determining default play relationship of videos and electronic equipment
CN104954824A (en) Method, device and system for setting video
CN117459662B (en) Video playing method, video identifying method, video playing device, video playing equipment and storage medium
CN116132745A (en) Video interaction method and device for multiple terminals
CN115983499A (en) Box office prediction method and device, electronic equipment and storage medium
CN113515336B (en) Live room joining method, creation method, device, equipment and storage medium
CN113553505A (en) Video recommendation method and device and computing equipment
US10126821B2 (en) Information processing method and information processing device
CN111885139A (en) Content sharing method, device and system, mobile terminal and server
CN114257859A (en) Video promotion data generation method and video promotion data display method
CN114115524B (en) Interaction method of intelligent water cup, storage medium and electronic device
CN117010725B (en) Personalized decision method, system and related device
CN117354570A (en) Information display method, device, equipment and storage medium
CN111800651B (en) Information processing method and information processing device
US11132239B2 (en) Processing apparatus, processing system, and non-transitory computer readable medium
CN110585714B (en) UGC element setting method, device and equipment based on block chain
CN114581146A (en) Model evaluation method, model evaluation device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230516