CN112256890A - Information display method and device, electronic equipment and storage medium - Google Patents

Information display method and device, electronic equipment and storage medium

Info

Publication number: CN112256890A
Authority: CN (China)
Prior art keywords: information, aggregation, piece, interaction, semantic
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011191748.6A
Other languages: Chinese (zh)
Inventor: 戴陆
Current Assignee: Beijing Dajia Internet Information Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011191748.6A
Publication of CN112256890A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/438 Presentation of query results
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/483 Retrieval using metadata automatically derived from the content

Abstract

The disclosure relates to an information display method and apparatus, an electronic device, and a storage medium. The method includes: acquiring interaction information for a target multimedia; aggregating the interaction information according to semantic similarity to obtain at least one piece of aggregation information, where each piece of aggregation information is obtained by semantically aggregating at least one piece of interaction information whose semantic similarity reaches a similarity threshold; and displaying the at least one piece of aggregation information. Because the interaction information is clustered and the resulting aggregation information is displayed instead of filtered raw comments, information loss is avoided and the efficiency with which a user reviews the information is improved.

Description

Information display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an information display method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, more and more users acquire information by watching multimedia such as videos. After watching, a user can comment on the content or interact through comments with the user who provides the multimedia resource, and the comment information can then be displayed in different ways.
In the related art, taking a live streaming scene as an example, comment information from fans (i.e., users watching the live stream) can be scrolled through the live interface as a message stream during the broadcast, and the host interacts with the fans mainly by browsing their comments and responding to them. In a large live room (i.e., one with a large number of fans), however, the sheer volume of information makes it difficult for the host to process fan comments one by one, and the limited display interface makes it difficult to show all of them. At present, comment information sent by fans is therefore generally screened by fan level, or filtered by information type, to reduce the amount of comment information that is displayed and processed. Reducing the amount of information by filtering in this way, however, easily leads to information loss.
Disclosure of Invention
The present disclosure provides an information display method, an information display apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that reducing the amount of information by filtering easily causes information loss. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an information display method, including:
acquiring interaction information of a target multimedia;
performing aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and displaying the at least one piece of aggregation information.
In one embodiment, the aggregating the interaction information according to the semantic similarity includes: and when the condition of aggregation is detected to be met, performing aggregation processing on the interaction information according to the semantic similarity.
In one embodiment, when it is detected that an aggregation condition is satisfied, performing aggregation processing on the interaction information according to semantic similarity includes any one of: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction of the target account for the interactive information is received, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity.
In one embodiment, the aggregating the interaction information according to the semantic similarity to obtain at least one piece of aggregated information includes: performing semantic recognition on each piece of interaction information, and acquiring semantic feature vectors corresponding to each piece of interaction information according to a semantic recognition result; acquiring the similarity between the semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and the semantic feature vectors corresponding to other pieces of interaction information; and if the similarity is greater than a similarity threshold value, performing aggregation classification on the interaction information corresponding to the similarity to obtain at least one classification category, and generating the aggregation information corresponding to the classification category.
In one embodiment, the performing semantic recognition on each piece of interaction information and obtaining the semantic feature vector corresponding to each piece of interaction information according to the semantic recognition result includes: performing word segmentation on each piece of interaction information according to semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In one embodiment, the generating the aggregation information corresponding to the classification category includes: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In one embodiment, the presenting the at least one piece of aggregated information includes: acquiring the quantity of interaction information under each aggregation information in the at least one aggregation information; and displaying the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information.
In one embodiment, the interaction information has corresponding account weight data; the presenting the at least one piece of aggregated information comprises: acquiring the quantity of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information; weighting the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient to obtain weighted data corresponding to the aggregation information; and displaying the at least one piece of aggregation information according to the aggregation information and the corresponding weighted data.
In one embodiment, the presenting the at least one piece of aggregation information according to the aggregation information and corresponding weighting data includes: and determining the display mode of the at least one piece of aggregation information according to the size of the weighted data, and displaying the aggregation information according to the display modes corresponding to the at least one piece of aggregation information respectively.
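The following is a minimal sketch of how such weighting and display-mode selection could look. The weighting coefficients, the thresholds, and the mapping from weighted data to display modes are illustrative assumptions; the disclosure only requires that the quantity of interaction information and the sum of the account weight data be combined with set weighting coefficients and that the display mode follow the size of the weighted data.

```python
def weighted_score(message_count, account_weight_sum, alpha=0.6, beta=0.4):
    # alpha and beta stand in for the "set weighting coefficients";
    # the concrete values here are illustrative assumptions.
    return alpha * message_count + beta * account_weight_sum

def display_mode(score, highlight_at=100.0, normal_at=30.0):
    # The mapping from the size of the weighted data to a display mode
    # (e.g. font size or position) is not fixed by the disclosure;
    # this three-level mapping is an assumption.
    if score >= highlight_at:
        return "highlighted"
    if score >= normal_at:
        return "normal"
    return "compact"

# Example: one piece of aggregation information covering 40 interaction
# messages whose senders' account weights sum to 75.
score = weighted_score(40, 75.0)
print(score, display_mode(score))  # 54.0 normal
```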
In one embodiment, after obtaining the at least one piece of aggregation information, the method further includes: receiving new interaction information of the target multimedia; and generating corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In one embodiment, the generating, according to the at least one piece of aggregation information, corresponding aggregation information for the new interaction information includes: matching the new interaction information with the at least one piece of aggregation information; if the matched aggregation information exists, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and taking the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In one embodiment, the matching the new interaction information with the at least one piece of aggregation information includes: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
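As a rough illustration of this matching step, the sketch below compares the first semantic feature vector of the new interaction information against the feature vector of each existing piece of aggregation information; the distance-based similarity measure and the threshold value are assumptions made for the example, not choices fixed by the disclosure.

```python
import numpy as np

def match_new_interaction(new_vector, aggregation_vectors, threshold=0.8):
    """Return the index of the matched aggregation information, or None.

    new_vector: first semantic feature vector of the new interaction information.
    aggregation_vectors: the feature vector of each existing piece of
    aggregation information. The distance-based similarity and the threshold
    value are assumptions made for this sketch.
    """
    best_idx, best_sim = None, 0.0
    for idx, agg_vec in enumerate(aggregation_vectors):
        sim = 1.0 / (1.0 + np.linalg.norm(np.asarray(new_vector) - np.asarray(agg_vec)))
        if sim > threshold and sim > best_sim:
            best_idx, best_sim = idx, sim
    return best_idx  # None: no matched aggregation information, so a new one is generated
```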
In one embodiment, the aggregating the interaction information according to the semantic similarity to obtain at least one piece of aggregated information includes: and sending an aggregation request to a server, wherein the aggregation request is used for triggering the server to aggregate the interaction information according to the semantic similarity and returning at least one piece of aggregated information after aggregation.
According to a second aspect of the embodiments of the present disclosure, there is provided an information display method, including:
acquiring interaction information of a target multimedia;
performing aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and returning the at least one piece of aggregation information to the client, so that the client can display the at least one piece of aggregation information to the target account.
In one embodiment, the aggregating the interaction information according to the semantic similarity includes: and when the condition of aggregation is detected to be met, performing aggregation processing on the interaction information according to the semantic similarity.
In one embodiment, when it is detected that an aggregation condition is satisfied, performing aggregation processing on the interaction information according to semantic similarity includes any one of: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction from the client is received, determining that aggregation conditions are met, and performing aggregation processing on the interaction information according to semantic similarity.
In one embodiment, the aggregating the interaction information according to the semantic similarity to obtain at least one piece of aggregated information includes: performing semantic recognition on each piece of interaction information, and acquiring semantic feature vectors corresponding to each piece of interaction information according to a semantic recognition result; acquiring the similarity between the semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and the semantic feature vectors corresponding to other pieces of interaction information; and if the similarity is greater than a similarity threshold value, performing aggregation classification on the interaction information corresponding to the similarity to obtain at least one classification category, and generating the aggregation information corresponding to the classification category.
In one embodiment, the performing semantic recognition on each piece of interaction information and obtaining the semantic feature vector corresponding to each piece of interaction information according to the semantic recognition result includes: performing word segmentation on each piece of interaction information according to semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In one embodiment, the generating the aggregation information corresponding to the classification category includes: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In one embodiment, the returning the at least one piece of aggregation information to the client includes: acquiring the quantity of interaction information under each aggregation information in the at least one aggregation information; and returning the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to the client, so that the client can display the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to a target account.
In one embodiment, the interaction information has corresponding account weight data; the returning the at least one piece of aggregation information to the client comprises: acquiring the quantity of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information; weighting the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient to obtain weighted data corresponding to the aggregation information; determining a display mode of the at least one piece of aggregation information according to the size of the weighted data; and returning the at least one piece of aggregation information and the corresponding display mode to the client, so that the client can display the at least one piece of aggregation information to the target account by adopting the display mode corresponding to the aggregation information.
In one embodiment, after obtaining the at least one piece of aggregation information, the method further includes: receiving new interaction information of the target multimedia; and generating corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In one embodiment, the generating, according to the at least one piece of aggregation information, corresponding aggregation information for the new interaction information includes: matching the new interaction information with the at least one piece of aggregation information, if the matched aggregation information exists, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and taking the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In one embodiment, the matching the new interaction information with the at least one piece of aggregation information includes: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
According to a third aspect of the embodiments of the present disclosure, there is provided an information presentation apparatus including:
the interactive information acquisition module is configured to acquire interactive information of the target multimedia;
the aggregation module is configured to perform aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
a presentation module configured to perform presentation of the at least one aggregated information.
In one embodiment, the apparatus further includes an aggregation condition detection module configured to perform, when it is detected that an aggregation condition is satisfied, aggregation processing on the interaction information according to semantic similarity.
In one embodiment, the aggregation condition detection module is configured to perform: if the number of the interactive information reaches a set number threshold, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction of the target account for the interactive information is received, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity.
In one embodiment, the aggregation module comprises: the semantic recognition unit is configured to perform semantic recognition on each piece of interaction information and acquire semantic feature vectors corresponding to the interaction information according to semantic recognition results; the similarity obtaining unit is configured to execute obtaining of similarity between a semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and semantic feature vectors corresponding to other pieces of interaction information; and the aggregation information generation unit is configured to perform aggregation classification on the interaction information corresponding to the similarity if the similarity is greater than a similarity threshold to obtain at least one classification category, and generate aggregation information corresponding to the classification category.
In one embodiment, the semantic recognition unit is configured to perform: performing word segmentation on each piece of interaction information according to semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In one embodiment, the aggregation information generation unit is configured to perform: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In one embodiment, the presentation module is configured to perform: acquiring the quantity of interaction information under each piece of aggregation information in the at least one piece of aggregation information; and displaying the at least one piece of aggregation information and the quantity of interaction information corresponding to each piece of aggregation information.
In one embodiment, the interaction information has corresponding account weight data, and the presentation module includes: a data acquisition unit configured to acquire the quantity of interaction information under the aggregation information and the sum of the account weight data corresponding to each piece of interaction information; a weighting processing unit configured to weight, according to a set weighting coefficient, the quantity of interaction information under the aggregation information and the sum of the account weight data corresponding to each piece of interaction information to obtain the weighted data corresponding to the aggregation information; and a presentation unit configured to display the at least one piece of aggregation information according to the aggregation information and the corresponding weighted data.
In one embodiment, the presentation unit is configured to perform: and determining the display mode of the at least one piece of aggregation information according to the size of the weighted data, and displaying the aggregation information according to the display modes corresponding to the at least one piece of aggregation information respectively.
In one embodiment, the aggregation module further comprises: a new interactive information receiving unit configured to perform receiving new interactive information for the target multimedia; and the new interaction information processing unit is configured to execute the generation of corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In one embodiment, the new interactive information processing unit includes: a matching subunit configured to perform matching the new interaction information with the at least one piece of aggregation information; the processing subunit is configured to execute, if there is matched aggregation information, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and using the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In one embodiment, the matching subunit is configured to perform: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
In one embodiment, the aggregation module is configured to perform: and sending an aggregation request to a server, wherein the aggregation request is used for triggering the server to aggregate the interaction information according to the semantic similarity and returning at least one piece of aggregated information after aggregation.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an information presentation apparatus including:
the interactive information acquisition module is configured to acquire interactive information of the target multimedia;
the aggregation module is configured to perform aggregation processing on the interaction information according to semantic similarity to obtain at least one piece of aggregation information, and the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and the information returning module is configured to execute returning the at least one piece of aggregation information to the client for the client to display the at least one piece of aggregation information to the target account.
In one embodiment, the apparatus further includes an aggregation condition detection module configured to perform, when it is detected that an aggregation condition is satisfied, aggregation processing on the interaction information according to semantic similarity.
In one embodiment, the aggregation condition detection module is configured to perform: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction from the client is received, determining that aggregation conditions are met, and performing aggregation processing on the interaction information according to semantic similarity.
In one embodiment, the aggregation module comprises: the semantic recognition unit is configured to perform semantic recognition on each piece of interaction information and acquire semantic feature vectors corresponding to the interaction information according to semantic recognition results; the similarity obtaining unit is configured to execute obtaining of similarity between a semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and semantic feature vectors corresponding to other pieces of interaction information; and the aggregation information generation unit is configured to perform aggregation classification on the interaction information corresponding to the similarity if the similarity is greater than a similarity threshold to obtain at least one classification category, and generate aggregation information corresponding to the classification category.
In one embodiment, the semantic recognition unit is configured to perform: performing word segmentation on each piece of interaction information according to semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In one embodiment, the aggregation information generation unit is configured to perform: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In one embodiment, the information returning module is configured to perform: acquiring the quantity of interaction information under each aggregation information in the at least one aggregation information; and returning the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to the client, so that the client can display the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to a target account.
In one embodiment, the interaction information has corresponding account weight data; the information return module is configured to perform: acquiring the quantity of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information; weighting the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient to obtain weighted data corresponding to the aggregation information; determining a display mode of the at least one piece of aggregation information according to the size of the weighted data; and returning the at least one piece of aggregation information and the corresponding display mode to the client, so that the client can display the at least one piece of aggregation information to the target account by adopting the display mode corresponding to the aggregation information.
In one embodiment, the aggregation module further comprises: a new interactive information receiving unit configured to perform receiving new interactive information for the target multimedia; and the new interaction information processing unit is configured to execute the generation of corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In one embodiment, the new interactive information processing unit includes: a matching subunit configured to perform matching the new interaction information with the at least one piece of aggregation information; the processing subunit is configured to execute, if there is matched aggregation information, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and using the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In one embodiment, the matching subunit is configured to perform: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to cause the electronic device to perform the information presentation method described in any embodiment of the first aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a server including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to cause the server to perform the information presentation method described in any embodiment of the second aspect.
According to a seventh aspect of the embodiments of the present disclosure, there is provided an information presentation system including the electronic device provided by the fifth aspect and the server provided by the sixth aspect.
According to an eighth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the information presentation method described in any one of the embodiments of the first aspect; alternatively, the instructions in the storage medium, when executed by a processor of a server, enable the server to perform the information presentation method described in any of the embodiments of the second aspect.
According to a ninth aspect of embodiments of the present disclosure, there is provided a computer program product, the program product comprising a computer program, the computer program being stored in a readable storage medium, from which the at least one processor of the device reads and executes the computer program, so that the device performs the information presentation method described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the information display method, the interaction information of the target multimedia is acquired, and the interaction information is aggregated according to semantic similarity to obtain at least one piece of aggregation information, where each piece of aggregation information is obtained by semantically aggregating at least one piece of interaction information whose semantic similarity reaches the similarity threshold; displaying the aggregation information therefore avoids information loss and improves the efficiency with which a user reviews the information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram illustrating an application environment of an information presentation method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of presenting information according to an example embodiment.
FIG. 3 is a flowchart illustrating the steps of an aggregation process in accordance with one illustrative embodiment.
Fig. 4 is a flowchart illustrating the step of generating aggregated information in accordance with an exemplary embodiment.
Fig. 5 is a flow diagram illustrating presenting aggregated information, according to an example embodiment.
Fig. 6 is a flow diagram illustrating matching aggregated information, according to an example embodiment.
Fig. 7 is a flowchart illustrating an information presentation method according to another exemplary embodiment.
Fig. 8 is a flowchart illustrating an information presentation method applied to a live scene according to another exemplary embodiment.
Fig. 9 is a schematic diagram illustrating an information presentation interface in a live scene according to another example embodiment.
FIG. 10 is a block diagram illustrating an information presentation device according to an example embodiment.
Fig. 11 is a block diagram illustrating an information presentation device according to another exemplary embodiment.
Fig. 12 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Fig. 13 is an internal block diagram of a server according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The information display method provided by the present disclosure may be applied to the application environment shown in fig. 1, which includes: a first terminal 101, a second terminal 102, and a server 103, where the first terminal 101 and the second terminal 102 can connect to the server 103 through a network for data interaction. Specifically, the first terminal 101 and the second terminal 102 both run a client of the application program, and the server 103 is the background server corresponding to that client. The first terminal 101 is a user terminal that shares multimedia, and the second terminal 102 is a user terminal that can acquire the shared multimedia and initiate interaction information based on it; the multimedia may be a live video, dynamic information shared to a circle of friends, and the like. Taking a live streaming scene as an example, the first terminal 101 may be the user terminal initiating the live stream (i.e., the host terminal), and the second terminal 102 may be a user terminal watching the live stream (i.e., a fan terminal). Generally, while the host streams through the first terminal 101, users watching the live stream can send interaction information to the host through the second terminal 102. After the first terminal 101 obtains the interaction information, it can aggregate the interaction information according to semantic similarity to obtain at least one piece of aggregation information, where each piece of aggregation information is obtained by semantically aggregating interaction information whose semantic similarity reaches a similarity threshold, and then display the at least one piece of aggregation information. Because this embodiment displays the interaction information after aggregation, no information is lost and the efficiency with which the host views the information is improved. It can be understood that the first terminal 101 may also be a user terminal watching a live stream and the second terminal 102 may also be a user terminal initiating one; in practical applications, either terminal may initiate or watch a live stream. For convenience of description, in this embodiment the first terminal 101 is taken as the user terminal initiating the live stream and the second terminal 102 as a user terminal watching it. The first terminal 101 and the second terminal 102 may be, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The server 103 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
As shown in fig. 2, an information display method is provided. This embodiment is described taking the method as applied to the first terminal (i.e., the terminal sharing the multimedia) as an example, and the method includes the following steps:
in step S210, interaction information for the target multimedia is acquired.
In step S220, aggregation processing is performed on the interaction information according to the semantic similarity, so as to obtain at least one piece of aggregation information.
The target multimedia refers to, for example, a live video shared by a user, dynamic information shared to a circle of friends, and the like. The interaction information is the information that is propagated while users interact with each other; in this embodiment, it may be comment information posted by other users on the target multimedia, or information through which other users interact, based on the target multimedia, with the user who shared it. A piece of aggregation information is obtained by semantically aggregating at least one piece of interaction information whose semantic similarity reaches the similarity threshold.
Specifically, taking a live streaming scene as an example, the host receives, through the host terminal and during the live stream, interaction information sent by users watching it. In this embodiment, the host terminal can also aggregate the received interaction information based on semantic similarity: interaction information whose pairwise semantic similarity reaches the similarity threshold is grouped into one category, and the interaction information grouped into that category is semantically aggregated to obtain a corresponding piece of aggregation information. On this basis, the host terminal clusters all the interaction information received for a given live stream based on semantic similarity to obtain at least one category, and semantically aggregates the interaction information within each category to obtain the corresponding at least one piece of aggregation information.
In step S230, at least one piece of aggregation information is presented.
In this embodiment, the aggregation information obtained by clustering the interaction information is displayed, so information loss is avoided and the efficiency with which a user reviews the information is improved.
According to the information display method, the interactive information of the target multimedia is acquired, the interactive information is aggregated according to the semantic similarity to obtain at least one piece of aggregated information, and the aggregated information obtained after clustering the interactive information is displayed, so that information loss is avoided, and the efficiency of viewing the information by a user can be improved.
In an exemplary embodiment, aggregating the interaction information according to semantic similarity includes: when it is detected that an aggregation condition is met, aggregating the interaction information according to semantic similarity. Specifically, a trigger condition for aggregating the interaction information may be preset, so that the terminal aggregates the interaction information according to semantic similarity once it detects that the condition is met. In this embodiment, the aggregation condition may be a preset threshold on the number of received pieces of interaction information: when the number of received pieces reaches the set threshold, the aggregation condition is considered met and aggregation of the received interaction information according to semantic similarity is started. The aggregation condition may also be a preset time interval, the moment at which each interval elapses being an aggregation time: if the current time matches a preset aggregation time, the aggregation condition is considered met and aggregation is started. The aggregation condition may also be an information aggregation instruction issued by a user: when an information aggregation instruction of the target account for the interaction information is received, the aggregation condition is considered met and aggregation of the received interaction information according to semantic similarity is started.
In this embodiment, a starting condition for performing aggregation processing on the interactive information can be set according to actual requirements, and therefore, when the aggregation condition is detected to be satisfied, the interactive information can be aggregated according to semantic similarity, so that different requirements in actual application can be satisfied.
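A minimal sketch of how a terminal might check these three trigger conditions is shown below; the class name, the count threshold, and the time interval are illustrative assumptions rather than values taken from the disclosure.

```python
import time

class AggregationTrigger:
    """Checks the three aggregation conditions described above."""

    def __init__(self, count_threshold=200, interval_seconds=30.0):
        # Example values; the disclosure leaves both thresholds configurable.
        self.count_threshold = count_threshold
        self.interval_seconds = interval_seconds
        self.last_aggregation_time = time.monotonic()
        self.user_requested = False

    def request_aggregation(self):
        # Condition 3: an information aggregation instruction from the target account.
        self.user_requested = True

    def should_aggregate(self, pending_count):
        # Condition 1: the number of buffered interaction messages reaches the threshold.
        if pending_count >= self.count_threshold:
            return True
        # Condition 2: the current time matches the preset aggregation time
        # (i.e. the configured interval has elapsed).
        if time.monotonic() - self.last_aggregation_time >= self.interval_seconds:
            return True
        # Condition 3: a user-issued aggregation instruction was received.
        return self.user_requested

    def mark_aggregated(self):
        self.last_aggregation_time = time.monotonic()
        self.user_requested = False

# Usage: poll the trigger whenever new interaction information arrives.
trigger = AggregationTrigger()
if trigger.should_aggregate(pending_count=250):
    # ... aggregate the buffered interaction information by semantic similarity ...
    trigger.mark_aggregated()
```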
In an exemplary embodiment, as shown in fig. 3, in step S220, the interaction information is aggregated according to the semantic similarity to obtain at least one piece of aggregated information, which may specifically be implemented by the following steps:
in step S221, semantic recognition is performed on each piece of interaction information, and semantic feature vectors corresponding to each piece of interaction information are obtained according to semantic recognition results.
The semantic recognition result refers to semantic features extracted after semantic recognition is performed on the interactive information. The semantic feature vector is obtained by performing vector conversion on the semantic features. Semantic recognition can be realized by using NLP (Natural Language Processing) technology. In this embodiment, each piece of interaction information is subjected to semantic recognition through an NLP technique, semantic features of the interaction information are extracted, and a semantic recognition result of the interaction information is obtained, and then feature conversion is performed on the semantic features of the interaction information based on the NLP technique, so that a corresponding semantic feature vector is obtained.
Specifically, each piece of interaction information may be segmented based on the NLP technique to obtain a plurality of word segments of the interaction information, which constitute the semantic recognition result of the interaction information; the word vector corresponding to each word segment is then extracted, and feature fusion and normalization are performed on these word vectors to obtain the semantic feature vector corresponding to the interaction information. Feature fusion generates new features from the word vectors of the different word segments of the same piece of interaction information by a certain method, so that the new features are more effective for classification. Normalization maps the fused features into a range that is easier to process and more beneficial for classification. In this embodiment, the interaction information is thus segmented into word segments, the word vector of each word segment is extracted, and feature fusion and normalization of these word vectors yield the semantic feature vector corresponding to the interaction information.
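The sketch below illustrates one possible realization of this step, assuming mean pooling as the feature-fusion operation and L2 normalization as the normalization step, with a toy word-embedding table; the disclosure does not fix these concrete choices, so they are assumptions made for the example.

```python
import numpy as np

def segment(text):
    # Placeholder word segmentation; a real system would use an NLP tokenizer,
    # which the disclosure only requires to be based on semantic recognition.
    return text.lower().split()

def semantic_feature_vector(text, word_vectors, dim=64):
    """Fuse per-word-segment vectors into one normalized semantic feature vector."""
    tokens = segment(text)
    vectors = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vectors:
        return np.zeros(dim)
    fused = np.mean(vectors, axis=0)             # feature fusion (mean pooling assumed)
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused   # normalization (L2 assumed)

# Toy usage with a random embedding table (illustrative only).
rng = np.random.default_rng(0)
vocab = ["please", "sing", "a", "song", "what", "camera", "do", "you", "use"]
word_vectors = {w: rng.normal(size=64) for w in vocab}
print(semantic_feature_vector("Please sing a song", word_vectors).shape)  # (64,)
```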
In step S222, a similarity between the semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and the semantic feature vectors corresponding to other pieces of interaction information is obtained.
The similarity is a metric used for comparing how alike two objects are; generally, the distance between the features of the objects can be calculated: if the distance is small, the similarity is large, and if the distance is large, the similarity is small. In this embodiment, the similarity between each piece of interaction information and every other piece can be determined by calculating the distance between the semantic feature vector corresponding to that piece of interaction information and the semantic feature vectors corresponding to the other pieces of interaction information to be aggregated. Specifically, the distance may be computed as a Euclidean distance, a Manhattan distance, or the like, which is not limited in this embodiment.
In step S223, if the similarity is greater than the similarity threshold, performing aggregation classification on the interaction information corresponding to the similarity to obtain at least one classification category, and generating aggregation information corresponding to the classification category.
Specifically, after the similarity between each piece of interaction information and other pieces of interaction information is obtained through the above steps, if the similarity greater than the similarity threshold exists, performing aggregation classification on the interaction information corresponding to the similarity greater than the similarity threshold, that is, classifying the interaction information corresponding to the similarity greater than the similarity threshold into a classification category, and generating aggregation information corresponding to the classification category. For example, if there are interaction information a, b, c, and d, the similarity between a and b, the similarity between a and c, and the similarity between a and d are obtained respectively, and if both the similarity between a and b and the similarity between a and c are greater than a set similarity threshold, the interaction information a, b, and c are classified into a classification category, and aggregation information corresponding to the classification category is generated, where the aggregation information is obtained by performing semantic aggregation on the interaction information a, b, and c in the classification category.
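A simplified grouping pass in the spirit of the a/b/c/d example, reusing the semantic_feature_vector and similarity helpers sketched earlier: each message joins the first existing category whose representative it is sufficiently similar to, otherwise it opens a new category. The greedy strategy and the threshold value are illustrative assumptions.

```python
def aggregate_classify(texts, threshold=0.8):
    categories = []   # each entry: {"rep": feature vector, "members": [message, ...]}
    for text in texts:
        vec = semantic_feature_vector(text)
        for cat in categories:
            if similarity(vec, cat["rep"]) > threshold:
                cat["members"].append(text)   # aggregate into this classification category
                break
        else:
            categories.append({"rep": vec, "members": [text]})
    return categories
```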
In an exemplary embodiment, as shown in fig. 4, in step S223, the aggregation information corresponding to the classification category is generated, which may be specifically implemented by the following steps:
in step S410, the interaction information under the classification category is acquired.
In step S420, content keywords are extracted from each piece of interaction information under the classification category to obtain content keywords of each piece of interaction information.
The content keywords refer to words or phrases capable of expressing the central content of the corresponding interaction information. In this embodiment, the content keywords of the interaction information may be extracted based on the NLP technology. Specifically, all the interaction information under the same classification category is obtained, and the content keywords of each piece of interaction information are extracted, so that the content keywords corresponding to all the interaction information under the same classification category are obtained.
In step S430, a content keyword set of the interaction information under the classification category is generated according to the content keyword of each piece of interaction information.
Specifically, a content keyword set of the interaction information under the same classification category is generated according to the content keywords corresponding to all the interaction information under the same classification category obtained in the previous step, wherein the content keyword set comprises content keywords extracted from all the interaction information under the same classification category and extraction times corresponding to each content keyword.
In step S440, the aggregation information corresponding to the classification category is generated based on the number of extractions and the content keyword.
In this embodiment, the aggregation information corresponding to the classification category may be generated based on the content keywords in the content keyword set under the same classification category and the extraction times corresponding to each content keyword. Specifically, the aggregation information corresponding to the classification category may be generated according to the one or more content keywords extracted the highest number of times in the content keyword set under the same classification category. When the content keyword set contains a large number of content keywords, the content keywords in the set may be further clustered based on the content keywords and the corresponding extraction times, and the aggregation information corresponding to the classification category is then generated based on the clustered content keywords and the clustered counts; for example, the aggregation information may be generated based on the content keyword cluster with the highest clustered count. The clustered count of a cluster is the sum of the extraction times of the content keywords that are clustered into that cluster.
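A minimal sketch of steps S410 to S440, reusing the segment helper above as a placeholder for real keyword extraction: keywords are counted across all members of a classification category, and the aggregation text is formed from the most frequently extracted keyword(s). The placeholder extractor and the choice of top_k are assumptions.

```python
from collections import Counter

def extract_keywords(text):
    return segment(text)                 # placeholder for real NLP keyword extraction

def build_aggregation_info(members, top_k=1):
    keyword_counts = Counter()           # content keyword set with extraction counts
    for text in members:
        keyword_counts.update(extract_keywords(text))
    top = [kw for kw, _ in keyword_counts.most_common(top_k)]
    return " ".join(top), keyword_counts
```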
In this embodiment, the interaction information under the classification category is obtained, the content keyword of each piece of interaction information under the classification category is extracted, and the content keyword set of the interaction information under the classification category is generated according to the content keyword of each piece of interaction information, so that the aggregation information corresponding to the classification category is generated based on the extraction times and the content keywords. In other words, a plurality of pieces of similar interaction information are aggregated into one piece of aggregation information, so that the amount of displayed information can be reduced without losing information.
In an exemplary embodiment, when at least one piece of aggregation information is displayed, the number of interaction information corresponding to each piece of aggregation information may also be displayed, where the number of interaction information refers to the number of interaction information corresponding to a certain piece of aggregation information before aggregation. Specifically, by obtaining the quantity of the interaction information under each aggregation information in the at least one aggregation information, when the at least one aggregation information is displayed, the quantity of the interaction information corresponding to each aggregation information in the at least one aggregation information can be displayed.
In an exemplary embodiment, if the interaction information has corresponding account weight data, as shown in fig. 5, in step S230, at least one piece of aggregation information is displayed, which may specifically be implemented by the following steps:
in step S231, the number of pieces of interaction information in the aggregated information and the sum of account weight data corresponding to each piece of interaction information are obtained.
The account weight data corresponding to the interaction information may be obtained based on attribute information of a user account sending the interaction information, where the attribute information of the user account includes, but is not limited to, an account level of the user account, an accumulated usage duration of the user account, and a historical operation behavior of the user account. Specifically, the account weight data corresponding to each piece of interaction information may be obtained based on the attribute information of the user account and a preset calculation rule, and then the number of interaction information under the aggregated information and the account weight data corresponding to each piece of interaction information are obtained, so that the weighted data of the aggregated information may be obtained according to the number of interaction information under the aggregated information and the account weight data corresponding to each piece of interaction information.
In step S232, the number of pieces of interaction information under the aggregation information and the sum of the account weight data corresponding to each piece of interaction information are weighted according to the set weighting coefficient, and the weighting data corresponding to the aggregation information is obtained.
The weighting is a process of giving a feature value to an element in order to reflect the importance of that element in the whole element system, and the feature value represents the weighting coefficient of the element. In this embodiment, the elements to be weighted are the number of pieces of interaction information under the aggregation information and the sum of the account weight data corresponding to each piece of interaction information. Specifically, a corresponding weighting coefficient may be preset for each element based on its importance, so that after the number of pieces of interaction information under the aggregation information and the sum of the account weight data corresponding to each piece of interaction information are obtained, they may be weighted according to the set weighting coefficients to obtain the weighting data corresponding to the aggregation information. The weighting process may be understood as multiplying each element by its weighting coefficient. For example, if the number of pieces of interaction information under a certain piece of aggregation information is N, the weighting coefficient preset for the number of pieces of interaction information is a, the sum of the account weight data corresponding to each piece of interaction information is M, and the weighting coefficient preset for the sum of the account weight data is b, the weighting data of the aggregation information is a × N + b × M.
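The formula can be written as a one-line helper; the coefficient values in the worked example are illustrative.

```python
def weighted_data(num_interactions, account_weight_sum, a=0.6, b=0.4):
    # weighting data = a * N + b * M, as described above
    return a * num_interactions + b * account_weight_sum

# e.g. N = 120 pieces of interaction information, account weight sum M = 45.0:
# weighted_data(120, 45.0) == 0.6 * 120 + 0.4 * 45.0 == 90.0
```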
In step S233, at least one piece of aggregation information is presented according to the aggregation information and the corresponding weighting data.
In this embodiment, the display mode of the at least one piece of aggregation information may be determined according to the size of the weighted data, so that the aggregation information is displayed according to the display modes corresponding to the at least one piece of aggregation information respectively. The display mode includes, but is not limited to, displaying the arrangement order of the aggregated information, and displaying effects, such as corresponding font size, color, animation, and the like, that are expressed when the aggregated information is displayed.
Specifically, the at least one piece of aggregation information may be sorted according to the size of the weighted data, so that the at least one piece of aggregation information is displayed in descending order according to the sorting result, that is, aggregation information with larger weighted data is displayed at a front position, and aggregation information with smaller weighted data is displayed at a rear position. When the aggregation information is displayed, the one or more pieces of aggregation information with the highest weighted data can be highlighted by changing the font size and color or adding animation, so that the higher the weighted data, the more prominent the display effect of the aggregation information. Meanwhile, the number of pieces of corresponding interaction information can be displayed for each piece of aggregation information (this number can also be the number of users who provided the interaction information under the aggregation information), which greatly improves the recognizability of the displayed aggregation information.
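A small helper showing the ordering logic: entries are ranked by weighted data in descending order, and the top entry is flagged for emphasis. The highlight flag is an illustrative stand-in for the font, color, or animation effects mentioned above.

```python
def order_for_display(agg_items):
    # agg_items: list of (aggregation_text, interaction_count, weighted_value) tuples
    ranked = sorted(agg_items, key=lambda item: item[2], reverse=True)
    return [
        {"text": text, "count": count, "highlight": i == 0}
        for i, (text, count, _) in enumerate(ranked)
    ]
```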
In an exemplary embodiment, as shown in fig. 6, after obtaining at least one piece of aggregation information, the method further includes:
in step S610, new interaction information for the target multimedia is received.
The new interactive information refers to the latest received interactive information for the target multimedia. In this embodiment, after obtaining at least one piece of aggregation information obtained by aggregating the interactive information of the target multimedia, new interactive information of the target multimedia can be received.
In step S620, according to at least one piece of aggregation information, corresponding aggregation information is generated for the new interaction information.
In this embodiment, based on at least one piece of aggregation information obtained in the above steps, aggregation information corresponding to newly received new interaction information for the target multimedia may also be generated. Specifically, the new interaction information is matched with at least one piece of aggregation information which is obtained, that is, whether aggregation information matched with the new interaction information exists in the at least one piece of aggregation information which is obtained is judged, if the matched aggregation information exists, the new interaction information is classified into a classification type corresponding to the matched aggregation information, and the matched aggregation information is used as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
Further, semantic recognition can be performed on the new interaction information and the at least one piece of aggregation information respectively, a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively are obtained according to a semantic recognition result, the similarity between the first semantic feature vector of the new interaction information and the second feature vector corresponding to the at least one piece of aggregation information respectively is obtained, and if aggregation information with the similarity larger than a similarity threshold exists, it is determined that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist. If the matched aggregation information exists, the new interaction information is classified into the matched aggregation information, and if a plurality of pieces of matched aggregation information exist, the aggregation information corresponding to the maximum similarity can be determined according to the size of the similarity, and the new interaction information is classified into the aggregation information corresponding to the maximum similarity. If there is no matching aggregation information, a new piece of aggregation information may be created based on the new interaction information, i.e., the new interaction information is used alone as a piece of aggregation information, and the weighting data of the aggregation information is calculated based on the method shown in fig. 5. It can be understood that, after the new interaction information is classified into the aggregation information, when the aggregation information is displayed, the weighting data of the aggregation information may also be updated according to the method shown in fig. 5, so that information display may be performed according to the updated weighting data, so as to ensure the accuracy of the displayed information.
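A sketch of folding a newly received message into the existing aggregation information, reusing the helpers above; the threshold and the in-place update are illustrative. If no existing entry clears the threshold, the new message becomes its own piece of aggregation information.

```python
def merge_new_interaction(text, categories, threshold=0.8):
    vec = semantic_feature_vector(text)
    scored = [(similarity(vec, cat["rep"]), cat) for cat in categories]
    best_sim, best_cat = max(scored, default=(0.0, None), key=lambda s: s[0])
    if best_cat is not None and best_sim > threshold:
        best_cat["members"].append(text)        # classify into the matched aggregation info
    else:
        categories.append({"rep": vec, "members": [text]})   # create a new aggregation entry
    return categories
```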
In an exemplary embodiment, the interactive information of the target multimedia is acquired, and the interactive information is aggregated according to the semantic similarity to obtain at least one piece of aggregated information, which can also be realized through interaction with a server. Specifically, after the client acquires the interactive information for the target multimedia, a corresponding aggregation request may be sent to the server, where the aggregation request is used to trigger the server to perform aggregation processing on the interactive information according to the semantic similarity, and return at least one piece of aggregated information after the aggregation processing. Therefore, the resource consumption of the client for processing data is reduced, and the processing performance of the client is improved.
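As one way the client-side request could look, the following sketch posts the collected interaction information to a server endpoint and reads back the aggregated result. The endpoint URL, payload fields, and response key are hypothetical illustrations, not an API defined by this disclosure.

```python
import requests

def request_aggregation(interaction_messages, media_id):
    payload = {"mediaId": media_id, "interactions": interaction_messages}
    resp = requests.post("https://example.com/api/aggregate", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()["aggregations"]   # at least one piece of aggregation information
```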
In an exemplary embodiment, as shown in fig. 7, an information display method is provided. This embodiment is described by taking the application of the method to a server as an example, and the method specifically includes the following steps:
in step S710, interaction information for the target multimedia is acquired.
In step S720, the interactive information is aggregated according to the semantic similarity to obtain at least one piece of aggregated information.
The target multimedia refers to content such as a live video shared by a user or an update shared to the user's social feed. The interaction information may be comment information sent by other users based on the target multimedia, or may be information through which other users interact, based on the target multimedia, with the user sharing the target multimedia. In this embodiment, when other users send corresponding interaction information based on the target multimedia, the interaction information is usually forwarded through a server to the user account sharing the target multimedia. The server can therefore obtain the interaction information for the target multimedia and perform aggregation processing on the obtained interaction information according to semantic similarity, that is, at least one piece of interaction information whose semantic similarity reaches the similarity threshold is aggregated into one category, and semantic aggregation is performed on the interaction information aggregated into that category, thereby obtaining a corresponding piece of aggregation information. Specifically, the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information whose semantic similarity reaches the similarity threshold.
In step S730, at least one piece of aggregation information is returned to the client, so that the client displays the at least one piece of aggregation information to the target account.
The target account refers to a user account for sharing the target multimedia. In the embodiment, at least one piece of aggregated information obtained by clustering the interactive information is returned to the client side, so that the client side can display the aggregated information to the target account, information loss is avoided, and the efficiency of the user for viewing the information can be improved.
According to the information display method, the server acquires the interactive information of the target multimedia, the interactive information is aggregated according to the semantic similarity to obtain at least one piece of aggregated information, and the at least one piece of aggregated information is returned to the client for the client to display to the target account, so that information loss is avoided, and the efficiency of the user for viewing the information can be improved. And because the client acquires the aggregation information from the server and directly displays the aggregation information, the resource consumption of the client for processing data is reduced, and the processing performance of the client is improved.
In an exemplary embodiment, the aggregating the interactive information according to semantic similarity includes: and when the condition of aggregation is detected to be met, performing aggregation processing on the interaction information according to the semantic similarity.
In an exemplary embodiment, when it is detected that an aggregation condition is satisfied, performing aggregation processing on the interaction information according to semantic similarity, where the aggregation processing includes any one of: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction from the client is received, determining that aggregation conditions are met, and performing aggregation processing on the interaction information according to semantic similarity.
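A minimal check of the three aggregation triggers listed above (quantity threshold reached, preset aggregation time matched, or aggregation instruction received); the threshold and the minute-based schedule are illustrative assumptions.

```python
from datetime import datetime

def should_aggregate(pending_count, instructed=False, now=None,
                     count_threshold=50, scheduled_minute=0):
    now = now or datetime.now()
    return (
        pending_count >= count_threshold    # quantity reaches the set threshold
        or now.minute == scheduled_minute   # current time matches the preset aggregation time
        or instructed                       # information aggregation instruction received
    )
```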
In an exemplary embodiment, the aggregating the interaction information according to the semantic similarity to obtain at least one piece of aggregated information includes: performing semantic recognition on each piece of interaction information, and acquiring semantic feature vectors corresponding to each piece of interaction information according to a semantic recognition result; acquiring the similarity between the semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and the semantic feature vectors corresponding to other pieces of interaction information; and if the similarity is greater than a similarity threshold value, performing aggregation classification on the interaction information corresponding to the similarity to obtain at least one classification category, and generating the aggregation information corresponding to the classification category.
In an exemplary embodiment, performing semantic recognition on each piece of the interaction information and obtaining the semantic feature vector corresponding to each piece of the interaction information according to the semantic recognition result includes: performing word segmentation processing on each piece of interaction information through semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization processing on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In an exemplary embodiment, generating the aggregated information corresponding to the classification category includes: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In an exemplary embodiment, returning the at least one piece of aggregated information to the client includes: acquiring the quantity of interaction information under each aggregation information in at least one aggregation information; and returning at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to the client, so that the client can display the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to the target account.
In an exemplary embodiment, the interaction information has corresponding account weight data; the returning the at least one piece of aggregation information to the client comprises: acquiring the quantity of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information; weighting the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient to obtain weighted data corresponding to the aggregation information; determining a display mode of at least one piece of aggregation information according to the size of the weighted data; and returning the at least one piece of aggregation information and the corresponding display mode to the client, so that the client can display the at least one piece of aggregation information to the target account by adopting the display mode corresponding to the aggregation information.
In an exemplary embodiment, after obtaining at least one piece of aggregation information, the method further includes: and receiving new interaction information of the target multimedia, and generating corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In an exemplary embodiment, generating corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information includes: matching the new interaction information with the at least one piece of aggregation information, if the matched aggregation information exists, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and taking the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In an exemplary embodiment, matching the new interaction information with the at least one piece of aggregation information includes: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
In a specific embodiment, as shown in fig. 8, a live broadcast scenario is taken as an example in the following description. The information display method of the present application is implemented through interaction among an anchor terminal, a fan terminal, and a server, and includes the following steps:
step 801, the server acquires the interaction information sent by the fan terminal for the target multimedia.
Step 802, the server detects whether an aggregation condition is satisfied, and when the aggregation condition is detected to be satisfied, performs aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information.
For a specific aggregation processing process, reference may be made to the methods shown in fig. 3 and fig. 4, which is not described in detail in this embodiment.
Step 803, the server returns the aggregation information obtained by aggregating the interactive information to the anchor terminal.
Step 804, the anchor terminal renders and displays the received aggregation information; for the presentation, reference may be made to the manner shown in fig. 5.
Step 805, when the server receives new interactive information for the target multimedia, the new interactive information is matched with at least one piece of aggregation information. The specific matching process may refer to the steps shown in fig. 6.
In step 806, the server updates the aggregated information based on the matching results.
In step 807, the server returns the updated aggregation information to the anchor terminal.
Step 808, the anchor terminal renders and displays the updated aggregation information.
A specific display interface is shown in fig. 9. In addition to the traditional message scrolling region displayed in the live interface, in which the interaction information is displayed in a scrolling manner, a message aggregation display region for displaying the aggregation information is added; the specific manner of displaying messages in this region may refer to the manner shown in fig. 5 and is not described in detail in this embodiment.
According to the above information display method, through interaction between the terminal and the server, the aggregation processing of the interaction information is performed by the server, and the aggregation information obtained after the aggregation processing is rendered and displayed by the terminal. Because the terminal obtains the aggregation information from the server and displays it directly, the resource consumption of the terminal for processing data is reduced and the processing performance of the terminal is improved. In addition, because the displayed aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information whose semantic similarity reaches the similarity threshold, information loss is avoided and the efficiency of the user in viewing the information can be improved.
It should be understood that although the various steps in the flowcharts of fig. 1 to fig. 8 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 to fig. 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 10 is a block diagram illustrating an information presentation device according to an example embodiment. Referring to fig. 10, the apparatus includes an interactive information acquiring module 1001, an aggregating module 1002, and a presenting module 1003.
An interactive information acquisition module 1001 configured to perform acquisition of interactive information on a target multimedia;
the aggregation module 1002 is configured to perform aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, where the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information whose semantic similarity reaches a similarity threshold;
a presentation module 1003 configured to perform presenting the at least one piece of aggregated information.
In an exemplary embodiment, the apparatus further includes an aggregation condition detection module configured to perform, when it is detected that the aggregation condition is satisfied, aggregation processing on the interaction information according to the semantic similarity.
In an exemplary embodiment, the aggregation condition detection module is configured to perform: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction of the target account for the interactive information is received, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity.
In an exemplary embodiment, the aggregation module includes: the semantic recognition unit is configured to perform semantic recognition on each piece of interaction information and acquire semantic feature vectors corresponding to the interaction information according to semantic recognition results; the similarity obtaining unit is configured to execute obtaining of similarity between a semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and semantic feature vectors corresponding to other pieces of interaction information; and the aggregation information generation unit is configured to perform aggregation classification on the interaction information corresponding to the similarity if the similarity is greater than a similarity threshold to obtain at least one classification category, and generate aggregation information corresponding to the classification category.
In an exemplary embodiment, the semantic recognition unit is configured to perform: performing word segmentation processing on each piece of interaction information through semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization processing on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In an exemplary embodiment, the aggregation information generation unit is configured to perform: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In an exemplary embodiment, the presentation module is configured to perform: acquiring the quantity of interaction information under each aggregation information in the at least one aggregation information; and displaying the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information.
In an exemplary embodiment, the interaction information has corresponding account weight data; the display module comprises: the data acquisition unit is configured to acquire the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each piece of interactive information; the weighting processing unit is configured to perform weighting processing on the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient, and acquire the weighting data corresponding to the aggregation information; a presentation unit configured to perform presentation of the at least one piece of aggregation information according to the aggregation information and the corresponding weighting data.
In an exemplary embodiment, the presentation unit is configured to perform: and determining the display mode of the at least one piece of aggregation information according to the size of the weighted data, and displaying the aggregation information according to the display modes corresponding to the at least one piece of aggregation information respectively.
In an exemplary embodiment, the aggregation module further comprises: a new interactive information receiving unit configured to perform receiving new interactive information for the target multimedia; and the new interaction information processing unit is configured to execute the generation of corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In an exemplary embodiment, the new interactive information processing unit includes: a matching subunit configured to perform matching the new interaction information with the at least one piece of aggregation information; the processing subunit is configured to execute, if there is matched aggregation information, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and using the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In an exemplary embodiment, the matching subunit is configured to perform: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
In an exemplary embodiment, the aggregation module is configured to perform: acquiring interaction information of a target multimedia, sending an aggregation request to a server, wherein the aggregation request is used for triggering the server to aggregate the interaction information according to semantic similarity, and returning at least one piece of aggregated information after aggregation.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 11 is a block diagram illustrating an information presentation device according to an example embodiment. Referring to fig. 11, the apparatus includes an interactive information acquiring module 1101, an aggregation module 1102, and an information returning module 1103.
An interactive information acquisition module 1101 configured to perform acquisition of interactive information on a target multimedia;
the aggregation module 1102 is configured to perform aggregation processing on the interaction information according to semantic similarity to obtain at least one piece of aggregation information, where the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information whose semantic similarity reaches a similarity threshold;
an information returning module 1103 configured to perform returning the at least one piece of aggregation information to the client for the client to expose the at least one piece of aggregation information to the target account.
In an exemplary embodiment, the apparatus further includes an aggregation condition detection module configured to perform, when it is detected that the aggregation condition is satisfied, aggregation processing on the interaction information according to the semantic similarity.
In an exemplary embodiment, the aggregation condition detection module is configured to perform: if the quantity of the interactive information reaches a set quantity threshold value, determining that an aggregation condition is met, and performing aggregation processing on the interactive information according to semantic similarity; if the current time is matched with the preset aggregation time, determining that the aggregation condition is met, and performing aggregation processing on the interaction information according to semantic similarity; and if an information aggregation instruction from the client is received, determining that aggregation conditions are met, and performing aggregation processing on the interaction information according to semantic similarity.
In an exemplary embodiment, the aggregation module includes: the semantic recognition unit is configured to perform semantic recognition on each piece of interaction information and acquire semantic feature vectors corresponding to the interaction information according to semantic recognition results; the similarity obtaining unit is configured to execute obtaining of similarity between a semantic feature vector corresponding to each piece of interaction information in each piece of interaction information and semantic feature vectors corresponding to other pieces of interaction information; and the aggregation information generation unit is configured to perform aggregation classification on the interaction information corresponding to the similarity if the similarity is greater than a similarity threshold to obtain at least one classification category, and generate aggregation information corresponding to the classification category.
In an exemplary embodiment, the semantic recognition unit is configured to perform: performing word segmentation processing on each piece of interaction information through semantic recognition to obtain a plurality of word segments of the interaction information; and extracting the word vector corresponding to each word segment of the interaction information, and performing feature fusion and normalization processing on the word vectors corresponding to the word segments to obtain the semantic feature vector corresponding to the interaction information.
In an exemplary embodiment, the aggregation information generation unit is configured to perform: acquiring interaction information under the classification category; extracting content keywords of each piece of interactive information under the classification category to obtain the content keywords of each piece of interactive information; generating a content keyword set of the interactive information under the classification category according to the content keywords of each piece of interactive information, wherein the content keyword set comprises extracted content keywords and corresponding extraction times; and generating the aggregation information corresponding to the classification category based on the extraction times and the content keywords.
In an exemplary embodiment, the information return module is configured to perform: acquiring the quantity of interaction information under each aggregation information in the at least one aggregation information; and returning the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to the client, so that the client can display the at least one piece of aggregation information and the quantity of the interaction information corresponding to the aggregation information to a target account.
In an exemplary embodiment, the interaction information has corresponding account weight data; the information return module is configured to perform: acquiring the quantity of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information; weighting the number of the interactive information under the aggregation information and the sum of the account weight data corresponding to each interactive information according to a set weighting coefficient to obtain weighted data corresponding to the aggregation information; determining a display mode of the at least one piece of aggregation information according to the size of the weighted data; and returning the at least one piece of aggregation information and the corresponding display mode to the client, so that the client can display the at least one piece of aggregation information to the target account by adopting the display mode corresponding to the aggregation information.
In an exemplary embodiment, the aggregation module further comprises: a new interactive information receiving unit configured to perform receiving new interactive information for the target multimedia; and the new interaction information processing unit is configured to execute the generation of corresponding aggregation information for the new interaction information according to the at least one piece of aggregation information.
In an exemplary embodiment, the new interactive information processing unit includes: a matching subunit configured to perform matching the new interaction information with the at least one piece of aggregation information; the processing subunit is configured to execute, if there is matched aggregation information, classifying the new interaction information into a classification category corresponding to the matched aggregation information, and using the matched aggregation information as the aggregation information corresponding to the new interaction information; and if the matched aggregation information does not exist, generating the aggregation information corresponding to the new interaction information.
In an exemplary embodiment, the matching subunit is configured to perform: performing semantic recognition on the new interaction information and the at least one piece of aggregation information respectively, and acquiring a first semantic feature vector of the new interaction information and a second feature vector corresponding to the at least one piece of aggregation information respectively according to a semantic recognition result; acquiring the similarity between a first semantic feature vector of the new interaction information and a second feature vector corresponding to at least one piece of aggregation information respectively; if the aggregation information with the similarity larger than the similarity threshold exists, determining that matched aggregation information exists; and if the aggregation information with the similarity larger than the similarity threshold does not exist, determining that the matched aggregation information does not exist.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In one exemplary embodiment, an information presentation system is provided that includes an electronic device and a server. The electronic device and the server will be described below with reference to fig. 12 and 13.
FIG. 12 is a block diagram illustrating an electronic device Z00 for information presentation in accordance with an exemplary embodiment. For example, electronic device Z00 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 12, device Z00 may include one or more of the following components: a processing component Z02, a memory Z04, a power component Z06, a multimedia component Z08, an audio component Z10, an interface to input/output (I/O) Z12, a sensor component Z14 and a communication component Z16.
The processing component Z02 generally controls the overall operation of the device Z00, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component Z02 may include one or more processors Z20 to execute instructions to perform all or part of the steps of the method described above. Further, the processing component Z02 may include one or more modules that facilitate interaction between the processing component Z02 and other components. For example, the processing component Z02 may include a multimedia module to facilitate interaction between the multimedia component Z08 and the processing component Z02.
The memory Z04 is configured to store various types of data to support operations at device Z00. Examples of such data include instructions for any application or method operating on device Z00, contact data, phonebook data, messages, pictures, videos, etc. The memory Z04 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component Z06 provides power to the various components of the device Z00. The power component Z06 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device Z00.
The multimedia component Z08 comprises a screen between the device Z00 and the user providing an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component Z08 includes a front facing camera and/or a rear facing camera. When device Z00 is in an operating mode, such as a capture mode or a video mode, the front-facing camera and/or the rear-facing camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component Z10 is configured to output and/or input an audio signal. For example, the audio component Z10 includes a Microphone (MIC) configured to receive external audio signals when the device Z00 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory Z04 or transmitted via the communication component Z16. In some embodiments, the audio component Z10 further includes a speaker for outputting audio signals.
The I/O interface Z12 provides an interface between the processing component Z02 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly Z14 includes one or more sensors for providing status assessments of various aspects of the device Z00. For example, the sensor assembly Z14 may detect the open/closed state of the device Z00 and the relative positioning of components such as the display and keypad of the device Z00. The sensor assembly Z14 may also detect a change in the position of the device Z00 or of a component of the device Z00, the presence or absence of user contact with the device Z00, the orientation or acceleration/deceleration of the device Z00, and a change in the temperature of the device Z00. The sensor assembly Z14 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly Z14 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly Z14 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component Z16 is configured to facilitate wired or wireless communication between device Z00 and other devices. Device Z00 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component Z16 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component Z16 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device Z00 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory Z04, comprising instructions executable by the processor Z20 of the device Z00 to perform the above method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
FIG. 13 is a block diagram illustrating an apparatus for information presentation S00, according to an example embodiment. For example, the device S00 may be a server. Referring to FIG. 13, device S00 includes a processing component S20 that further includes one or more processors and memory resources represented by memory S22 for storing instructions, e.g., applications, that are executable by processing component S20. The application program stored in the memory S22 may include one or more modules each corresponding to a set of instructions. Further, the processing component S20 is configured to execute instructions to perform the above-described information presentation method.
The device S00 may also include a power supply component S24 configured to perform power management of the device S00, a wired or wireless network interface S26 configured to connect the device S00 to a network, and an input-output (I/O) interface S28. The device S00 may operate based on an operating system stored in the memory S22, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or the like.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as the memory S22 comprising instructions, executable by the processor of the device S00 to perform the above method. The storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An information display method, comprising:
acquiring interaction information of a target multimedia;
performing aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and displaying the at least one piece of aggregation information.
2. The method according to claim 1, wherein performing aggregation processing on the interaction information according to the semantic similarity comprises:
when it is detected that an aggregation condition is satisfied, performing aggregation processing on the interaction information according to the semantic similarity.
3. The method according to claim 2, wherein, when it is detected that the aggregation condition is satisfied, performing aggregation processing on the interaction information according to the semantic similarity comprises any one of the following:
if the quantity of the interaction information reaches a set quantity threshold, determining that the aggregation condition is satisfied, and performing aggregation processing on the interaction information according to the semantic similarity;
if the current time matches a preset aggregation time, determining that the aggregation condition is satisfied, and performing aggregation processing on the interaction information according to the semantic similarity;
and if an information aggregation instruction of a target account for the interaction information is received, determining that the aggregation condition is satisfied, and performing aggregation processing on the interaction information according to the semantic similarity.
4. An information display method, comprising:
acquiring interaction information of a target multimedia;
performing aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and returning the at least one piece of aggregation information to the client, so that the client can display the at least one piece of aggregation information to the target account.
5. An information display device, comprising:
an interaction information acquisition module configured to acquire interaction information of a target multimedia;
an aggregation module configured to perform aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and a presentation module configured to present the at least one piece of aggregation information.
6. An information display device, comprising:
an interaction information acquisition module configured to acquire interaction information of a target multimedia;
an aggregation module configured to perform aggregation processing on the interaction information according to the semantic similarity to obtain at least one piece of aggregation information, wherein the aggregation information is obtained by performing semantic aggregation on at least one piece of interaction information of which the semantic similarity reaches a similarity threshold;
and an information returning module configured to return the at least one piece of aggregation information to the client, so that the client displays the at least one piece of aggregation information to the target account.
7. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the information display method of any one of claims 1 to 3.
8. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the information display method of claim 4.
9. An information display system comprising an electronic device according to claim 7 and a server according to claim 8.
10. A storage medium having instructions stored therein that, when executed by a processor of a server, enable the server to perform the information display method of any one of claims 1 to 4.
CN202011191748.6A 2020-10-30 2020-10-30 Information display method and device, electronic equipment and storage medium Pending CN112256890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011191748.6A CN112256890A (en) 2020-10-30 2020-10-30 Information display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112256890A (en) 2021-01-22

Family ID: 74268428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011191748.6A Pending CN112256890A (en) 2020-10-30 2020-10-30 Information display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112256890A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108055593A (en) * 2017-12-20 2018-05-18 广州虎牙信息科技有限公司 A kind of processing method of interactive message, device, storage medium and electronic equipment
CN108156148A (en) * 2017-12-21 2018-06-12 北京达佳互联信息技术有限公司 Comment polymerization methods of exhibiting, system, server and intelligent terminal
CN108235148A (en) * 2018-01-09 2018-06-29 武汉斗鱼网络科技有限公司 Similar barrage merging method, storage medium, electronic equipment and system in live streaming
CN109408639A (en) * 2018-10-31 2019-03-01 广州虎牙科技有限公司 A kind of barrage classification method, device, equipment and storage medium
CN111339295A (en) * 2020-02-19 2020-06-26 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and computer readable medium for presenting information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947959A (en) * 2021-10-23 2022-01-18 首都医科大学附属北京天坛医院 Remote teaching system and live broadcast problem screening system based on MR technology
CN114222175A (en) * 2021-12-14 2022-03-22 北京达佳互联信息技术有限公司 Barrage display method and device, terminal equipment, server and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination