CN109255037B - Method and apparatus for outputting information - Google Patents

Method and apparatus for outputting information

Info

Publication number
CN109255037B
CN109255037B (Application CN201811015493.0A)
Authority
CN
China
Prior art keywords
target entity
video
information
heat data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811015493.0A
Other languages
Chinese (zh)
Other versions
CN109255037A (en)
Inventor
陈大伟
刘宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811015493.0A priority Critical patent/CN109255037B/en
Priority to PCT/CN2018/115949 priority patent/WO2020042376A1/en
Publication of CN109255037A publication Critical patent/CN109255037A/en
Application granted granted Critical
Publication of CN109255037B publication Critical patent/CN109255037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the application disclose a method and an apparatus for outputting information. One embodiment of the method comprises: determining at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph; for a target entity of the at least one target entity, determining at least one piece of heat data from the attribute information of the target entity, where the heat data characterizes the degree of attention received by the video represented by the target entity; scoring the video represented by the target entity based on the determined at least one piece of heat data; and ranking the at least one target entity according to the magnitude of the obtained scores and outputting related information of the target entities according to the ranking. This implementation can improve the accuracy of ranking the determined target entities and helps present the related information of each target entity to a user in a targeted manner.

Description

Method and apparatus for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
A knowledge graph is a knowledge base, also known as a semantic network, with a directed graph structure: the nodes of the graph represent entities or concepts, and the edges represent the various semantic relationships between those entities and concepts. Knowledge graphs can be applied in many fields, such as information search and information recommendation. By using a knowledge graph, other entities related to an entity representing certain information can be obtained, so that other information related to that information can be obtained more accurately.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, where the method includes: determining at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph; for a target entity of the at least one target entity, determining at least one piece of heat data from the attribute information of the target entity, where the heat data characterizes the degree of attention received by the video represented by the target entity; scoring the video represented by the target entity based on the determined at least one piece of heat data; and ranking the at least one target entity according to the magnitude of the obtained scores, and outputting related information of the target entities of the at least one target entity according to the ranking.
In some embodiments, scoring the video represented by the target entity based on the determined at least one piece of heat data comprises: for each piece of heat data in the determined at least one piece of heat data, obtaining a score corresponding to that heat data based on a preset conversion mode corresponding to it; and performing a weighted summation of the obtained scores based on preset weight values corresponding to the heat data to obtain the score of the video represented by the target entity.
In some embodiments, prior to ranking the at least one target entity by the size of the derived score, the method further comprises: and for a target entity in at least one target entity, acquiring preset authority cost information of the video represented by the target entity.
In some embodiments, outputting the related information of the target entities of the at least one target entity according to the ranking includes: and outputting the related information of the target entity in the at least one target entity and the authority cost information of the video represented by the target entity in the at least one target entity according to the sorting.
In some embodiments, the heat data includes at least one of: play quantity data of the video represented by the target entity, attention quantity data of the video represented by the target entity, and comment quantity data of the video represented by the target entity.
In some embodiments, the information related to the target entity comprises at least one of: the title of the video represented by the target entity, the image included in the video represented by the target entity and the corresponding heat data of the target entity.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: a determining unit configured to determine at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph; a scoring unit configured to determine, for a target entity of the at least one target entity, at least one piece of heat data from the attribute information of the target entity, where the heat data characterizes the degree of attention received by the video represented by the target entity, and to score the video represented by the target entity based on the determined at least one piece of heat data; and an output unit configured to rank the at least one target entity according to the magnitude of the obtained scores and to output related information of the target entities of the at least one target entity according to the ranking.
In some embodiments, the scoring unit comprises: a conversion module configured to obtain, for each piece of heat data in the determined at least one piece of heat data, a score corresponding to that heat data based on a preset conversion mode corresponding to it; and a calculation module configured to perform a weighted summation of the obtained scores based on preset weight values corresponding to the heat data to obtain the score of the video represented by the target entity.
In some embodiments, the apparatus further comprises: the obtaining unit is configured to obtain preset authority cost information of the video represented by the target entity for the target entity in at least one target entity.
In some embodiments, the output unit is further configured to: and outputting the related information of the target entity in the at least one target entity and the authority cost information of the video represented by the target entity in the at least one target entity according to the sorting.
In some embodiments, the heat data includes at least one of: play quantity data of the video represented by the target entity, attention quantity data of the video represented by the target entity, and comment quantity data of the video represented by the target entity.
In some embodiments, the information related to the target entity comprises at least one of: the title of the video represented by the target entity, the image included in the video represented by the target entity and the corresponding heat data of the target entity.
In a third aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for outputting information, at least one target entity is determined from the entities characterizing videos that are included in a pre-established knowledge graph, heat data is determined for each target entity, the video represented by each target entity is scored based on the heat data, and finally the target entities are ranked by score and related information of the target entities of the at least one target entity is output according to the ranking. The attribute information of the entities in the knowledge graph can thereby be used effectively, the accuracy of ranking the determined target entities is improved, and the related information of each target entity can be presented to a user in a targeted manner.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for outputting information, according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for outputting information according to an embodiment of the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for outputting information according to an embodiment of the present application;
FIG. 5 is a block diagram of one embodiment of an apparatus for outputting information in accordance with an embodiment of the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which a method for outputting information or an apparatus for outputting information of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a video playing application, a web browser application, a search application, an instant messaging tool, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services, such as a background information processing server providing support for relevant information of target entities presented on the terminal devices 101, 102, 103. The background information processing server may process the entities included in the pre-established knowledge graph and obtain a processing result (e.g., related information of the ranked target entities).
It should be noted that the method for outputting information provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for outputting information is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present application is shown. The method for outputting information comprises the following steps:
at step 201, at least one target entity is determined from the entities characterizing the video included in the pre-established knowledge-graph.
In this embodiment, an execution body of the method for outputting information (e.g., the server shown in FIG. 1) may determine at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph, where a target entity is an entity used to characterize a video. The pre-established knowledge graph may be stored in the execution body, or may be stored in another electronic device communicatively connected to the execution body. In general, the entities in a knowledge graph may be used to characterize things or concepts (e.g., people, places, times, information, etc.). The form of an entity may include at least one of: numbers, words, symbols, etc. In this embodiment, the knowledge graph includes entities used to characterize videos. As an example, a pre-established entity used to characterize a video may be "v-abc", where "v" indicates that the entity characterizes a video and "abc" is an identifier of the characterized video. In addition, the knowledge graph of this embodiment may also include entities characterizing things or concepts other than videos. For example, a pre-established entity characterizing a person may be "p-xyz", where "p" indicates that the entity characterizes a person and "xyz" is an identifier of the characterized person.
The entities that characterize videos may have corresponding attribute information. The attribute information is information related to the video characterized by the entity and may include, but is not limited to, at least one of the following: person information related to the video (e.g., producer, actors, director), time information related to the video (e.g., release time, shooting time), source information of the video (e.g., the play address of the video, the name of the website where the video is located), and other information related to the video content (e.g., a synopsis, stills, poster images), etc. Generally, in a knowledge graph the correspondence between an entity and its attribute information may be represented by a data structure in the form of a triple, i.e., "entity-attribute-attribute value", so that the attribute information of an entity may include the above-mentioned attribute and attribute value. For example, a triple may be "abc123-name-XXX", where "abc123" is an entity used to characterize the movie "XXX", "name" is an attribute, and "XXX" is an attribute value.
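For illustration, the triple structure just described can be sketched in a few lines of Python; the function name, entity identifiers, and attribute names below are assumptions made for this example rather than a format prescribed by the embodiment.

```python
# A minimal sketch (not the embodiment's actual data format) of how
# "entity-attribute-attribute value" triples could be held in memory.
from collections import defaultdict

def build_graph(triples):
    """Index triples as: entity -> attribute -> list of attribute values."""
    graph = defaultdict(lambda: defaultdict(list))
    for entity, attribute, value in triples:
        graph[entity][attribute].append(value)
    return graph

graph = build_graph([
    ("v-abc", "name", "XXX"),            # video entity and its title
    ("v-abc", "play_count", "200000"),   # heat-related attribute
    ("v-abc", "website", "example.com"), # source information
    ("p-xyz", "name", "Some Person"),    # person entity, not a video
])
print(graph["v-abc"]["name"])  # -> ['XXX']
```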
In this embodiment, the execution body may determine at least one target entity from the entities characterizing videos that are included in the pre-established knowledge graph according to various methods. As an example, the execution body may match a search word input in advance by a technician against the text information included in the attribute information of each entity characterizing a video, and determine the entities whose text information contains the search word as target entities. Alternatively, the entities may have identification information, and the execution body may determine the target entities according to identification information specified by the technician. In practice, the determined target entities may be entities whose characterized videos share certain attributes (e.g., the characterized videos are of the same type, star the same person, or come from the same website).
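Continuing the sketch above, one simple way to realize the search-word matching just described is a substring test over each video entity's attribute text; the "v-" prefix convention and the helper name are illustrative assumptions.

```python
# Sketch of step 201: match a search word supplied in advance against the text of
# each video entity's attribute information (graph: entity -> attribute -> values).
def determine_target_entities(graph, search_word):
    targets = []
    for entity, attributes in graph.items():
        if not entity.startswith("v-"):   # keep only entities characterizing videos
            continue
        text = " ".join(value for values in attributes.values() for value in values)
        if search_word in text:           # simple substring match on attribute text
            targets.append(entity)
    return targets
```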
Step 202, for a target entity in at least one target entity, determining at least one hot degree data from attribute information of the target entity; and scoring the video characterized by the target entity based on the determined at least one heat data.
In this embodiment, for a target entity of the at least one target entity, the executing entity may perform the following steps:
at step 2021, at least one heat data is determined from the attribute information of the target entity.
The heat data characterizes the degree of attention received by the video represented by the target entity. It should be noted that heat data may be a numerical value, such as a play count or a click count; when the heat data is a numerical value, a higher value indicates a higher degree of attention received by the video. The heat data may also be non-numeric data, such as information characterizing users' ratings of the video (e.g., "favorable", "neutral", "unfavorable").
In some optional implementations of this embodiment, the heat data may include, but is not limited to, at least one of: play quantity data of the video represented by the target entity, attention quantity data of the video represented by the target entity, comment quantity data of the video represented by the target entity, and the like. The heat data may be obtained in various ways. For example, the play quantity data may be obtained by the execution body from the web page indicated by a web address included in the attribute information of the target entity. In addition, the play quantity data may be the actual play count within a preset time period (for example, the last day), or the ratio of that actual play count to the total play count of the videos on the website where the video is located. The attention quantity data may be the number of users on the website who have followed, favorited, or clicked on the video represented by the target entity. The comment quantity data may be the number of comments made by users on the video represented by the target entity on the website. It should be noted that the attribute information of the target entity may include more than one web address; accordingly, each kind of heat data may be the sum of the heat data acquired by the execution body from the web pages indicated by the respective web addresses.
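The per-address summation mentioned above can be sketched as follows; fetch_play_count is a hypothetical callback standing in for whatever page request or API call a concrete implementation would use.

```python
# Sketch of the summation: when a target entity's attribute information lists several
# web addresses, a given kind of heat data (e.g. play count) can be taken as the sum
# over the pages those addresses point to.
def total_play_count(web_addresses, fetch_play_count):
    return sum(fetch_play_count(url) for url in web_addresses)

# Example with canned values instead of real network requests:
counts = {"https://site-a.example/v1": 120000, "https://site-b.example/v1": 80000}
print(total_play_count(counts, counts.get))  # -> 200000
```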
Step 2022, scoring the video characterized by the target entity based on the determined at least one heat data.
The score may be used to characterize the degree of attention received by the video represented by the target entity; generally, a higher score indicates a higher degree of user attention to that video. As an example, when each piece of the at least one piece of heat data is a numerical value, the execution body may add these values and take the sum as the score of the target entity.
In some optional implementations of this embodiment, the executing entity may score the video represented by the target entity according to the following steps:
firstly, for the heat data in the determined at least one heat data, obtaining a score corresponding to the heat data based on a preset conversion mode corresponding to the heat data. Specifically, the conversion modes corresponding to the respective heat data may be the same or different. In one aspect, when the heat data is a numerical value, the heat data may be scaled to be within a preset numerical range. For example, the preset numerical range is [0,1], and if certain heat data is the playing amount of the video of the current day, the score corresponding to the heat data may be a ratio of the playing amount to a preset maximum playing amount, and if the ratio is greater than 1, the score is determined to be 1. On the other hand, when the heat data is non-numerical data, the non-numerical data may be converted into a score within a preset numerical range. For example, when the popularity data is the rating of the video by the user, the rating may be mapped into the numerical range [0,1] according to a preset rule (e.g., a score of 1 is good, a score of 0.5 is medium, and a score of 0 is poor).
Then, based on the preset weight values corresponding to the heat data, a weighted summation is performed on the determined scores to obtain the score of the video represented by the target entity. Specifically, each piece of heat data may correspond to a preset weight value, and heat data with a larger weight value contributes more to the score of the video represented by the target entity. For example, if the at least one piece of heat data includes a play count, an attention count, and a comment count with corresponding weight values of 0.6, 0.2, and 0.2, then the score of the video represented by the target entity is 0.6 × the play-count score + 0.2 × the attention-count score + 0.2 × the comment-count score. By executing this implementation, the score of the video represented by the target entity is calculated according to the weight values corresponding to the heat data, which can improve the accuracy of determining the score from the heat data.
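Under the stated assumptions (numeric heat data scaled into a preset range and capped, non-numeric ratings mapped through a preset table, weights of 0.6/0.2/0.2), the conversion-and-weighted-summation scheme can be sketched as follows; all constants, field names, and maximum values are illustrative.

```python
# Minimal sketch of the optional scoring scheme: convert each piece of heat data to a
# score in [0, 1], then combine the scores with a weighted sum.
RATING_SCORES = {"favorable": 1.0, "neutral": 0.5, "unfavorable": 0.0}

def convert(value, max_value=None):
    """Convert one piece of heat data into a score within [0, 1]."""
    if isinstance(value, (int, float)):
        return min(value / max_value, 1.0)   # scale against a preset maximum, cap at 1
    return RATING_SCORES[value]              # map non-numeric ratings via a preset table

def score_video(heat_data, weights, max_values):
    """Weighted sum of the converted heat data for one target entity's video."""
    return sum(
        weights[name] * convert(value, max_values.get(name))
        for name, value in heat_data.items()
    )

score = score_video(
    {"play": 200000, "attention": 30000, "comment": 8000},
    weights={"play": 0.6, "attention": 0.2, "comment": 0.2},  # weights from the text
    max_values={"play": 1000000, "attention": 100000, "comment": 50000},
)
print(round(score, 3))  # 0.6*0.2 + 0.2*0.3 + 0.2*0.16 -> 0.212
```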
Step 203: ranking the at least one target entity according to the magnitude of the obtained scores, and outputting related information of the target entities of the at least one target entity according to the ranking.
In this embodiment, the execution body may rank the at least one target entity according to the magnitude of the scores obtained in step 202, and output related information of the target entities according to the ranking. The related information may be information included in the attribute information of a target entity, or other information related to the target entity (for example, the sequence number of the target entity after ranking). As an example, the attribute information may include various categories of sub-information, and each piece of sub-information may have a corresponding identifier or sequence number to distinguish its category. The execution body may extract sub-information of preset categories from the attribute information as the related information. Outputting the ranked related information of the target entities allows the target entities to be presented in a more targeted way, and a user can view the videos represented by the target entities in the order of the related information.
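A minimal sketch of this ranking-and-output step, assuming the related information has already been extracted into a dictionary per target entity; the field names are illustrative.

```python
# Sketch of step 203: rank target entities by score in descending order and emit the
# preset categories of related information (title, heat data, ...) in that order.
def output_related_info(scores, related_info):
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [
        {"rank": position + 1, "entity": entity, **related_info[entity]}
        for position, entity in enumerate(ranked)
    ]

print(output_related_info(
    {"v-abc": 0.212, "v-def": 0.35},
    {"v-abc": {"title": "XXX", "heat": 200000},
     "v-def": {"title": "YYY", "heat": 350000}},
))
```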
In some optional implementations of this embodiment, the relevant information of the target entity may include, but is not limited to, at least one of the following: the title of the video of the target entity representation, the images (such as video screenshots, stills, poster pictures and the like) included in the video of the target entity representation, and the corresponding heat data (such as playing amount, attention amount and the like) of the target entity.
Alternatively, the execution body may output the related information of the target entity in various manners. For example, the related information of the target entity is displayed on a display connected to the execution main body, or the related information of the target entity is output to a terminal device (such as the terminal device shown in fig. 1) communicatively connected to the execution main body.
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for outputting information according to this embodiment. In the application scenario of FIG. 3, the server 301 first determines three target entities 303, 304, 305 from the entities characterizing videos that are included in the pre-established knowledge graph 302. The target entities 303, 304, 305 are the entities found by the server 301 among the video entities of the knowledge graph 302 according to the search term "movies starring Li XX" input by a technician; that is, the three target entities each represent a movie starring Li XX. Then, from the attribute information of each target entity, the server 301 determines the single-day play count of the represented video on its website as the heat data, i.e., 3031 (a play count of 200,000), 3041 (a play count of 180,000), and 3051 (a play count of 100,000) in the figure. The server 301 then divides each play count by 10,000, obtaining scores 3032 (i.e., "20"), 3042 (i.e., "18"), and 3052 (i.e., "10") for the target entities 303, 304, 305 respectively. Finally, the server sorts the target entities 303, 304, 305 in descending order of the obtained scores and outputs the ranked related information of each target entity to a display 306 connected to the server 301. For example, the movie title "XXX" and play count "200,000" of the video represented by the target entity 303, the movie title "YYY" and play count "180,000" of the video represented by the target entity 304, and the movie title "ZZZ" and play count "100,000" of the video represented by the target entity 305 are displayed on the display 306.
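The arithmetic of this scenario can be reproduced directly; the entity names below are placeholders for the reference numerals in the figure.

```python
# Reproducing the FIG. 3 example: single-day play counts divided by 10,000 give the
# scores, which are then sorted in descending order.
plays = {"entity_303": 200000, "entity_304": 180000, "entity_305": 100000}
scores = {entity: count / 10000 for entity, count in plays.items()}   # 20.0, 18.0, 10.0
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # -> ['entity_303', 'entity_304', 'entity_305'], the order shown on display 306
```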
The method provided by the embodiments of the present application determines at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph, determines heat data for each target entity, scores the video represented by each target entity based on the heat data, and finally ranks the target entities by score and outputs the related information of the target entities of the at least one target entity according to the ranking. The attribute information of the entities in the knowledge graph can thereby be used effectively, the accuracy of ranking the determined target entities is improved, and the related information of each target entity can be presented to a user in a targeted manner.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for outputting information is shown. The process 400 of the method for outputting information includes the steps of:
at step 401, at least one target entity is determined from the entities characterizing the video comprised by the pre-established knowledge-graph.
In this embodiment, step 401 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 402, for a target entity in at least one target entity, determining at least one hot degree data from attribute information of the target entity; and scoring the video characterized by the target entity based on the determined at least one heat data.
In this embodiment, step 402 is substantially the same as step 202 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 403, for a target entity in at least one target entity, acquiring preset authority cost information of a video represented by the target entity.
In this embodiment, for a target entity of the at least one target entity, the execution body of the method for outputting information (e.g., the server shown in FIG. 1) may obtain preset authority cost information of the video represented by the target entity. The authority cost information characterizes the cost a user would incur to obtain the playing rights of the video represented by the target entity. It may be a numerical value or other non-numeric data. As an example, the authority cost information may be the copyright price of the video represented by the target entity, or information calculated from that copyright price that characterizes how high a cost the user would pay. For example, when the ratio of the copyright price of a video A represented by a target entity to the single-day play count of video A is greater than a first preset value, the authority cost information may be the text "high"; when the ratio is less than or equal to the first preset value and greater than or equal to a second preset value, the authority cost information may be the text "medium"; and when the ratio is less than the second preset value, the authority cost information may be the text "low".
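A minimal sketch of this threshold mapping, assuming two illustrative preset values; the function and parameter names are not taken from the embodiment.

```python
# Bucket the ratio of a video's copyright price to its single-day play count into
# "high" / "medium" / "low" authority cost information. Thresholds are illustrative.
def authority_cost_label(copyright_price, daily_play_count,
                         first_preset=10.0, second_preset=1.0):
    ratio = copyright_price / daily_play_count
    if ratio > first_preset:
        return "high"
    if ratio >= second_preset:   # second_preset <= ratio <= first_preset
        return "medium"
    return "low"

print(authority_cost_label(copyright_price=500000, daily_play_count=200000))  # -> "medium"
```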
Step 404: ranking the at least one target entity according to the magnitude of the obtained scores, and outputting, according to the ranking, the related information of the target entities of the at least one target entity and the authority cost information of the videos represented by those target entities.
In this embodiment, the executing entity may first sort the at least one target entity according to the size of the obtained score, and output, according to the sort, the related information of the target entity in the at least one target entity and the authority cost information of the video represented by the target entity in the at least one target entity. The method for sorting at least one target entity in this step is substantially the same as the method for sorting at least one target entity described in step 203 in the corresponding embodiment of fig. 2, and the description of the related information of the target entity may refer to step 203 in the corresponding embodiment of fig. 2, which is not described herein again.
Optionally, the information related to the target entity and the authority cost information of the video represented by the target entity output in this step may be output to a display connected to the execution main body; or outputting the related information of the target entity and the authority cost information of the video represented by the target entity to a terminal device (such as the terminal device shown in fig. 1) in communication connection with the execution main body. Therefore, the related information of the target entity and the authority cost information of the video represented by the target entity can be displayed to the user. The method is helpful for showing more information to the user more pertinently.
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for outputting information in this embodiment highlights the steps of obtaining the authority cost information of the video represented by the target entity and outputting that authority cost information. The scheme described in this embodiment can therefore output more information and helps present the authority cost information of the video represented by the target entity to the user, so that the pertinence of the information shown to the user can be further improved.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: a determining unit 501 configured to determine at least one target entity from entities characterizing the video, which are comprised by the pre-established knowledge-graph; a scoring unit 502 configured to determine, for a target entity of the at least one target entity, at least one hot data from the attribute information of the target entity, wherein the hot data is used for characterizing the attention degree of the video characterized by the target entity; scoring the video characterized by the target entity based on the determined at least one heat data; an output unit 503 configured to sort the at least one target entity according to the size of the obtained score, and output related information of a target entity of the at least one target entity according to the sort.
In this embodiment, the determining unit 501 may determine at least one target entity from the entities characterizing videos that are included in the pre-established knowledge graph, where a target entity is an entity used to characterize a video. The pre-established knowledge graph may be stored in the apparatus 500, or may be stored in another electronic device communicatively connected to the apparatus 500. In general, the entities in a knowledge graph may be used to characterize things or concepts (e.g., people, places, times, information, etc.). The form of an entity may include at least one of: numbers, words, symbols, etc. In this embodiment, the knowledge graph includes entities used to characterize videos. As an example, a pre-established entity used to characterize a video may be "v-abc", where "v" indicates that the entity characterizes a video and "abc" is an identifier of the characterized video. In addition, the knowledge graph of this embodiment may also include entities characterizing things or concepts other than videos. For example, a pre-established entity characterizing a person may be "p-xyz", where "p" indicates that the entity characterizes a person and "xyz" is an identifier of the characterized person.
The entities that characterize videos may have corresponding attribute information. The attribute information may be information related to the video characterized by the entity and may include, but is not limited to, at least one of: person information related to the video (e.g., producer, actors, director), time information related to the video (e.g., release time, shooting time), source information of the video (e.g., the play address of the video, the name of the website where the video is located), and other information related to the video content (e.g., a synopsis, stills, poster images), etc. Generally, in a knowledge graph the correspondence between an entity and its attribute information may be represented by a data structure in the form of a triple, i.e., "entity-attribute-attribute value", so that the attribute information of an entity may include the above-mentioned attribute and attribute value. For example, a triple may be "abc123-name-XXX", where "abc123" is an entity used to characterize the movie "XXX", "name" is an attribute, and "XXX" is an attribute value.
In this embodiment, the determining unit 501 may determine at least one target entity from the entities characterizing videos that are included in the pre-established knowledge graph according to various methods. As an example, the determining unit 501 may match a search word input by a technician against the text information included in the attribute information of each entity characterizing a video, and determine the entities whose text information contains the search word as target entities. Alternatively, the entities may have identification information, and the determining unit 501 may determine the target entities according to identification information specified by the technician. In practice, the determined target entities may be entities whose characterized videos share certain attributes (e.g., the characterized videos are of the same type, star the same person, or come from the same website).
In this embodiment, for a target entity of the at least one target entity, the scoring unit 502 may perform the following steps:
first, at least one hot data is determined from the attribute information of the target entity.
The heat data characterizes the degree of attention received by the video represented by the target entity. It should be noted that heat data may be a numerical value, such as a play count or a click count. The heat data may also be non-numeric data, such as information characterizing users' ratings of the video (e.g., "favorable", "neutral", "unfavorable"). As an example, when the heat data is a numerical value, a higher value indicates a higher degree of attention received by the video.
The video characterized by the target entity is then scored based on the determined at least one heat data. As an example, when each of the at least one hot data is a numerical value, the scoring unit 502 may add the numerical values, and determine the result of the addition as the score value of the target entity.
In this embodiment, the output unit 503 may sort the at least one target entity according to the size of each score obtained by the scoring unit 502, and output the related information of the target entity in the at least one target entity according to the sort. The related information may be information included in the attribute information of the target entity, or may be other information related to the target entity (for example, a sequence number of the sorted target entity). As an example, the attribute information may include various types of sub information, and the sub information may have a corresponding identification or sequence number to distinguish the category of the sub information. The output unit 503 may extract sub information of a preset category from the attribute information as related information. The output sorted related information of the target entities can show the target entities more pertinently, and a user can view videos represented by the target entities according to the sequence of the related information.
In some optional implementations of this embodiment, the scoring unit 502 may include: a conversion module (not shown in the figure) configured to, for the heat data in the determined at least one heat data, obtain a score corresponding to the heat data based on a preset conversion manner corresponding to the heat data; and the calculating module (not shown in the figure) is configured to perform weighted summation on the determined scores based on preset weight values corresponding to the heat data to obtain the scores of the videos represented by the target entities.
In some optional implementations of this embodiment, the apparatus 500 may further include: and the obtaining unit (not shown in the figure) is configured to obtain preset authority cost information of the video represented by the target entity for the target entity in at least one target entity.
In some optional implementations of this embodiment, the output unit 503 may be further configured to: and outputting the related information of the target entity in the at least one target entity and the authority cost information of the video represented by the target entity in the at least one target entity according to the sorting.
In some optional implementations of this embodiment, the heat data may include at least one of: play quantity data of the video represented by the target entity, attention quantity data of the video represented by the target entity, and comment quantity data of the video represented by the target entity.
In some optional implementations of this embodiment, the related information of the target entity may include at least one of: the title of the video represented by the target entity, the image included in the video represented by the target entity and the corresponding heat data of the target entity.
The apparatus provided by the above embodiment of the application determines at least one target entity from the entities characterizing videos that are included in a pre-established knowledge graph, determines heat data for each target entity, scores the video represented by each target entity based on the heat data, and finally ranks the target entities by score and outputs the related information of the target entities of the at least one target entity according to the ranking. The attribute information of the entities in the knowledge graph can thereby be used effectively, the accuracy of ranking the determined target entities is improved, and the related information of each target entity can be presented to a user in a targeted manner.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read therefrom can be installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, and may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a determination unit, a scoring unit, and an output unit. Where the names of the units do not in some cases constitute a limitation of the units themselves, for example, a determination unit may also be described as a "unit that determines at least one target entity from among the entities characterizing the video comprised by the pre-established knowledge-graph".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the server described in the above embodiments; or may exist separately and not be assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: determining at least one target entity from entities characterizing the video included in the pre-established knowledge graph; for a target entity in at least one target entity, determining at least one piece of heat data from the attribute information of the target entity, wherein the heat data is used for representing the attention degree of a video represented by the target entity; scoring the video characterized by the target entity based on the determined at least one heat data; and sorting the at least one target entity according to the obtained grading size, and outputting the related information of the target entity in the at least one target entity according to the sorting.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method for outputting information, comprising:
determining at least one target entity from entities which characterize videos and are included in a pre-established knowledge graph, wherein the knowledge graph includes the entities which characterize the videos and attribute information corresponding to the entities, and the videos which are characterized by the target entities in the at least one target entity have the same attribute;
for a target entity in the at least one target entity, determining at least one piece of heat data from the attribute information of the target entity, wherein the heat data is used for representing the attention degree of a video represented by the target entity; based on the determined at least one piece of heat data, scoring the video represented by the target entity, wherein the target entity is determined based on a search word input in advance;
and ranking the at least one target entity according to the magnitude of the obtained scores, and outputting related information of the target entities of the at least one target entity according to the ranking.
2. The method of claim 1, wherein scoring the video characterized by the target entity based on the determined at least one heat data comprises:
obtaining a score corresponding to the heat data based on a preset conversion mode corresponding to the heat data for the heat data in the determined at least one heat data;
and based on a preset weight value corresponding to the heat data, carrying out weighted summation on the determined scores to obtain the scores of the videos represented by the target entity.
3. The method of claim 1, wherein prior to said ranking said at least one target entity by the size of the derived score, the method further comprises:
and acquiring preset authority cost information of the video represented by the target entity for the target entity in the at least one target entity.
4. The method of claim 3, wherein said outputting information about target entities of said at least one target entity in said order comprises:
and outputting the related information of the target entity in the at least one target entity and the authority cost information of the video represented by the target entity in the at least one target entity according to the sorting.
5. The method of claim 1, wherein the heat data comprises at least one of: play quantity data of the video represented by the target entity, attention quantity data of the video represented by the target entity, and comment quantity data of the video represented by the target entity.
6. The method according to one of claims 1 to 5, wherein the related information of the target entity comprises at least one of: the title of the video represented by the target entity, the image included in the video represented by the target entity and the corresponding heat data of the target entity.
7. An apparatus for outputting information, comprising:
the video processing device comprises a determining unit, a processing unit and a processing unit, wherein the determining unit is configured to determine at least one target entity from entities which characterize videos and are included in a pre-established knowledge graph, the entities which characterize the videos and attribute information corresponding to the entities are included in the knowledge graph, and the videos which are characterized by the target entities in the at least one target entity have the same attribute;
the scoring unit is configured to determine at least one piece of heat data from the attribute information of the target entity for the target entity in the at least one target entity, wherein the heat data is used for representing the attention degree of the video represented by the target entity; based on the determined at least one piece of heat data, scoring the video represented by the target entity, wherein the target entity is determined based on a search word input in advance;
and the output unit is configured to sort the at least one target entity according to the size of the obtained scores and output the related information of the target entity in the at least one target entity according to the sort.
8. The apparatus of claim 7, wherein the scoring unit comprises:
a conversion module, configured to obtain, for each piece of heat data in the determined at least one piece of heat data, a score corresponding to the heat data based on a preset conversion mode corresponding to the heat data;
and a calculating module, configured to perform a weighted summation on the obtained scores based on preset weight values corresponding to the respective pieces of heat data, to obtain the score of the video characterized by the target entity.
9. The apparatus of claim 7, wherein the apparatus further comprises:
an acquisition unit, configured to acquire, for a target entity in the at least one target entity, preset authority cost information of the video characterized by the target entity.
10. The apparatus of claim 9, wherein the output unit is further configured to:
output, according to the sorting, the related information of the target entity in the at least one target entity and the authority cost information of the video characterized by the target entity in the at least one target entity.
11. The apparatus of claim 7, wherein the heat data comprises at least one of: play amount data of the video characterized by the target entity, attention amount data of the video characterized by the target entity, and comment amount data of the video characterized by the target entity.
12. The apparatus according to one of claims 7-11, wherein the related information of the target entity comprises at least one of: a title of the video characterized by the target entity, an image included in the video characterized by the target entity, and heat data corresponding to the target entity.
13. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
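For readers approaching claims 1 and 2 from an implementation angle, the scoring and sorting steps amount to: convert each piece of heat data into a score through its preset conversion mode, take a weighted sum using the preset weight values, then sort the target entities by the resulting scores and output their related information in that order. The Python sketch below illustrates that reading only; the entity fields, the logarithmic conversion modes, and the weight values are illustrative assumptions, not the claimed implementation.

    import math

    # Hypothetical attribute information of target entities characterizing videos that
    # share the same attribute (e.g., episodes of one series); field names are assumed.
    target_entities = [
        {"title": "Video A", "play_amount": 1_200_000, "attention_amount": 8_000, "comment_amount": 15_000},
        {"title": "Video B", "play_amount": 300_000, "attention_amount": 20_000, "comment_amount": 4_000},
        {"title": "Video C", "play_amount": 900_000, "attention_amount": 5_000, "comment_amount": 9_000},
    ]

    # Preset conversion modes: map each kind of heat data onto a common scale
    # (a logarithmic squashing is chosen here purely for illustration).
    conversion_modes = {
        "play_amount": lambda v: math.log10(v + 1),
        "attention_amount": lambda v: math.log10(v + 1),
        "comment_amount": lambda v: math.log10(v + 1),
    }

    # Preset weight values corresponding to each kind of heat data.
    weights = {"play_amount": 0.5, "attention_amount": 0.2, "comment_amount": 0.3}

    def score_video(entity):
        """Convert each piece of heat data into a score, then take the weighted sum."""
        total = 0.0
        for kind, convert in conversion_modes.items():
            if kind in entity:  # a given piece of heat data may be absent
                total += weights[kind] * convert(entity[kind])
        return total

    # Sort the target entities by score and output their related information in that order.
    for entity in sorted(target_entities, key=score_video, reverse=True):
        print(f"{entity['title']}: score = {score_video(entity):.3f}")

A logarithmic conversion is used here only because raw play amounts span several orders of magnitude; any monotone mapping onto a common scale, or a lookup table, would fit the "preset conversion mode" language equally well.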
CN201811015493.0A 2018-08-31 2018-08-31 Method and apparatus for outputting information Active CN109255037B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811015493.0A CN109255037B (en) 2018-08-31 2018-08-31 Method and apparatus for outputting information
PCT/CN2018/115949 WO2020042376A1 (en) 2018-08-31 2018-11-16 Method and apparatus for outputting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811015493.0A CN109255037B (en) 2018-08-31 2018-08-31 Method and apparatus for outputting information

Publications (2)

Publication Number Publication Date
CN109255037A (en) 2019-01-22
CN109255037B (en) 2022-03-08

Family

ID=65050455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811015493.0A Active CN109255037B (en) 2018-08-31 2018-08-31 Method and apparatus for outputting information

Country Status (2)

Country Link
CN (1) CN109255037B (en)
WO (1) WO2020042376A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334265A (en) * 2019-07-10 2019-10-15 杭州二更网络科技有限公司 A kind of novel video methods of marking and device
CN110555627B (en) * 2019-09-10 2022-06-10 拉扎斯网络科技(上海)有限公司 Entity display method and device, storage medium and electronic equipment
CN113468367A (en) * 2020-03-31 2021-10-01 百度在线网络技术(北京)有限公司 Method and device for generating service information
CN111565316B (en) * 2020-07-15 2020-10-23 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN114245220B (en) * 2020-09-09 2023-05-16 中国联合网络通信集团有限公司 Online video evaluation method, device, computer equipment and storage medium
CN114173199B (en) * 2021-11-24 2024-02-06 深圳Tcl新技术有限公司 Video output method and device, intelligent equipment and storage medium

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9948989B1 (en) * 2004-07-21 2018-04-17 Cox Communications, Inc. Interactive media content listing search and filtering system for a media content listing display system such as an electronic programming guide
US10025776B1 (en) * 2013-04-12 2018-07-17 Amazon Technologies, Inc. Language translation mediation system
US9830631B1 (en) * 2014-05-02 2017-11-28 A9.Com, Inc. Image recognition result culling
CN104063448B (en) * 2014-06-18 2017-02-01 华东师范大学 Distributed type microblog data capturing system related to field of videos
CN104462505A (en) * 2014-12-19 2015-03-25 北京奇虎科技有限公司 Search method and device
CN104462508A (en) * 2014-12-19 2015-03-25 北京奇虎科技有限公司 Character relation search method and device based on knowledge graph
CN104951563A (en) * 2015-07-08 2015-09-30 北京理工大学 Method and device for determining to-be-recommended objects
CN105068661B (en) * 2015-09-07 2018-09-07 百度在线网络技术(北京)有限公司 Man-machine interaction method based on artificial intelligence and system
CN105868237A (en) * 2015-12-09 2016-08-17 乐视网信息技术(北京)股份有限公司 Multimedia data recommendation method and server
US20200327122A1 (en) * 2016-03-30 2020-10-15 Push Technology Limited Conflation of topic selectors
CN106777274B (en) * 2016-06-16 2018-05-29 北京理工大学 A kind of Chinese tour field knowledge mapping construction method and system
CN107783973B (en) * 2016-08-24 2022-02-25 慧科讯业有限公司 Method, device and system for monitoring internet media event based on industry knowledge map database
CN106844603B (en) * 2017-01-16 2021-05-11 竹间智能科技(上海)有限公司 Entity popularity calculation method and device, and application method and device
CN108346075B (en) * 2017-01-24 2024-06-18 北京京东尚科信息技术有限公司 Information recommendation method and device
CN107153641B (en) * 2017-05-08 2021-01-12 北京百度网讯科技有限公司 Comment information determination method, comment information determination device, server and storage medium
CN107066621B (en) * 2017-05-11 2022-11-08 腾讯科技(深圳)有限公司 Similar video retrieval method and device and storage medium
CN108241727A (en) * 2017-09-01 2018-07-03 新华智云科技有限公司 News reliability evaluation method and equipment
CN107908653A (en) * 2017-10-12 2018-04-13 阿里巴巴集团控股有限公司 A kind of data processing method and device
CN107679217B (en) * 2017-10-19 2021-12-07 北京百度网讯科技有限公司 Associated content extraction method and device based on data mining
CN107944025A (en) * 2017-12-12 2018-04-20 北京百度网讯科技有限公司 Information-pushing method and device
CN108153901B (en) * 2018-01-16 2022-04-19 北京百度网讯科技有限公司 Knowledge graph-based information pushing method and device
CN108256070B (en) * 2018-01-17 2022-07-15 北京百度网讯科技有限公司 Method and apparatus for generating information

Also Published As

Publication number Publication date
WO2020042376A1 (en) 2020-03-05
CN109255037A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109255037B (en) Method and apparatus for outputting information
CN107679211B (en) Method and device for pushing information
US20190042585A1 (en) Method of and system for recommending media objects
JP2021103543A (en) Use of machine learning for recommending live-stream content
CN109255036B (en) Method and apparatus for outputting information
CN111125574B (en) Method and device for generating information
CN109271556B (en) Method and apparatus for outputting information
CN109271557B (en) Method and apparatus for outputting information
CN111800671B (en) Method and apparatus for aligning paragraphs and video
CN108509611B (en) Method and device for pushing information
CN109255035B (en) Method and device for constructing knowledge graph
US11126682B1 (en) Hyperlink based multimedia processing
CN109862100B (en) Method and device for pushing information
CN110019948B (en) Method and apparatus for outputting information
CN110059172B (en) Method and device for recommending answers based on natural language understanding
CN111415183B (en) Method and device for processing access request
CN111897950A (en) Method and apparatus for generating information
CN112287168A (en) Method and apparatus for generating video
CN108038172B (en) Search method and device based on artificial intelligence
CN116821475A (en) Video recommendation method and device based on client data and computer equipment
US9753998B2 (en) Presenting a trusted tag cloud
Chang et al. Revisiting online video popularity: A sentimental analysis
CN109241344B (en) Method and apparatus for processing information
CN111782933B (en) Method and device for recommending booklets
CN109472028B (en) Method and device for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant