CN109271556B - Method and apparatus for outputting information - Google Patents


Publication number
CN109271556B
Authority
CN
China
Prior art keywords
entity
information
matching
video
description information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811010487.6A
Other languages
Chinese (zh)
Other versions
CN109271556A (en)
Inventor
陈大伟
刘宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811010487.6A
Publication of CN109271556A
Application granted
Publication of CN109271556B
Legal status: Active
Anticipated expiration


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for outputting information. One embodiment of the method comprises: acquiring description information for a target video; matching the description information with attribute information of entities used for representing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, wherein a matching entity is an entity whose corresponding attribute information matches the description information; and in response to determining that a matching entity exists, outputting related information of the matching entity. This embodiment can use the description information of the target video to retrieve the matching entity from the knowledge graph, which helps improve the accuracy of the output information.

Description

Method and apparatus for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
A knowledge graph (Knowledge Graph) is a knowledge base also called a semantic network, i.e. a knowledge base with a directed graph structure, where the nodes of the graph represent entities or concepts and the edges of the graph represent various semantic relationships between entities/concepts. An entity may have corresponding attribute information used to characterize certain attributes of the entity (e.g., the category, storage address, etc. of the information the entity represents). Knowledge graphs can be applied in various fields, such as information search and information recommendation. By using a knowledge graph, other entities related to an entity representing certain information can be obtained, so that other information related to that information can be obtained more accurately.
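As a minimal illustration (not taken from the patent), the directed-graph structure just described can be sketched as follows; all class, entity, and relation names here are invented for the example.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph: entities carry attribute information,
    directed edges carry semantic relations between entities."""

    def __init__(self):
        self.attributes = defaultdict(dict)  # entity -> {attribute: value}
        self.edges = defaultdict(list)       # entity -> [(relation, entity)]

    def add_entity(self, entity, **attrs):
        self.attributes[entity].update(attrs)

    def add_relation(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def related(self, entity):
        """Entities directly connected to the given entity."""
        return [tail for _, tail in self.edges[entity]]

kg = KnowledgeGraph()
kg.add_entity("v-abc", category="movie", title="XXX")
kg.add_entity("p-xyz", category="person", name="Jane Doe")
kg.add_relation("v-abc", "directed_by", "p-xyz")
print(kg.related("v-abc"))  # ['p-xyz']
```

Traversing `related("v-abc")` is the graph operation that makes it possible to obtain "other entities related to the entity representing certain information", as described above.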
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, where the method includes: acquiring description information for a target video; matching the description information with attribute information of entities used for representing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, wherein a matching entity is an entity whose corresponding attribute information matches the description information; and in response to determining that a matching entity exists, outputting related information of the matching entity.
In some embodiments, the description information includes at least one field; and matching the description information with the attribute information of entities in the pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph includes: selecting a field from the at least one field as a target field; and for an entity, among the entities included in the knowledge graph, that is used for characterizing a video, determining whether the attribute information of the entity matches the target field, and in response to determining a match, determining the entity as a matching entity.
In some embodiments, after matching the description information with the attribute information of the entities in the pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, the method further includes: in response to determining that no entity whose corresponding attribute information matches the description information exists in the knowledge graph, determining, for an entity among the entities included in the knowledge graph and used for characterizing videos, a similarity between the attribute information of the entity and the description information; and in response to determining that the similarity is greater than or equal to a preset similarity threshold, outputting related information of the entity.
In some embodiments, the related information of the matching entity includes at least one of: a title of the video represented by the matching entity, version information of the video represented by the matching entity, and information about persons related to the video represented by the matching entity.
In some embodiments, after outputting the related information of the matching entity, the method further includes: in response to receiving an association operation instruction, establishing an association relationship between the target video and the matching entity, wherein the association operation instruction is generated by a target user terminal performing an association operation on the target video and the matching entity.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: an acquisition unit configured to acquire description information for a target video; a first determining unit configured to match the description information with attribute information of entities used for representing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, wherein a matching entity is an entity whose corresponding attribute information matches the description information; and an output unit configured to output related information of the matching entity in response to determining that the matching entity exists.
In some embodiments, the description information includes at least one field; and the first determining unit includes: a selection module configured to select a field from the at least one field as a target field; and a determination module configured to determine, for an entity among the entities included in the knowledge graph and used for characterizing videos, whether the attribute information of the entity matches the target field, and in response to determining a match, determine the entity as a matching entity.
In some embodiments, the apparatus further includes: a second determining unit configured to, in response to determining that no entity whose corresponding attribute information matches the description information exists, determine, for an entity among the entities included in the knowledge graph and used for characterizing videos, a similarity between the attribute information of the entity and the description information, and output related information of the entity in response to determining that the similarity is greater than or equal to a preset similarity threshold.
In some embodiments, the related information of the matching entity includes at least one of: a title of the video represented by the matching entity, version information of the video represented by the matching entity, and information about persons related to the video represented by the matching entity.
In some embodiments, the apparatus further includes: an establishing unit configured to establish an association relationship between the target video and the matching entity in response to receiving an association operation instruction, wherein the association operation instruction is generated by a target user terminal performing an association operation on the target video and the matching entity.
In a third aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
According to the method and apparatus for outputting information provided by the embodiments of the application, description information for a target video is acquired, and the description information is then matched with the attribute information of entities in a pre-established knowledge graph to determine whether a matching entity, whose corresponding attribute information matches the description information, exists in the knowledge graph; if such an entity exists, related information of the matching entity is output. Thus, a matching entity can be retrieved from the knowledge graph using the description information of the target video, which helps improve the accuracy of the output information.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for outputting information, according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for outputting information according to an embodiment of the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for outputting information according to an embodiment of the present application;
FIG. 5 is a block diagram of one embodiment of an apparatus for outputting information in accordance with an embodiment of the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which a method for outputting information or an apparatus for outputting information of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a data processing application, a video playing application, a web browser application, an instant messaging tool, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server that provides various services, such as a background information processing server that processes description information of a target video uploaded by the terminal apparatuses 101, 102, 103. The background information processing server may process the acquired description information and output a processing result (e.g., related information of the matching entity).
It should be noted that the method for outputting information provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for outputting information is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present application is shown. The method for outputting information comprises the following steps:
Step 201: acquire description information for a target video.
In this embodiment, the execution body of the method for outputting information (for example, the server shown in fig. 1) may acquire the description information for the target video locally or remotely, via a wired or wireless connection. The target video may be a video previously specified by a technician, or a video stored in a certain video set. The description information for the target video may be stored in the execution body or in another electronic device communicatively connected to the execution body. The description information may be information for characterizing the target video, and may include, but is not limited to, at least one of the following: the title, author, category, release time, synopsis, rating, etc. of the target video.
It should be understood that the target video may be any of various types of videos, such as movies, television shows, short videos uploaded by users, and so forth.
Step 202: match the description information with attribute information of entities used for representing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph.
In this embodiment, based on the description information obtained in step 201, the execution body may match the description information with the attribute information of the entities used for characterizing videos in a pre-established knowledge graph, to determine whether a matching entity exists in the knowledge graph. A matching entity is an entity whose corresponding attribute information matches the description information. In general, an entity in a knowledge graph may be used to characterize a thing or a concept (e.g., a person, a place, a time, a piece of information, etc.). The form of an entity may include at least one of: numbers, words, symbols, etc. In this embodiment, the knowledge graph may include entities for characterizing videos; as an example, a pre-established entity for characterizing a video may be "v-abc", where "v" indicates that the entity characterizes a video and "abc" is an identifier of that video. In addition, the knowledge graph of this embodiment may further include entities for characterizing things or concepts other than videos; for example, a pre-established entity for characterizing a person may be "p-xyz", where "p" indicates that the entity characterizes a person and "xyz" is an identifier of that person.
The attribute information of an entity used to characterize a video may be information related to the video characterized by the entity, and may include, but is not limited to, at least one of the following: person information related to the video (e.g., producer, actors, director, etc.), time information related to the video (e.g., release time, shooting time, etc.), source information of the video (e.g., the play address of the video, the name of the website where the video is located, etc.), and other information related to the video content (e.g., a video synopsis, stills, poster images, etc.). Generally, in a knowledge graph, the correspondence between an entity and its attribute information may be represented by a data structure in the form of a triple, i.e., "entity-attribute-attribute value", where the attribute information of an entity consists of the above-mentioned attribute and attribute value. For example, a triple may be "abc123-name-XXX", where "abc123" is an entity used to characterize the movie "XXX", "name" is an attribute, and "XXX" is an attribute value.
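The triple data structure described above can be sketched as follows; the entity identifier and attribute values are illustrative examples, not taken from any actual knowledge graph.

```python
# Hypothetical triple store in the "entity-attribute-attribute value" form.
triples = [
    ("abc123", "name", "XXX"),
    ("abc123", "director", "Some Director"),   # illustrative value
    ("abc123", "release_time", "2018"),
]

def attribute_info(entity, triples):
    """Collect the attribute information of one entity from the triples."""
    return {attr: value for ent, attr, value in triples if ent == entity}

print(attribute_info("abc123", triples))
# {'name': 'XXX', 'director': 'Some Director', 'release_time': '2018'}
```

Representing attributes as triples keeps entities, attributes, and values uniform, so the same store can hold both video entities and, say, person entities.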
The execution body may match the description information of the target video with the attribute information of the entities representing videos in the knowledge graph according to various methods, to obtain matching results. There may be multiple matching results, each corresponding to one entity in the knowledge graph. For example, the description information may be text, and the attribute information of an entity may include text (e.g., the names of the actors, a description of the video content, etc.). The execution body may compute the similarity between the description information of the target video and the text included in the attribute information of each entity in the knowledge graph, and determine an entity whose similarity is greater than or equal to a preset similarity threshold as a matching entity for the description information. Specifically, the execution body may calculate the similarity between the description information of the target video and the text included in the attribute information of an entity using an existing algorithm for determining text similarity (e.g., the Jaccard similarity algorithm, the cosine similarity algorithm, the simhash algorithm, or the like). Optionally, the description information may be at least one keyword, and the attribute information of an entity may include at least one keyword describing the video. The execution body may then calculate, as the matching result, the similarity between the description information and the at least one keyword corresponding to the entity using any of various existing algorithms for computing keyword similarity (e.g., the Levenshtein Distance algorithm, or a cosine distance algorithm based on the Vector Space Model (VSM)).
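One possible instantiation of the text-similarity matching just described, using Jaccard similarity over character bigrams; the threshold value and sample data are illustrative assumptions, not values from the patent.

```python
def ngrams(text, n=2):
    """Character n-grams of a string, as a set."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two strings over character bigrams."""
    sa, sb = ngrams(a), ngrams(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def matching_entities(description, entity_texts, threshold=0.4):
    """Entities whose attribute text is similar enough to the description."""
    return [entity for entity, text in entity_texts.items()
            if jaccard(description, text) >= threshold]

entity_texts = {"v-abc": "movie XXX director someone 2018",
                "v-def": "cooking show season two"}
print(matching_entities("movie XXX 2018", entity_texts))  # ['v-abc']
```

Cosine similarity over TF-IDF vectors or simhash fingerprints, as listed above, would slot into the same `matching_entities` loop in place of `jaccard`.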
The execution body may determine, according to the matching results, whether a matching entity that matches the description information of the target video exists in the knowledge graph. As an example, when the matching result is the similarity between the description information and the text included in the attribute information of an entity in the knowledge graph, an entity whose similarity is greater than or equal to a preset similarity threshold may be determined as a matching entity. In general, the video indicated by a matching entity may be the same as, or similar to, the target video. For example, the target video may be the movie "XXX"; the video indicated by the matching entity may likewise be the movie "XXX", or it may be another version of the movie "XXX" (e.g., an uncut version, a version dubbed in another language, etc.).
In some optional implementations of this embodiment, the description information may include at least one field. Wherein a field is used to characterize a certain attribute of the target video. For example, the description information may include three fields corresponding to the title, the producer, and the shooting time of the target video, respectively. The executing body may determine whether a matching entity exists in the knowledge-graph by using the field included in the description information according to the following steps:
first, the execution body selects a field from at least one field as a target field. As an example, a field may correspond to a field name or field identification. The execution body may select a field from at least one field as a target field according to a preset field identifier or a preset field name for determining the target field.
Then, for an entity, among the entities included in the knowledge graph, that is used for characterizing a video, the execution body determines whether the attribute information of the entity matches the target field, and in response to determining a match, determines the entity as a matching entity. As an example, the execution body may determine, from among the entities in the knowledge graph used for characterizing videos, an entity whose attribute information includes the target field of the description information as a matching entity. Alternatively, when the target field is text, the execution body may determine, using an algorithm for calculating text similarity, the similarity between the target field and the text included in the attribute information of each entity representing a video, and determine an entity whose similarity is greater than or equal to a preset text similarity threshold as a matching entity.
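The field-based matching just described could look roughly like this; the field names, entity data, and substring-containment test are illustrative choices, not the patent's actual implementation.

```python
def find_matching_entities(description, kg_attrs, target_field="title"):
    """Pick the target field from the description by a preset field name,
    then match it against each video entity's attribute information
    (here, by simple substring containment)."""
    target = description.get(target_field)
    if target is None:
        return []
    return [entity for entity, attrs in kg_attrs.items()
            if any(target in str(value) for value in attrs.values())]

description = {"title": "XXX", "producer": "Studio A", "shot": "2018-01"}
kg_attrs = {"v-abc": {"name": "movie XXX uncut"},
            "v-def": {"name": "cooking show"}}
print(find_matching_entities(description, kg_attrs))  # ['v-abc']
```

The containment test could be replaced by the text-similarity comparison with a preset threshold, as the alternative above describes.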
Step 203: in response to determining that a matching entity exists, output related information of the matching entity.
In this embodiment, the execution body may output the related information of the matching entity in response to determining that the matching entity exists in the knowledge graph. The related information may be information included in the attribute information of the matching entity, or other information related to the matching entity (for example, previously collected user comments on, and ratings of, the video represented by the matching entity). As an example, the attribute information may include various categories of sub information, and each piece of sub information may have a corresponding identifier or sequence number to distinguish its category. The execution body may extract sub information of preset categories from the attribute information as the related information.
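The extraction of preset-category sub information as related information might be sketched as follows; the category names and attribute values are invented for the example.

```python
def related_info(attribute_info, preset_categories):
    """Keep only the sub information whose category tag is preset
    for output as related information."""
    return {category: value for category, value in attribute_info.items()
            if category in preset_categories}

attrs = {"title": "XXX",
         "version": "86 year version",
         "play_address": "http://example.com/xxx",   # illustrative URL
         "internal_id": "abc123"}
print(related_info(attrs, {"title", "version"}))
# {'title': 'XXX', 'version': '86 year version'}
```

Here the category tag doubles as the attribute name; with numeric sequence numbers, as the text also allows, the filter condition would compare those instead.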
Alternatively, the execution body may output the related information of the matching entity in various ways, for example, by displaying it on a display connected to the execution body, or by outputting it to another electronic device communicatively connected to the execution body.
In some optional implementations of this embodiment, the related information of the matching entity may include, but is not limited to, at least one of the following: the title of the video represented by the matching entity, version information of the video represented by the matching entity (e.g., "cut version", "1986 version", etc.), and information about persons related to the video represented by the matching entity (e.g., actor names, director names, etc.).
In some optional implementations of this embodiment, in response to determining that no entity whose corresponding attribute information matches the description information exists in the knowledge graph, the execution body may, for an entity among the entities included in the knowledge graph and used for characterizing videos, first determine the similarity between the attribute information of the entity and the description information. The method for determining this similarity may be the same as any of the similarity-calculation methods listed in step 202 for determining the matching result, and is not repeated here. Then, the execution body outputs the related information of the entity in response to determining that the similarity is greater than or equal to a preset similarity threshold. Specifically, this similarity threshold may be smaller than the similarity threshold used in determining the matching result in step 202, so that an entity whose similarity is greater than or equal to this threshold can be determined as an entity related to the target video. It should be understood that the video indicated by an entity related to the target video may be a video sharing some of the same or similar attributes with the target video, for example, a video clip cut from the target video, or a video whose related-person information (e.g., the director) is the same as that of the target video.
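The two-tier fallback described above, with a strict threshold defining matching entities and a looser threshold surfacing merely related ones when no match exists, can be sketched as follows; thresholds and scores are illustrative.

```python
def match_or_related(scores, match_threshold=0.9, related_threshold=0.6):
    """Return ('match', entities) above the strict threshold, or, if no
    entity matches, ('related', entities) above the looser threshold."""
    matches = [e for e, s in scores.items() if s >= match_threshold]
    if matches:
        return "match", matches
    return "related", [e for e, s in scores.items() if s >= related_threshold]

print(match_or_related({"v-abc": 0.72, "v-def": 0.31}))
# ('related', ['v-abc'])
```

Keeping `related_threshold < match_threshold` is what lets partially overlapping videos (clips, same-director works) surface when an exact match is absent.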
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to this embodiment. In the application scenario of fig. 3, the server 301 first acquires locally pre-stored description information 3021 for the target video 302, where the target video 302 is the movie "XXX". The description information 3021 includes the title of the movie "XXX", the names of the actors, the name of the director, and so on. Then, the server 301 matches the description information 3021 with the attribute information of the entities used for characterizing videos in the preset knowledge graph 303, obtaining a matching result for each entity; here, the matching result is the text similarity between the text included in the attribute information of the entity and the description information 3021. Next, the server 301 determines that the text similarity between the description information 3021 and the text included in the attribute information of the entities 3031 and 3032 in the knowledge graph 303 is greater than a preset text similarity threshold (for example, 90%), and therefore determines that both 3031 and 3032 are matching entities. Finally, the related information of the matching entities 3031 and 3032 (for example, the movie "XXX one" represented by the matching entity 3031 and the name "A net" of the website where that movie is located, and the movie "XXX two" represented by the matching entity 3032 and the name "A net" of the website where that movie is located) is displayed on the display 304 connected to the server 301.
According to the method provided by this embodiment of the application, description information for a target video is acquired, and the description information is then matched with the attribute information of entities in a pre-established knowledge graph to determine whether a matching entity, whose corresponding attribute information matches the description information, exists in the knowledge graph; if such an entity exists, related information of the matching entity is output. Thus, a matching entity can be retrieved from the knowledge graph using the description information of the target video, which helps improve the accuracy of the output information.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for outputting information is shown. The process 400 of the method for outputting information includes the steps of:
Step 401: acquire description information for a target video.
In this embodiment, step 401 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 402: match the description information with the attribute information of entities in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph.
In this embodiment, step 402 is substantially the same as step 202 in the embodiment corresponding to fig. 2, and is not described here again.
Step 403: in response to determining that a matching entity exists, output related information of the matching entity.
In this embodiment, step 403 is substantially the same as step 203 in the embodiment corresponding to fig. 2, and is not described here again.
Step 404: in response to receiving an association operation instruction, establish an association relationship between the target video and the matching entity.
In this embodiment, the execution body of the method for outputting information (e.g., the server shown in fig. 1) may establish an association relationship between the target video and the matching entity in response to receiving an association operation instruction. The association operation instruction is generated by the target user terminal performing an association operation on the target video and the matching entity. Specifically, the target user terminal is a terminal used by the target user, who may be a user with the authority to perform the association operation. As an example, the related information of the matching entity may be output to the target user terminal; the target user may review the related information of the matching entity and determine whether the video indicated by that information is associated with the target video. If so, the target user may cause the target user terminal to perform an association operation, for example by clicking or entering a command, so as to generate an association operation instruction and send it to the execution body.
In this embodiment, the execution body may establish the association relationship between the target video and the matching entity according to various methods. For example, the execution body may obtain a link to the storage address of the target video and add the link to the attribute information of the matching entity, thereby establishing an association relationship between the target video and the matching entity. Alternatively, the target video may correspond to attribute information including various attributes for characterizing it (e.g., title, author, shooting time, storage address, etc.), and the execution body may merge the attribute information of the target video with the attribute information of the matching entity to establish the association relationship between the two.
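The first of the two approaches above can be sketched as follows. This is a minimal illustration, assuming an in-memory dictionary of entity attributes; the attribute key "associated_videos" and the sample link are hypothetical names, not identifiers from the actual system.

```python
# Sketch: establish an association by adding a link to the target video's
# storage address into the matching entity's attribute information.
# The key "associated_videos" and the sample data are illustrative assumptions.

def establish_association(entity_attributes, entity_id, video_storage_link):
    """Record the target video's storage link under the matching entity."""
    attrs = entity_attributes.setdefault(entity_id, {})
    attrs.setdefault("associated_videos", []).append(video_storage_link)
    return entity_attributes

graph = {"abc123": {"name": "XXX"}}
establish_association(graph, "abc123", "https://example.com/videos/42")
print(graph["abc123"]["associated_videos"])  # -> ['https://example.com/videos/42']
```

The second approach (merging attribute information) would instead copy the target video's attributes into the matching entity's record; the sketch above shows only the link-based variant.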
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for outputting information in the present embodiment highlights the step of establishing an association relationship between the target video and the matching entity according to the received association operation instruction. Therefore, the scheme described in this embodiment can increase the number of videos represented by the entities in the knowledge graph, which is beneficial to improving the accuracy of the output information.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: an acquisition unit 501 configured to acquire description information for a target video; a first determining unit 502 configured to match the description information with attribute information of entities used for characterizing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, wherein the matching entity is an entity whose corresponding attribute information matches the description information; and an output unit 503 configured to output the relevant information of the matching entity in response to determining that a matching entity exists.
The acquisition unit 501 may acquire the description information for the target video from a remote or local location through a wired or wireless connection. The target video may be a video previously specified by a technician, or a video stored in a certain video set. The description information for the target video may be stored in the apparatus 500 or in another electronic device communicatively connected to the apparatus 500. The description information may be information for characterizing the target video, and may include, but is not limited to, at least one of the following: the name, author, category, show time, introduction, rating, etc. of the target video.
It should be understood that the target video may be various types of videos, such as movies, television shows, small videos uploaded by a user, and so forth.
In this embodiment, the first determining unit 502 may match the description information with attribute information of entities used for characterizing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph. The matching entity is an entity whose corresponding attribute information matches the description information. In general, entities in a knowledge graph may be used to characterize things or concepts (e.g., people, places, times, information, etc.). The form of an entity may include at least one of: numbers, words, symbols, etc. In this embodiment, the entities in the knowledge graph may be used to characterize videos, and may also be used to characterize other things or concepts (e.g., people, places, etc.).
The attribute information of an entity used to characterize a video may be information related to the video characterized by that entity, and may include, but is not limited to, at least one of: character information related to the video (e.g., the video producer, actors, director, etc.), time information related to the video (e.g., show time, shooting time, etc.), source information of the video (e.g., the play address of the video, the name of the website where the video is located, etc.), and other information related to the video content (e.g., a video synopsis, stills, poster pictures, etc.). Generally, in a knowledge graph, the correspondence between an entity and its attribute information may be represented by a data structure in the form of a triple, i.e., "entity-attribute-attribute value", where the attribute information of an entity may include the above-mentioned attribute and attribute value. For example, a triple may be "abc123-name-XXX", where "abc123" is an entity used to characterize the movie "XXX", "name" is an attribute, and "XXX" is an attribute value.
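The triple structure described above can be sketched in a few lines. This is a minimal illustration of the "entity-attribute-attribute value" form; the entity ID "abc123" and the attribute names are illustrative placeholders, not identifiers from the actual knowledge graph.

```python
# Sketch: storing "entity-attribute-attribute value" triples and looking up
# the attribute information of an entity. Names and values are illustrative.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # entity ID -> {attribute: attribute value}
        self._attributes = defaultdict(dict)

    def add_triple(self, entity, attribute, value):
        self._attributes[entity][attribute] = value

    def attribute_info(self, entity):
        """Return all attribute information of an entity as a dict."""
        return dict(self._attributes.get(entity, {}))

kg = KnowledgeGraph()
kg.add_triple("abc123", "name", "XXX")
kg.add_triple("abc123", "director", "Some Director")
print(kg.attribute_info("abc123")["name"])  # -> XXX
```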
The first determining unit 502 may match the description information of the target video with the attribute information of the entities in the knowledge graph according to various methods to obtain matching results. There may be multiple matching results, each corresponding to one entity in the knowledge graph. For example, the description information may be text information, and the attribute information of an entity may include text information (e.g., names of the actors, a description of the video content, etc.). The first determining unit 502 may calculate the similarity between the description information of the target video and the text information included in the attribute information of each entity in the knowledge graph, and determine an entity whose similarity is greater than or equal to a preset similarity threshold as a matching entity matching the description information. Specifically, the first determining unit 502 may calculate this similarity according to an existing algorithm for determining text similarity (e.g., a Jaccard similarity algorithm, a cosine similarity algorithm, a simhash algorithm, or the like). Optionally, the description information may be at least one keyword, and the attribute information of an entity may include at least one keyword for describing the video. The first determining unit 502 may then calculate, as a matching result, the similarity between the description information and the at least one keyword corresponding to the entity according to various existing algorithms for calculating the similarity between keywords (for example, a Levenshtein Distance algorithm, or a cosine distance algorithm based on a Vector Space Model (VSM)).
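The similarity-based matching above can be sketched as follows, using Jaccard similarity over character sets as a stand-in for any of the text-similarity algorithms mentioned (Jaccard, cosine, simhash). The threshold value and the sample attribute texts are illustrative assumptions, not values from the actual system.

```python
# Sketch: compute similarity between the description information and each
# entity's attribute text, then keep entities at or above a preset threshold.
# Jaccard over character sets stands in for the algorithms named in the text.

def jaccard_similarity(a, b):
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def find_matching_entities(description, entity_texts, threshold=0.5):
    """Return entities whose attribute text is similar enough to the description."""
    scores = {e: jaccard_similarity(description, t) for e, t in entity_texts.items()}
    return [e for e, s in scores.items() if s >= threshold]

entity_texts = {
    "abc123": "movie XXX directed by some director",
    "def456": "an unrelated cooking show",
}
print(find_matching_entities("movie XXX some director", entity_texts))  # -> ['abc123']
```

A production system would tokenize into words or use embeddings rather than raw character sets, but the threshold-and-select structure is the same.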
The first determining unit 502 may determine whether a matching entity matching the description information of the target video exists in the knowledge graph according to the matching results. As an example, when a matching result is the similarity between the description information and the text information included in the attribute information of an entity in the knowledge graph, an entity whose similarity is greater than or equal to a preset similarity threshold may be determined as a matching entity. In general, the video indicated by the matching entity may be the same as or similar to the target video. For example, the target video may be the movie "XXX"; the video indicated by the matching entity may likewise be the movie "XXX", or may be another version of the movie "XXX" (e.g., an uncut version, a version dubbed in another language, etc.).
In this embodiment, the output unit 503 may output the relevant information of the matching entity in response to determining that the matching entity exists in the knowledge graph. The relevant information may be information included in the attribute information of the entity. The attribute information may include various categories of sub-information, and the sub-information may carry identifiers, sequence numbers, and the like to distinguish its category. The output unit 503 may extract sub-information of set categories from the attribute information as the relevant information.
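The extraction step above can be sketched as a filter over category identifiers. The category names ("title", "version") and sample attribute information are illustrative assumptions; the actual system's identifiers are not specified in the text.

```python
# Sketch: keep only sub-information whose category identifier is configured
# for output, as the output unit does. Category names are illustrative.

def extract_related_info(attribute_info, categories=("title", "version")):
    """Pick out sub-information of the set categories as the relevant information."""
    return {cat: val for cat, val in attribute_info.items() if cat in categories}

attribute_info = {"title": "XXX", "version": "uncut", "storage_address": "/v/42"}
print(extract_related_info(attribute_info))  # -> {'title': 'XXX', 'version': 'uncut'}
```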
In some optional implementations of this embodiment, the description information may include at least one field; and the first determining unit 502 may include: a selection module (not shown in the figures) configured to select a field from the at least one field as a target field; and a determination module (not shown in the figures) configured to determine, for an entity among the entities included in the knowledge graph for characterizing videos, whether the attribute information of the entity matches the target field, and in response to determining a match, determine the entity as a matching entity.
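The field-based variant above can be sketched as follows. Substring containment stands in for whatever matching rule the implementation actually uses, and the selection of the first field as the target field is an assumption for illustration.

```python
# Sketch: select one field of the description information as the target field,
# then test each video entity's attribute values against it. The matching rule
# (substring containment) and field choice are illustrative assumptions.

def find_matching_entity_by_field(fields, entities):
    """Return the first entity whose attribute values match the target field."""
    target_field = fields[0]  # select a field as the target field
    for entity_id, attributes in entities.items():
        if any(target_field in str(value) for value in attributes.values()):
            return entity_id
    return None

entities = {"abc123": {"name": "movie XXX"}, "def456": {"name": "other"}}
print(find_matching_entity_by_field(["XXX"], entities))  # -> abc123
```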
In some optional implementations of this embodiment, the apparatus 500 may further include a second determining unit (not shown in the figures), which may be configured to: in response to determining that no entity whose corresponding attribute information matches the description information exists in the knowledge graph, determine, for each entity included in the knowledge graph for characterizing videos, the similarity between the attribute information of that entity and the description information; and output the relevant information of an entity in response to determining that the similarity is greater than or equal to a preset similarity threshold.
In some optional implementations of this embodiment, the relevant information of the matching entity may include at least one of: the title of the video of the matching entity representation, the version information of the video of the matching entity representation, and the related personal information of the video of the matching entity representation.
In some optional implementations of this embodiment, the apparatus 500 may further include: an establishing unit (not shown in the figures) configured to establish an association relationship between the target video and the matching entity in response to receiving an association operation instruction, wherein the association operation instruction is generated by a target user terminal performing an association operation on the target video and the matching entity.
According to the apparatus provided by the embodiment of the application, description information of a target video is acquired, and the description information is then matched with attribute information of entities in a pre-established knowledge graph to determine whether a matching entity, i.e., an entity whose corresponding attribute information matches the description information, exists in the knowledge graph; if so, relevant information of the matching entity is output. The matching entity can thus be retrieved from the knowledge graph using the description information of the target video, which is beneficial to improving the accuracy of the output information.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including an acquisition unit, a first determining unit, and an output unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires description information for a target video".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the server described in the above embodiments, or may exist separately without being assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: acquire description information for a target video; match the description information with attribute information of entities used for characterizing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, wherein the matching entity is an entity whose corresponding attribute information matches the description information; and, in response to determining that a matching entity exists, output relevant information of the matching entity.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for outputting information, comprising:
acquiring description information for a target video, wherein the description information for the target video is stored in an execution body or in an electronic device communicatively connected to the execution body;
matching the description information with attribute information of entities used for characterizing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, comprising: matching the description information with the attribute information of the entities used for characterizing videos in the pre-established knowledge graph to obtain matching results, and, in response to the number of the matching results being multiple, determining whether a matching entity exists in the knowledge graph based on the similarity between the description information and text information included in the attribute information of the entities in the knowledge graph, wherein the matching entity is an entity whose corresponding attribute information matches the description information;
in response to determining that a matching entity exists, outputting relevant information of the matching entity.
2. The method of claim 1, wherein the description information comprises at least one field; and
the matching the description information with attribute information of an entity used for characterizing the video in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph comprises:
selecting a field from the at least one field as a target field;
for an entity of entities included in the knowledge-graph for characterizing the video, determining whether attribute information of the entity matches the target field, and in response to determining a match, determining the entity as a matching entity.
3. The method of claim 1, wherein after said matching said description information with attribute information of entities characterizing a video in a pre-established knowledge-graph to determine if there are matching entities in said knowledge-graph, said method further comprises:
in response to determining that there is no entity in the knowledge-graph whose corresponding attribute information matches the description information, determining, for an entity in the entities included in the knowledge-graph and used for characterizing the video, a similarity between the attribute information of the entity and the description information; and outputting the related information of the entity in response to the determination that the similarity is greater than or equal to a preset similarity threshold.
4. The method of claim 1, wherein the relevant information of the matching entity comprises at least one of: title of the video characterized by the matching entity, version information of the video characterized by the matching entity, and personal information related to the video characterized by the matching entity.
5. The method according to one of claims 1-4, wherein after said outputting the relevant information of the matching entity, the method further comprises:
and establishing an association relation between the target video and the matching entity in response to receiving an association operation instruction, wherein the association operation instruction is generated by a target user terminal executing an association operation on the target video and the matching entity.
6. An apparatus for outputting information, comprising:
an acquisition unit configured to acquire description information for a target video, wherein the description information for the target video is stored in an execution body or in an electronic device communicatively connected to the execution body;
a first determining unit configured to match the description information with attribute information of entities used for characterizing videos in a pre-established knowledge graph to determine whether a matching entity exists in the knowledge graph, comprising: matching the description information with the attribute information of the entities used for characterizing videos in the pre-established knowledge graph to obtain matching results, and, in response to the number of the matching results being multiple, determining whether a matching entity exists in the knowledge graph based on the similarity between the description information and text information included in the attribute information of the entities in the knowledge graph, wherein the matching entity is an entity whose corresponding attribute information matches the description information;
an output unit configured to output the relevant information of the matching entity in response to determining that a matching entity exists.
7. The apparatus of claim 6, wherein the description information comprises at least one field; and
the first determination unit includes:
a selection module configured to select a field from the at least one field as a target field;
a determination module configured to determine, for an entity of entities included by the knowledge-graph for characterizing the video, whether attribute information of the entity matches the target field, and in response to determining a match, determine the entity as a matching entity.
8. The apparatus of claim 6, wherein the apparatus further comprises:
a second determining unit configured to determine, in response to determining that there is no entity in the knowledge-graph whose corresponding attribute information matches the description information, a similarity of attribute information of an entity included in the knowledge-graph for characterizing the video to the description information; and outputting the related information of the entity in response to the determination that the similarity is greater than or equal to a preset similarity threshold.
9. The apparatus of claim 6, wherein the relevant information of the matching entity comprises at least one of: a title of the video characterized by the matching entity, version information of the video characterized by the matching entity, and personal information related to the video characterized by the matching entity.
10. The apparatus according to one of claims 6-9, wherein the apparatus further comprises:
the establishing unit is configured to establish an association relationship between the target video and the matching entity in response to receiving an association operation instruction, wherein the association operation instruction is generated by a target user terminal performing an association operation on the target video and the matching entity.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201811010487.6A 2018-08-31 2018-08-31 Method and apparatus for outputting information Active CN109271556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811010487.6A CN109271556B (en) 2018-08-31 2018-08-31 Method and apparatus for outputting information


Publications (2)

Publication Number Publication Date
CN109271556A CN109271556A (en) 2019-01-25
CN109271556B true CN109271556B (en) 2021-06-01

Family

ID=65155111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811010487.6A Active CN109271556B (en) 2018-08-31 2018-08-31 Method and apparatus for outputting information

Country Status (1)

Country Link
CN (1) CN109271556B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110413848B (en) * 2019-07-19 2022-04-15 上海赜睿信息科技有限公司 Data retrieval method, electronic equipment and computer-readable storage medium
CN110543574B (en) * 2019-08-30 2022-05-17 北京百度网讯科技有限公司 Knowledge graph construction method, device, equipment and medium
CN111078727A (en) * 2019-12-17 2020-04-28 Oppo广东移动通信有限公司 Brief description generation method and device and computer readable storage medium
CN111191049B (en) * 2020-01-03 2023-04-07 北京明略软件系统有限公司 Information pushing method and device, computer equipment and storage medium
CN111353070B (en) * 2020-02-18 2023-08-18 北京百度网讯科技有限公司 Video title processing method and device, electronic equipment and readable storage medium
CN113973096B (en) * 2020-07-23 2023-06-20 腾讯科技(深圳)有限公司 Data processing method and transmitting terminal
CN113360580B (en) * 2021-05-31 2023-09-26 北京百度网讯科技有限公司 Abnormal event detection method, device, equipment and medium based on knowledge graph

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462505A (en) * 2014-12-19 2015-03-25 北京奇虎科技有限公司 Search method and device
CN104462506A (en) * 2014-12-19 2015-03-25 北京奇虎科技有限公司 Method and device for establishing knowledge graph based on user annotation information
CN104462512A (en) * 2014-12-19 2015-03-25 北京奇虎科技有限公司 Chinese information search method and device based on knowledge graph
CN107169010A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of determination method and device of recommendation search keyword

Also Published As

Publication number Publication date
CN109271556A (en) 2019-01-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant