CN112182235A - Method and device for constructing knowledge graph, computer equipment and storage medium - Google Patents
Method and device for constructing knowledge graph, computer equipment and storage medium
- Publication number
- CN112182235A CN112182235A CN202010890276.7A CN202010890276A CN112182235A CN 112182235 A CN112182235 A CN 112182235A CN 202010890276 A CN202010890276 A CN 202010890276A CN 112182235 A CN112182235 A CN 112182235A
- Authority
- CN
- China
- Prior art keywords
- knowledge
- constructing
- model
- graph
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/335—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/338—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
Abstract
The application relates to a method, an apparatus, computer equipment and a storage medium for constructing a knowledge graph, wherein the method comprises the following steps: preparing a data resource; extracting knowledge from the data resources; constructing a word embedding model from the extracted knowledge; constructing a TF mapping value table from the knowledge; normalizing the knowledge with the model and the mapping table; and storing the normalized knowledge.
Description
Technical Field
The present application relates to the field of text processing technologies, and in particular, to a method and an apparatus for constructing a knowledge graph, a computer device, and a storage medium.
Background
The knowledge graph can be applied in many fields. For example, in Chinese semantic disambiguation, an ambiguous word can be disambiguated using the context information of its sentence; and in upper-layer general applications such as semantic search, dialogue understanding and knowledge question answering, the knowledge graph can expand and reason over semantic information.
In domain-specific knowledge graph construction, the graph is mainly defined as a set of triples <entity1, relationship, entity2>, where entity1 and entity2 are descriptions of specific objects in the objective world, called entities, and relationship denotes the association between the two entities, called an entity relationship. The main construction process is as follows: first, a large amount of structured, unstructured and semi-structured data is collected; then, entities and entity relations are extracted from the data according to certain algorithms or rules; finally, the knowledge graph is represented and stored in a certain format.
The entities and entity relationships extracted during construction of existing knowledge graphs often have the same meaning but different surface forms, such as the entity relationships "wife" and "spouse", or the entities "NLP" and "natural language processing". Because these entities and entity relationships are not normalized, first, more space is needed to store the entities, which means greater query pressure for subsequent applications of the graph; second, relations between certain entities are lost. Both points affect subsequent upper-layer applications, such as knowledge question answering and recommendation systems, and hinder the expansion and fusion of the knowledge graph.
Disclosure of Invention
The application provides a method, a device, a computer device and a storage medium for constructing a knowledge graph, which aim to solve the problems.
In a first aspect, the present application provides a method of constructing a knowledge-graph, the method comprising:
preparing a data resource;
extracting knowledge from the data resources;
constructing a word embedding model from the extracted knowledge;
constructing a TF mapping value table through the knowledge;
normalizing the knowledge with the model and the mapping table;
and storing the normalized knowledge.
In a second aspect, the present application also provides an apparatus for constructing an atlas, the apparatus comprising:
a data resource unit for preparing a data resource;
a knowledge extraction unit for extracting knowledge from the data resources;
the model building unit is used for building a word embedding model from the extracted knowledge;
the TF mapping unit is used for adding the global relation into each training sliding window and constructing a TF mapping value table from the knowledge;
the normalization unit is used for normalizing the knowledge with the model and the mapping table;
and the storage unit is used for storing the normalized knowledge.
In a third aspect, the present application further provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and to implement the method of constructing a knowledge-graph as described above when executing the computer program.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the method of constructing a knowledge-graph as described above.
The application discloses a method, an apparatus, a device and a storage medium for constructing a knowledge graph: preparing data resources; extracting knowledge from the data resources; constructing a word embedding model from the extracted knowledge; meanwhile, constructing a TF mapping value table from the knowledge; normalizing the knowledge with the model and the mapping table; and storing the normalized knowledge. The number of graph nodes is thereby greatly reduced, queries against the knowledge graph become more efficient, the query pressure when the knowledge graph is applied can be greatly reduced, the performance of the knowledge graph is improved, the response time of the system is shortened, and the recommended information is enriched.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a method of constructing a knowledge-graph provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for constructing a word embedding model according to an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a method for entity normalization provided by an embodiment of the present application;
FIG. 4 is a schematic block diagram of an apparatus for constructing a knowledge graph provided by an embodiment of the present application;
fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it should be understood that the described embodiments are some, but not all embodiments of the present application. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without making any inventive effort fall within the scope of protection of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The embodiment of the application provides a method and a device for constructing a knowledge graph, computer equipment and a storage medium. The method for constructing the knowledge graph can be applied to a terminal or a server, so as to reduce the response time of a system and enrich recommended information.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for constructing a knowledge graph according to an embodiment of the present application. The method for constructing the knowledge graph comprises the steps S101 to S106.
S101, preparing data resources.
The data resources can be acquired from domain-specific open websites using a crawler, and the acquired resources include structured data, unstructured data and semi-structured data. The "crawler" used in data resource acquisition is generally a program or script that automatically acquires publicly available information from the network according to certain rules.
And S102, extracting knowledge from the data resources.
The knowledge extraction can be realized by combining a Chinese word segmenter with manually defined rules. The knowledge produced by extraction is generally a large set of triples, where each triple contains entities (entity) and an entity relationship (relationship); an entity may specifically be a person name, a place name, an organization name, and the like, and the relationship between any two entities, such as a couple relationship, a teammate relationship or a colleague relationship, is called an entity relationship. The text is segmented into words by a Chinese word segmenter, such as the NLPIR segmenter. For example, from the sentence "familiar with development languages such as JAVA and PYTHON", the entities "JAVA", "PYTHON" and "development language" and the relation "isA" can be extracted using rules and the word segmenter, yielding the triples <JAVA, isA, development language> and <PYTHON, isA, development language>.
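The rule-plus-segmenter extraction above can be sketched as follows. This is a minimal illustration: the toy matching rule, the dictionaries and the pre-segmented tokens are assumptions for the example, and the NLPIR segmenter itself is not invoked.

```python
def extract_triples(tokens, skill_words, concept_words):
    # Toy rule (an assumption, not the patent's actual rule set):
    # every known skill token in a sentence that also mentions a
    # concept yields a <skill, isA, concept> triple.
    skills = [t for t in tokens if t in skill_words]
    concepts = [t for t in tokens if t in concept_words]
    return [(s, "isA", c) for s in skills for c in concepts]

# "familiar with development languages such as JAVA and PYTHON",
# assumed already segmented by a Chinese word segmenter such as NLPIR:
tokens = ["familiar with", "JAVA", "PYTHON", "development language"]
triples = extract_triples(tokens, {"JAVA", "PYTHON"}, {"development language"})
```

With these inputs the sketch yields exactly the two triples named in the text.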
S103, building a word embedding model through the extracted knowledge.
A word-embedding training corpus is built from the extracted knowledge, and a word embedding model is constructed on it. In knowledge normalization, a word embedding model generally maps a high-dimensional space, whose dimension equals the vocabulary size, into a low-dimensional continuous space, so that each word is represented by a real-valued vector.
The label of each entity can be any defined English word. For example, for the sentence "familiar with development languages such as JAVA and PYTHON, with front-end experience", suppose the defined labels are "SKILL" and "CONCEPT". After applying the rules and Chinese word segmentation, not only the triples <JAVA, isA, development language> and <PYTHON, isA, development language> are obtained, but also another entity, "front end". After the labels are attached, these become "JAVA|SKILL", "PYTHON|SKILL", "development language|CONCEPT" and "front end|SKILL"; each subsequent corpus line for word embedding may then be "JAVA|SKILL PYTHON|SKILL development language|CONCEPT front end|SKILL", with the entities separated by spaces.
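Building one such labelled corpus line can be sketched as below; the helper name and the tag dictionary are illustrative assumptions, not names from the patent.

```python
def label_line(entities, tags):
    # tags: hypothetical mapping from entity to its label suffix
    return " ".join(f"{e}|{tags[e]}" for e in entities)

tags = {"JAVA": "SKILL", "PYTHON": "SKILL",
        "development language": "CONCEPT", "front end": "SKILL"}
line = label_line(["JAVA", "PYTHON", "development language", "front end"], tags)
```

The resulting string is one corpus line, with the labelled entities separated by spaces.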
And S104, constructing a TF mapping value table through the knowledge.
In particular, TF (term frequency) is a weighting technique commonly used in information retrieval and data mining. TF word-frequency statistics are computed over all knowledge; the TF value of a piece of knowledge is given by the formula tf_i = n_i / Σ_k n_k, where tf_i is the TF value of that knowledge, n_i is the number of times the knowledge appears in the corpus, and Σ_k n_k is the total count of all knowledge in the corpus. After all knowledge has been counted, the TF values are saved. It should be noted that S103 and S104 in this embodiment may be performed in either order.
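The TF statistic above can be sketched as follows, assuming each corpus line is already a list of labelled knowledge tokens (the function name is illustrative):

```python
from collections import Counter

def build_tf_table(corpus):
    # corpus: list of corpus lines, each a list of knowledge tokens;
    # tf_i = n_i / sum_k(n_k) over all knowledge tokens in the corpus
    counts = Counter(tok for line in corpus for tok in line)
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

tf = build_tf_table([["JAVA|SKILL", "PYTHON|SKILL"],
                     ["JAVA|SKILL", "NLP|SKILL",
                      "natural language processing|SKILL"]])
# "JAVA|SKILL" appears 2 times out of 5 tokens, so its TF value is 0.4
```

By construction, the TF values over the whole table sum to 1.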
S105, normalizing the knowledge by the model and the mapping table.
And normalizing all knowledge by using the obtained word embedding model and the TF mapping table to obtain normalized knowledge.
And S106, storing the normalized knowledge.
And storing the obtained normalized knowledge in a server for a user to use.
After normalization the number of knowledge-graph nodes is greatly reduced, queries against the graph become more efficient, the query pressure when the graph is applied can be greatly reduced, the performance of the knowledge graph is improved, and the graph relations become richer. For example, before normalization there are two triples <NLP, relation1, entity1> and <natural language processing, relation2, entity2>, where "NLP" is in fact "natural language processing"; because no normalization has been performed, "NLP" has no relation2 link to entity2 and "natural language processing" has no relation1 link to entity1. Normalization solves this problem, so the relations become richer. Finally, in upper-layer applications such as a recommendation system, the reduced node count and the denser relations shorten the system's response time and enrich the recommended information.
Referring to fig. 2, in an alternative embodiment, building a word embedding model includes steps S1031 to S1034.
S1031, adding specific labels to the knowledge, and segmenting the knowledge line by line with a Chinese word segmenter to form the model training corpus.
S1032, randomly selecting one entity relation from the one or more entity relations of the same corpus line to serve as the global relation.
S1033, removing the entity relations from that corpus line, and after removal adding the global relation into each training sliding window.
S1034, iterating over all the corpora and training to obtain the word embedding model.
Specifically, skip-gram with negative sampling may be chosen. When training the word embedding model, first add the corresponding label suffix to the entities, entity relations and segmented words in each corpus line. For example, the entity set {JAVA, PYTHON, development language, front end} becomes {JAVA|SKILL, PYTHON|SKILL, development language|CONCEPT, front end|SKILL} after adding label suffixes. Next, select the global relation; for the set above, the global relation is the entity relationship "isA". Then, during the sliding-window selection of the word-embedding training process, add the line's global relation to every window of that corpus line. For example, if the training window size is 2 and the window is X = {JAVA|SKILL, development language|CONCEPT}, then after adding the global relation X = {JAVA|SKILL, development language|CONCEPT, isA}. Finally, loop over the whole corpus and obtain the model through the word-embedding training procedure; when using the Python open-source library gensim, the sliding-window logic should be modified accordingly beforehand.
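The modified windowing step can be sketched as below. This shows only the window construction with the global relation appended; the negative-sampling training itself (e.g. via a patched gensim Word2Vec) is omitted, and the function name is an illustrative assumption.

```python
def windows_with_global_relation(tokens, global_rel, window=2):
    # For each centre token, take the ordinary skip-gram context
    # (up to `window` tokens on each side), then append the line's
    # global relation so it co-occurs with every centre word.
    pairs = []
    for i, centre in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((centre, ctx + [global_rel]))
    return pairs

pairs = windows_with_global_relation(
    ["JAVA|SKILL", "development language|CONCEPT"], "isA", window=2)
```

Every (centre, context) pair now includes the global relation "isA", matching the example window X above.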
In an alternative embodiment, normalizing knowledge with the mapping table of the model includes steps S1051-S1054.
S1051, adding a label to the knowledge.
S1052, querying the top N most similar candidates in the model.
S1053, looking up the corresponding TF values of the candidates in the mapping table and sorting them in descending order of TF value.
S1054, selecting the first of the sorted candidates as the normalized knowledge.
Referring to fig. 3, fig. 3 is a flowchart of the entity normalization method according to an embodiment of the present application. Before the TF mapping table is built, an empty mapping table T = {} is created. When the model and the mapping table normalize knowledge, an entity or entity relationship is first input and its label is attached, e.g. a = "JAVA|SKILL". Then the mapping table is searched for a; if a exists in T, the corresponding mapped value is returned. Otherwise, the top N most similar results are retrieved from the trained word embedding model, say M = {m1: v1, m2: v2, …, mn: vn}, where each m is a word and each v a similarity value. The results in M are then filtered with a reasonable similarity threshold, giving M = {m1: v1, m2: v2, …, mx: vx} with x <= n. Next, the TF values of all elements of M are looked up in the existing TF value table, giving M = {m1: tf1, m2: tf2, …, mx: tfx}; the elements of M are sorted in descending order of TF value, the maximum is recorded as b, and {a: b} is added to the mapping table T. Finally, the normalization result b, with its label removed, is returned.
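The lookup of fig. 3 can be sketched as follows. Here `most_similar` stands in for the trained model's nearest-neighbour query (e.g. gensim's `wv.most_similar`); the function names, the threshold and top N are illustrative assumptions.

```python
def normalize(entity, label, T, most_similar, tf_table,
              top_n=5, threshold=0.8):
    a = f"{entity}|{label}"                        # attach the label
    if a not in T:
        # top-N most similar candidates, filtered by similarity threshold
        cands = [w for w, v in most_similar(a, top_n) if v >= threshold]
        # keep the candidate with the highest TF value (fall back to a)
        b = max(cands, key=lambda w: tf_table.get(w, 0.0), default=a)
        T[a] = b                                   # memoise {a: b} in T
    return T[a].split("|")[0]                      # strip the label

# Toy stand-ins for the trained model and the TF value table:
def fake_most_similar(word, n):
    return [("natural language processing|SKILL", 0.92),
            ("front end|SKILL", 0.41)]

result = normalize("NLP", "SKILL", {}, fake_most_similar,
                   {"natural language processing|SKILL": 0.3})
```

With the toy model, "NLP|SKILL" is mapped to "natural language processing" and the mapping is cached in T for later hits.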
Referring to fig. 4, fig. 4 is a schematic block diagram of an apparatus for constructing a knowledge graph according to an embodiment of the present application, which may be configured in a server for performing the aforementioned method for constructing a knowledge graph.
As shown in fig. 4, the apparatus 200 for constructing a knowledge graph includes: the system comprises a data resource unit 201, a knowledge extraction unit 202, a model construction unit 203, a TF mapping unit 204, a normalization unit 205 and a storage unit 206.
A data resource unit 201, configured to prepare data resources.
A knowledge extraction unit 202, configured to extract knowledge from the data resources.
And the model constructing unit 203, configured to construct a word embedding model from the extracted knowledge.
And the TF mapping unit 204, configured to add the global relation into each training sliding window, and construct a TF mapping value table from the knowledge.
A normalization unit 205, configured to normalize the knowledge with the model and the mapping table.
And a storage unit 206, configured to store the normalized knowledge.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and the units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present disclosure. The computer device may be a server or a terminal.
Referring to fig. 5, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any of the methods of constructing a knowledge graph.
The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.
The internal memory provides an environment for the execution of a computer program on a non-volatile storage medium, which when executed by a processor causes the processor to perform any of the methods for constructing a knowledge graph.
The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the configuration shown in fig. 5 is a block diagram of only the portion of the configuration relevant to the present application and does not constitute a limitation on the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
preparing a data resource; extracting knowledge from the data resources; constructing a word embedding model from the extracted knowledge; adding the global relation into each training sliding window, and meanwhile constructing a TF mapping value table from the knowledge; normalizing the knowledge with the model and the mapping table; and storing the normalized knowledge.
The embodiment of the application also provides a computer readable storage medium, the computer readable storage medium stores a computer program, the computer program comprises program instructions, and the processor executes the program instructions to realize any method for constructing the knowledge graph provided by the embodiment of the application.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. A method of constructing a knowledge graph, comprising:
preparing a data resource;
extracting knowledge from the data resources;
constructing a word embedding model from the extracted knowledge;
constructing a TF mapping value table through the knowledge;
normalizing the knowledge with the model and the mapping table;
and storing the normalized knowledge.
2. The method of constructing a knowledge graph of claim 1, wherein the extracted knowledge includes entities and entity relationships, and the entities and entity relationships are extracted from the structured, semi-structured and unstructured data resources by means of manually defined rules.
3. The method for constructing a knowledge graph according to claim 1, wherein constructing a word embedding model from the extracted knowledge comprises the following steps:
adding labels to the extracted knowledge, and segmenting the knowledge line by line with a Chinese word segmenter to form a model training corpus;
randomly selecting one entity relation from the one or more entity relations of the same corpus line to serve as the global relation;
removing the entity relations from that corpus line, and after removal adding the global relation into each training sliding window;
iterating over all corpora, and training to obtain the word embedding model.
4. The method of constructing a knowledge graph according to claim 1, wherein the constructing a TF mapping value table comprises:
performing TF computation on the knowledge;
the result is saved as a mapping table.
5. The method of constructing a knowledge graph according to claim 1, wherein normalizing the knowledge with the model and the mapping table comprises:
tagging the knowledge;
querying top N most similar candidates in the model;
inquiring corresponding TF values in the mapping table for the candidates, and sorting the candidates according to the descending order of the TF values;
the first of the candidates is selected as the normalization knowledge.
6. An apparatus for constructing a knowledge graph, comprising:
a data resource unit for preparing a data resource;
a knowledge extraction unit for extracting knowledge from the data resources;
the model building unit is used for building a word embedding model from the extracted knowledge;
the TF mapping unit is used for adding the global relation into each training sliding window and constructing a TF mapping value table from the knowledge;
the normalization unit is used for normalizing the knowledge with the model and the mapping table;
and the storage unit is used for storing the normalized knowledge.
7. A computer device, wherein the computer device comprises a memory and a processor;
the memory is used for storing a computer program;
the processor for executing the computer program and implementing the method of constructing a knowledge-graph according to any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to carry out the method of constructing a knowledge graph as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010890276.7A CN112182235A (en) | 2020-08-29 | 2020-08-29 | Method and device for constructing knowledge graph, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010890276.7A CN112182235A (en) | 2020-08-29 | 2020-08-29 | Method and device for constructing knowledge graph, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112182235A true CN112182235A (en) | 2021-01-05 |
Family
ID=73925557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010890276.7A Pending CN112182235A (en) | 2020-08-29 | 2020-08-29 | Method and device for constructing knowledge graph, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112182235A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116975313A (en) * | 2023-09-25 | 2023-10-31 | 国网江苏省电力有限公司电力科学研究院 | Semantic tag generation method and device based on electric power material corpus |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106156083A (en) * | 2015-03-31 | 2016-11-23 | 联想(北京)有限公司 | Domain knowledge processing method and device |
CN106919689A (en) * | 2017-03-03 | 2017-07-04 | 中国科学技术信息研究所 | Dynamic revision method for a professional-domain knowledge graph based on definitional knowledge blocks |
WO2018036239A1 (en) * | 2016-08-24 | 2018-03-01 | 慧科讯业有限公司 | Method, apparatus and system for monitoring internet media events based on an industry knowledge graph database |
CN107894986A (en) * | 2017-09-26 | 2018-04-10 | 北京纳人网络科技有限公司 | Vectorization-based business relationship partitioning method, server and client |
CN109086434A (en) * | 2018-08-13 | 2018-12-25 | 华中师范大学 | Topic-map-based knowledge aggregation method and system |
CN110347894A (en) * | 2019-05-31 | 2019-10-18 | 平安科技(深圳)有限公司 | Crawler-based knowledge graph processing method, device, computer equipment and storage medium |
CN110543574A (en) * | 2019-08-30 | 2019-12-06 | 北京百度网讯科技有限公司 | Knowledge graph construction method, device, equipment and medium |
CN110659365A (en) * | 2019-09-23 | 2020-01-07 | 中国农业大学 | Animal product safety event text classification method based on multi-level structure dictionary |
CN111444317A (en) * | 2020-03-17 | 2020-07-24 | 杭州电子科技大学 | Semantic-sensitive knowledge graph random walk sampling method |
CN111538894A (en) * | 2020-06-19 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Query feedback method and device, computer equipment and storage medium |
WO2022041730A1 (en) * | 2020-08-28 | 2022-03-03 | 康键信息技术(深圳)有限公司 | Medical field intention recognition method, apparatus and device, and storage medium |
- 2020-08-29: Application CN202010890276.7A filed in China (published as CN112182235A); status: Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116975313A (en) * | 2023-09-25 | 2023-10-31 | 国网江苏省电力有限公司电力科学研究院 | Semantic tag generation method and device based on electric power material corpus |
CN116975313B (en) * | 2023-09-25 | 2023-12-05 | 国网江苏省电力有限公司电力科学研究院 | Semantic tag generation method and device based on electric power material corpus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109189942B (en) | Construction method and device of patent data knowledge graph | |
CN110276023B (en) | POI transition event discovery method, device, computing equipment and medium | |
EP3819785A1 (en) | Feature word determining method, apparatus, and server | |
CN112395395B (en) | Text keyword extraction method, device, equipment and storage medium | |
JP2020027649A (en) | Method, apparatus, device and storage medium for generating entity relationship data | |
CN108090216B (en) | Label prediction method, device and storage medium | |
CN110096573B (en) | Text parsing method and device | |
CN107844533A (en) | Intelligent question answering system and analysis method | |
CN110162768B (en) | Method and device for acquiring entity relationship, computer readable medium and electronic equipment | |
CN109388743B (en) | Language model determining method and device | |
CN110008474B (en) | Key phrase determining method, device, equipment and storage medium | |
CN111460170B (en) | Word recognition method, device, terminal equipment and storage medium | |
CN111325030A (en) | Text label construction method and device, computer equipment and storage medium | |
CN114416998A (en) | Text label identification method and device, electronic equipment and storage medium | |
CN115795061A (en) | Knowledge graph construction method and system based on word vectors and dependency syntax | |
CN115935983A (en) | Event extraction method and device, electronic equipment and storage medium | |
CN113761192B (en) | Text processing method, text processing device and text processing equipment | |
CN110705282A (en) | Keyword extraction method and device, storage medium and electronic equipment | |
CN111950261B (en) | Method, device and computer readable storage medium for extracting text keywords | |
CN114328800A (en) | Text processing method and device, electronic equipment and computer readable storage medium | |
CN113434631A (en) | Emotion analysis method and device based on event, computer equipment and storage medium | |
CN112182235A (en) | Method and device for constructing knowledge graph, computer equipment and storage medium | |
CN112599211A (en) | Medical entity relationship extraction method and device | |
CN113704420A (en) | Method and device for identifying role in text, electronic equipment and storage medium | |
CN112560425A (en) | Template generation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||