CN111177591B - Knowledge graph-based Web data optimization method for visual requirements - Google Patents

Knowledge graph-based Web data optimization method for visual requirements Download PDF

Info

Publication number
CN111177591B
CN111177591B (application CN201911254814.7A)
Authority
CN
China
Prior art keywords
corpus
entity
word
data
attribute
Prior art date
Legal status
Active
Application number
CN201911254814.7A
Other languages
Chinese (zh)
Other versions
CN111177591A (en
Inventor
陆佳炜
王小定
高燕煦
朱昊天
徐俊
肖刚
Current Assignee
Fuzhou Zhiqing Intellectual Property Service Co ltd
Shenzhen Shukangyun Information Technology Co ltd
Original Assignee
Shenzhen Shukangyun Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shukangyun Information Technology Co ltd
Priority to CN201911254814.7A
Publication of CN111177591A
Application granted
Publication of CN111177591B
Legal status: Active

Classifications

    • G06F 16/9538 — Presentation of query results
    • G06F 16/35 — Clustering; Classification (unstructured textual data)
    • G06F 16/367 — Ontology
    • G06F 16/958 — Organisation or management of web site content
    • G06F 18/23213 — Non-hierarchical clustering with fixed number of clusters, e.g. k-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A knowledge graph-based Web data optimization method oriented to visualization requirements comprises the following steps: firstly, constructing a target-field corpus; secondly, corpus-oriented entity extraction; thirdly, pre-grouping the corpus twice and constructing a knowledge graph with a k-means clustering algorithm; fourthly, classifying the common visual graphics, summarizing the attributes and structural characteristics of each kind of graphic, and formally expressing the graphic information by creating a visual model tree VT; fifthly, a data visualization optimization matching method based on the network-corpus knowledge graph: M-JSON is defined as the prototype structure of the JSON returned by a REST Web service, M-JSON is matched against the data structures in the visual model tree, and the knowledge graph from the third step is queried to determine whether the matched attribute combinations have actual semantic association, so as to select effective dimension combinations and improve the accuracy of automatically generated graphics.

Description

Knowledge graph-based Web data optimization method for visual requirements
Technical Field
The invention relates to a knowledge graph-based Web data optimization method facing visual requirements.
Background
Service-Oriented Computing (SOC) is a computing paradigm for distributed systems that currently draws great interest from both industry and academia. Driven by the development of the SOC paradigm, Web services have been further popularized and applied. With the proposal of the REST (Representational State Transfer) architectural style in 2000, REST services have gradually become an important component of Web services. Their simplicity, light weight and speed have made REST services popular on the Internet, sustaining considerable growth and driving up the number of services. Diversified data services intersect with fields such as economics, medicine, sports and daily life, generating huge amounts of data at an accelerating pace. However, whatever form the data takes, the primary purpose of acquiring it remains to extract the valid information it contains.
Data visualization helps users analyze and understand data through interactive visual interfaces and data-to-image conversion techniques. Visualization is grounded in data, and data in the network era is multi-source and heterogeneous, which raises problems of data-source integration and data arrangement. A large number of services are provided by data service providers in many fields, each with its own response mode and differently structured response format, making data acquisition and analysis difficult. With the development of multimedia and visualization technology, people are no longer satisfied with plain tabular data, but pursue more intuitive and richer forms of data display and more convenient, efficient data processing tools. Therefore, automatically analyzing and arranging heterogeneous service data with reduced human intervention, so that the data is visualized automatically, has important practical significance.
The knowledge graph, formally proposed by Google in June 2012, is a graph-based data structure. A knowledge graph is a structured semantic knowledge base that presents real-world entities and the relations between them in the form of a graph and describes entities in a formal way. The basic constituent elements of a knowledge graph are entities, "entity-relation-entity" triples, and "attribute-value" pairs of entities. Knowledge graphs are stored as triple expressions of the form "entity-relation-entity" or "entity-attribute-value"; these data constitute a considerable network of entity relations, i.e., a "graph" of knowledge.
At present, although some data visualization modeling methods oriented to REST services exist, their automatic visualization efficiency is low, or the automatically generated graphics contain a large number of redundant patterns, which hinders users' understanding and analysis. The knowledge graph has efficient information retrieval capability, strong semantic-relation construction capability and intuitive presentation capability; combining it with data visualization makes it possible to discover the rules hidden behind the data more effectively.
Disclosure of Invention
The invention provides a knowledge graph-based Web data optimization method oriented to visualization requirements, which analyzes, generalizes and models common visual graphics, and structurally matches Web data against the visual models to obtain attribute combinations of candidate coordinate axes/legends that meet the requirements. A knowledge graph is constructed from a network corpus, and by querying the graph it is determined whether an attribute combination has semantic association, so as to further optimize the visualization of the Web data and improve the probability of generating effective graphics.
In order to realize the invention, the following technical scheme is adopted:
a visual demand-oriented knowledge graph-based Web data optimization method comprises the following steps:
firstly, constructing a target-field corpus: the content of a network corpus serves as the basis for constructing the knowledge graph, with entry information as the original corpus content. The original content is screened before use: comparative analysis of entry web pages shows that, besides the title and body text, they contain HTML tags, entry editing information, page links and other redundant information irrelevant to the entry. The entry content is therefore filtered and cleaned, and the title and effective body text are extracted. The filtering includes: HTML tag/text-style symbol filtering, entry template symbol and non-English character filtering, entry editing-information filtering, picture-information filtering, link-information filtering, page-specific title attribute-name filtering, and numeral filtering on the entry web page content;
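As an illustration, the filtering step above can be sketched as a chain of regular-expression substitutions. This is a minimal sketch; the specific patterns and the function name `clean_entry_text` are illustrative assumptions, not the patent's implementation.

```python
import re

def clean_entry_text(html: str) -> str:
    """Filter an entry web page down to title/body text, per the first step:
    strip HTML tags and style symbols, template/citation markers, links,
    non-English characters and numerals (patterns are assumed examples)."""
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)                 # HTML tags
    text = re.sub(r"\[\d+\]|\{\{[^}]*\}\}", " ", text)   # citation/template markers
    text = re.sub(r"https?://\S+", " ", text)            # link URLs
    text = re.sub(r"[^A-Za-z\s.,;:!?]", " ", text)       # non-English characters, numerals
    return re.sub(r"\s+", " ", text).strip()             # collapse whitespace
```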
secondly, corpus-oriented entity extraction: a knowledge graph is a data information network with a graph structure formed by entities and relations. Its basic structure is the "entity-relation-entity" triple, comprising two entities with a real semantic relation and the relation between them, expressed as G = (head, relation, tail), where G denotes the triple, head the head entity, tail the tail entity, and relation the relation between them. Each entity may also have attributes and attribute values; an attribute of an entity is converted into a tail entity connected to that entity, and a relation is established between the two;
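The triple G = (head, relation, tail) and the conversion of an entity attribute into a tail entity can be sketched as follows; the relation name `has_attribute` is an assumed placeholder, not fixed by the method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    head: str      # head entity
    relation: str  # relation between head and tail
    tail: str      # tail entity

def attribute_to_triple(entity: str, attr_name: str) -> Triple:
    # An entity attribute becomes a tail entity connected to the entity;
    # "has_attribute" is an assumed relation label for illustration.
    return Triple(head=entity, relation="has_attribute", tail=attr_name)
```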
and a third step of: combining Word2vec, performing secondary pre-grouping on a corpus, using a k-means clustering algorithm to construct a knowledge graph, wherein a structure of a triplet G is head, relation, along with the difference of the head and the tail, the relation also has various relations, the relation is actually a relation set in the knowledge graph and is used for representing complex relations among various entities, the aim is to judge whether semantic relations exist between two attributes, namely whether the relation exists between the two entities, and not paying attention to the relation exists, performing secondary grouping on the corpus by calculating Word vectors of the vocabulary of the corpus, and extracting entity relations by using the k-means clustering algorithm;
fourthly, constructing a visual model tree (VT for short): classifying the common visual graphics, summarizing the attributes and structural characteristics of each kind, and formally expressing the graphic information by creating a visual model tree (VT);
fifthly, a data visualization optimization matching method based on the network-corpus knowledge graph: M-JSON is defined as the prototype structure of the JSON returned by a REST Web service; the Web data prototype structure M-JSON is matched against each StructModel in the visual model tree VT by data structure, returning the set of attribute combinations of candidate coordinate axes/legends that satisfy the conditions; on the basis of structure matching, the knowledge graph constructed in the third step is queried to determine whether each matched candidate combination has actual semantic association, the matching is optimized according to the query results, and effective dimension combinations are selected to improve the accuracy of automatically generated graphics.
Further, in the second step, entity extraction is divided into three stages: named entity extraction, attribute entity extraction and noun entity extraction;
2.1, entity extraction: entity extraction, also known as named entity recognition, is the automatic recognition of named entities from a text dataset, which is commonly referred to as person names, place names, institution names, and other entities whose all names are identified. The process can be accomplished by using some mainstream named entity recognition system, the steps of which include: 1. carrying out named entity identification on the corpus content by a tool; 2. labeling the identified named entity with its type attribute; 3. filtering named entities according to type attributes, deleting unsuitable named entities, reserving labels of other named entities, and defining entry names as named entities by default;
2.2, attribute entity extraction: the information boxes of network entries serve as the source of attributes. The information box of each entry in the corpus is intercepted, attribute names are extracted according to the information box structure, and each attribute name becomes a tail entity of the named entity corresponding to the entry name; attribute values are not retained. If an entry has no information box, no tail entity needs to be created for its named entity;
2.3, noun entity extraction, comprising four steps: word splitting (Split), part-of-speech tagging (POS Tagging), stop word filtering (Stop Word Filtering) and stemming (Stemming). The named entity extraction step has already labeled the recognized named entities, so the following operations are applied only to corpus content outside the labeled entities.
Still further, the procedure of 2.3 is as follows:
2.3.1, word splitting: splitting rules are designed with regular expressions, and the corpus content is split into word texts by spaces, symbols and paragraphs;
2.3.2, part-of-speech tagging: to obtain the nouns in the corpus, the word texts must be part-of-speech tagged. Part-of-speech tagging, also called grammatical tagging or part-of-speech disambiguation, is a text data processing technique that labels each word in the corpus with its part of speech according to its meaning and context. Many words have several parts of speech and several meanings at once, and the correct choice depends on the context. The corpus with named entities already marked is used as the object text for tagging; noun objects are selected according to the tagging results, and non-noun objects, excluding entry names, are removed from the corpus. The corpus thus retains the named entities, noun objects and original punctuation of each text, with all content still in its original order;
2.3.3, stop word filtering: a "stop word" (Stop Word) is a word or character automatically filtered out when processing natural language text, in order to save storage space and improve search efficiency in information retrieval. For a given purpose, any kind of word can be chosen as a stop word. Stop words fall into two categories: one is the function words (Function Words) of human language, which are extremely common, occur with very high frequency and carry no exact practical meaning; the other is a subset of content words (Content Words), words that have practical, specific meaning but no clear reference in context. In natural language processing a stop word list (Stop Word List) is already available; it is used as a reference dictionary, and stop words are deleted from the corpus by word comparison, further condensing the corpus content and ensuring that no stop words remain;
2.3.4, stemming: stemming is the process of removing morphological affixes to obtain the corresponding root word, a processing step specific to Western languages such as English. The same English word appears in singular and plural forms, tense variations, and the verb forms corresponding to different personal pronouns. Although slightly different in form, these words correspond to the same root and should be treated as the same word when computing correlation, hence the need for stemming. The Porter stemming algorithm (Porter Stemming Algorithm) is a mainstream stemming algorithm; its core idea is to classify, process and restore words according to the type of morphological affix. Apart from a few special variants, most word variants are regular and are divided into 6 categories according to their rules.
Still further, in the 2.3.4, the stem extraction step is as follows:
2.3.4.1, affix removal and word restoration are performed according to the deformation category, obtaining the stem of each noun object in the corpus and reducing the cases where one word appears in different forms; the 6 word deformations are:
2.3.4.1.1, plurals and words ending in -ed or -ing;
2.3.4.1.2, words containing a vowel and ending in -y;
2.3.4.1.3, double-suffix words;
2.3.4.1.4, words with suffixes such as -ic, -ful, -less, -ative;
2.3.4.1.5, words with suffixes such as -ant, -ence, in the context <c>vcvc<v> (c denotes a consonant, v a vowel);
2.3.4.1.6, words ending in -e whose <c>vc<v> form contains more than one vc pair;
2.3.4.2, the noun objects restored to stems are created as noun entities, and the noun objects in the corpus are updated and represented in stem form.
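The four-stage noun entity pipeline of 2.3 can be sketched as follows. This is a toy sketch: the stop list is a tiny stand-in, `is_noun` stands in for a real POS tagger, and the stemmer covers only the first deformation category rather than the full Porter algorithm.

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are", "and"}  # tiny stand-in list

def split_words(text):
    # 2.3.1: split into word texts via a regular expression
    return re.findall(r"[A-Za-z]+", text)

def stem(word):
    # 2.3.4: toy affix removal for the plural/-ed/-ing category only
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def noun_entities(text, is_noun=lambda w: True):
    # `is_noun` stands in for a POS tagger (2.3.2); stop words removed (2.3.3)
    return [stem(w.lower()) for w in split_words(text)
            if w.lower() not in STOP_WORDS and is_noun(w)]
```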
In the third step, the construction flow of the knowledge graph is as follows:
3.1, training word vectors with Word2vec: Word2vec is a word vector tool that represents words as feature vectors. Word2vec converts words into numerical form, representing each word as an N-dimensional vector.
3.2, pre-grouping the corpus twice: because k-means clustering is easily affected by the distribution of the data set, k-means cannot be used directly if the core concepts, i.e., the main classification objects of the target field, are to serve as cluster centers; the corpus is therefore pre-grouped twice;
3.3, automatically searching a clustering center for the small corpus set through a k-means clustering algorithm, clustering, and constructing triples at the same time, wherein the method comprises the following steps:
3.3.1, determining the size of k according to the size of the small corpus set, wherein the larger the set is, the larger the k value is.
3.3.2, a triple is constructed between the entity corresponding to each centroid obtained by k-means clustering and the entity corresponding to the centroid of the previous layer's grouping.
The k-means algorithm in step 3.3.2 is an unsupervised clustering algorithm; each word is represented by the word vector trained with Word2vec from the corpus. Each small corpus set is taken as a data set and clustered with the k-means algorithm. The steps of k-means clustering are as follows:
3.3.2.1, selecting k objects in the data set as initial centers, wherein each object represents a clustering center;
3.3.2.2, objects in the word vector sample are classified into classes corresponding to the cluster centers closest to the objects according to Euclidean distance between the objects and the cluster centers;
3.3.2.3, update cluster center: taking the average value corresponding to all objects in each category as a clustering center of the category, and calculating the value of an objective function;
3.3.2.4, judging whether the values of the clustering center and the objective function are changed, if not, outputting a result, and if so, returning to 3.3.2.2;
3.3.3, taking each new group as a data set, the k-means algorithm is called again, and steps 3.3.1-3.3.3 are repeated until every group contains fewer elements than a certain threshold Z;
3.3.4, constructing a triplet between the entity corresponding to the data point in each group and the entity corresponding to the current centroid;
All entities in the corpus are related to other entities, and the triples they form combine with one another to constitute the knowledge graph. Because the cluster centers and clusters are found automatically, entity relations with weak correlation may be produced; therefore, after the knowledge graph is built, manual checking and screening are needed to remove low-correlation entity associations and improve the quality of the knowledge graph.
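The clustering loop of steps 3.3.2.1-3.3.2.4 can be sketched with NumPy. Word vectors are assumed to be the rows of `X`; the random initialization and the convergence test are minimal stand-ins, not the patent's exact procedure.

```python
import numpy as np

def kmeans(X, k, init_centers=None, iters=100, seed=0):
    """Assign each point to the nearest center by Euclidean distance (3.3.2.2),
    recompute centers as cluster means (3.3.2.3), and stop when the centers
    no longer change (3.3.2.4)."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init_centers, dtype=float) if init_centers is not None
               else X[rng.choice(len(X), k, replace=False)].astype(float))
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```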
The step of 3.2 is as follows:
3.2.1, grouping the language database once, wherein the steps are as follows:
3.2.1.1, extracting a first layer of sub-classification labels of the target field label obtained previously, wherein the target field label forms a core entity, and generating a first layer of sub-classification label set Tag, wherein n sub-classification labels are included in total, each label has a corresponding entity and word vector, and the entities are connected with the core entity to form n triples;
3.2.1.2, with the first-layer sub-classification label objects as centroids, the Euclidean distance from each data point in the corpus data set to each centroid is calculated, and each data point is assigned to the class of its nearest centroid, yielding n clusters, i.e., n grouped data sets; with the first-layer sub-classification labels as centroids, the corpus is thus divided into n corpus sets;
The Euclidean distance (Euclidean Distance) in step 3.2.1.2 is the basis for deciding which class a data point belongs to. Given samples x_i = (x_i1, x_i2, …, x_in) and x_j = (x_j1, x_j2, …, x_jn), where i, j = 1, 2, …, m, m is the number of samples and n is the number of features, the Euclidean distance is calculated as:

d(x_i, x_j) = √( Σ_{k=1}^{n} (x_ik − x_jk)² )
3.2.2, combining TF-IDF algorithm, grouping the corpus secondarily, the steps are as follows:
3.2.2.1, searching out the keywords in each corpus set by calculating TF-IDF.
The TF-IDF algorithm in step 3.2.2 is a numerical statistic for evaluating how important a word is to a given document. Term frequency TF (Term Frequency) is the frequency with which a given word appears in a given document:

TF_{x,y} = n_{x,y} / Σ_k n_{k,y}

where n_{x,y} is the number of times term x appears in document y and Σ_k n_{k,y} is the total number of terms in document y. Inverse document frequency IDF (Inverse Document Frequency) evaluates the amount of information a word or term provides, i.e., whether the term is common across all documents:

IDF_x = log( N / N_x )

where N is the total number of documents and N_x is the number of documents in which term x appears; here each entry text is treated as a document. Finally the values of TF and IDF are combined, giving TF-IDF:

TF-IDF_{x,y} = TF_{x,y} × IDF_x
3.2.2.2, the keywords of each corpus set are screened manually: keywords with low correlation to the core entity of the current set are removed and the most relevant ones are retained; the number retained depends on the overall quality of all the extracted keywords;
3.2.2.3, triples are constructed between the entities corresponding to the selected keywords of each corpus set and the core entity of that set; the keywords are then taken as centroids within each set, Euclidean distances from the data points to each centroid are calculated again, and the data points are classified; at this point the original corpus has been divided into many small corpus sets.
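The keyword search of step 3.2.2.1 can be sketched as a direct TF-IDF computation over tokenized entry texts, using the plain TF = n_{x,y}/Σ_k n_{k,y} and IDF = log(N/N_x) forms given above (many libraries add smoothing; none is assumed here).

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists, one per document.
    Returns {(term, doc_index): tf-idf score}."""
    N = len(docs)
    df = Counter()                      # N_x: number of documents containing term x
    for doc in docs:
        df.update(set(doc))
    scores = {}
    for y, doc in enumerate(docs):
        counts = Counter(doc)
        total = sum(counts.values())    # sum_k n_{k,y}: total terms in document y
        for x, n_xy in counts.items():
            scores[(x, y)] = (n_xy / total) * math.log(N / df[x])
    return scores
```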
The fourth step comprises the following steps:
4.1, VT is defined to comprise basic attributes (BASICATTRIBUTE) and a visual structure (DVSCHEMA); the formal definition is as (1), where BASICATTRIBUTE stores general information such as the graphic title, subtitle and other text styles;
(1)、VT::=<BASICATTRIBUTE><DVSCHEMA>
4.2, BASICATTRIBUTE comprises three attributes: title (title), storing the title of the finally generated visual graphic; subtitle (subtitle), storing its subtitle; and attributes (attributes), storing the position, color scheme and font-size parameters of the finally generated visual graphic;
(2)、BASICATTRIBUTE::=<title><subtitle><attributes>
4.3, DVSCHEMA categorizes the common visual graphics into four basic categories according to the data types, graphic data structures and graphic dimensions they require: general graphics (General), topology (Topology), map (Map) and text graphics (Text); the formal definition is as (3);
(3)、DVSCHEMA::=<General><Topology><Map><Text>
4.4, the four basic categories in step 4.3 each comprise two attributes: graphic type (VType) and graphic structure (StructModel). VType stores the graphic types belonging to the category, and StructModel stores the basic visual structure of those graphics; the formal definition is as (4), where A::B denotes that A contains attribute B;
(4)、DVSCHEMA::=<General><Topology><Map><Text>::<VType><StructModel>
In 4.4, the graphic types under the VType attribute of the four basic categories are as follows:
4.4.1, general includes bar graph (barChart), line graph (LineChart), pie graph (PieChart), radar graph (RadarChart), scatter graph (ScaterChart);
4.4.2, the Topology includes network map (netchart), tree map (TreeMap), area tree map (TreeMapChart);
4.4.3, maps include regional Map (AreaMapChart), national Map (CountryMapChart), world Map (WorldMapChart);
4.4.4, text includes word cloud (WorldCludChart);
4.5, each of the four basic categories in step 4.4 has its own Mapping relation (Mapping), describing the data structure, data dimensions, graphic structure relations and data mapping positions of its graphics; from the Mapping information and the graphic's data structure, the basic visual structure StructModel of each kind of graphic can be abstracted.
In 4.5, the Mapping relations and basic visual structures StructModel of the various graphics are defined as follows:
4.5.1, graphics in the General type are commonly used to represent two- or three-dimensional data; the information can be represented by a binary group (XAxis, YAxis) or a triple (XAxis, YAxis, ZAxis). The Mapping structure of such graphics is as (5), where LegendName is the legend name and each group's information is stored as an ARRAY type. From the Mapping structure the basic StructModel can be abstracted as (6): the child node of StructModel is a temporary root node Root, and Root contains two child nodes: the key-value pair K_V and the legend node LegendNode;
(5)、Mapping::=<XAxis,YAxis,[ZAxis]><LegendName>
(6)、StructModel::=<Root::<K_V><LegendNode>>
4.5.2, graphics in the Topology type are typically used to represent topological relation data. The tree map and area tree map can represent attribute structures with nested key-value pairs {key: value, children: {key: value}}, with Mapping structure as (7); the network graph represents the graph structure with a node set (Nodes) and an edge set (Links), with Mapping structure as (8), where source is the start node of an edge link and target is the node it points to. From the Mapping structures the basic StructModel can be abstracted as (9). This StructModel has two substructures, with temporary root nodes Root1 and Root2. Root1 contains two child nodes: the key-value pair K_V and the child node children, whose substructure is a key-value pair K_V; Root2 contains two child nodes: the node set Nodes and the edge set Links, where a node's children are a key and a value (the value may be null) and an edge's children are a start point source and a target;
(7)、Mapping::=<K_V><children::<K_V>>
(8)、Mapping::=<Nodes::<key,[value]><Links::<source><target>>
(9)、StructModel::=<Root1::<K_V><children::<K_V>>><Root2::<Nodes::<key,[value]>,<Links::<source><target>>>
4.5.3, graphics in the Map type are typically used to represent map information, with key-value-pair arrays [{PlaceName: value}] or triple sets [{lng, lat, value}]. The Mapping structure of such graphics is as (10), where PlaceName is a place name, lng is longitude and lat is latitude. From the Mapping structure the basic StructModel can be abstracted as (11): this StructModel has two substructures, with temporary root nodes Root1 and Root2; Root1 contains the child-node key-value pair K_V, and Root2 contains three child nodes: longitude lng, latitude lat and value;
(10)、Mapping::=<Data1::<PlaceName><value>><Data2::<lng><lat><value>>
(11)、StructModel::=<Root1::<K_V>>,<Root2::<lng>,<lat>,<value>>
4.5.4, graphics in the Text type commonly use a binary group (Keyword, frequency) to represent keyword frequency. The Mapping structure of such graphics is shown in (12), where Keyword is a word extracted from a text and frequency represents how often the word occurs in that text. From this Mapping structure, the basic StructModel can be abstracted as (13): the StructModel has a single temporary root node Root, which contains the key-value pair K_V;
(12)、Mapping::=<Keyword><frequency>
(13)、StructModel::=<Root::<K_V>>.
the fifth step comprises the following steps:
5.1, match the Web data prototype structure M-JSON against the StructModel of the visual model tree VT according to the data structure, obtaining m candidate coordinate-axis/legend attribute combinations in M-JSON that satisfy the conditions; each combination result is expressed as a binary group consisting of a key-value pair L and an attribute name A, where L and A correspond to LegendNode and K_V in step 4.5.1, respectively;
And 5.2, matching and optimizing m attribute combinations meeting the conditions by combining the constructed network corpus knowledge graph, wherein the process is as follows:
5.2.1, each matching result in step 5.1 is represented in the form of a binary group P = (L::name, A::name); each matching result P_i = (L_i::name, A_i::name) is converted into the triple form G_i = (L_i::name, R, A_i::name) and put into the set S = {G_1, G_2, ..., G_m};
5.2.2, for each G_i in the set S in turn, map its three parameters onto the triple structure of the knowledge graph as F(L_i::name→head, R→relation, A_i::name→tail), obtaining a triple (head, relation, tail), and query whether the current triple (head, relation, tail) exists in the corpus knowledge graph; the outcome result is True or False, expressed as 1 and 0 respectively. First the head entity node head and the tail entity node tail are matched in the corpus knowledge graph, and then the relation between them is matched; result is set to 1 if and only if the head entity head, the tail entity tail and the relation are all matched successfully;
5.2.3, after the queries for the objects in set S are completed, the set Q = {(G_i, result_i)} is returned; Q records whether each qualifying binary group has a semantic association and serves as the judgment of the attribute-combination matching result for the candidate coordinate axes/legends, so a match is judged successful only if the structure matches and result_i is 1. This improves the accuracy of data-attribute matching and reduces the generation rate of graphics without practical meaning.
The beneficial effects of the invention are mainly as follows: when Web data is visualized to generate graphics, the method can use Web corpus data to construct a Web corpus knowledge graph; analyze, generalize and model common visualization graphics; and optimize the matching process between the Web data prototype structure and the common visualization graphic models, reducing the generation of redundant graphics and improving the generation rate of effective graphics. At the same time, manual participation in graphic screening is reduced in the automatic data visualization process, simplifying the Web data visualization flow.
Drawings
FIG. 1 shows a knowledge graph construction flow chart based on the k-means algorithm.
Fig. 2 shows a block diagram of a visual model tree VT.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a knowledge graph-based Web data optimization method facing to visual requirements includes the following steps:
firstly, constructing a target-field corpus: the corpus content of a network encyclopedia (such as Wikipedia) is taken as the basis for constructing the knowledge graph, which improves the text quality and the comprehensiveness of the content of the domain corpus. The entry information of the network encyclopedia is used as the original corpus content, and this original content is screened before being used to construct the knowledge graph: analysis of an entry's web page shows that, besides the title and body text, it contains HTML tags, entry editing information, web-page link information and other redundant information irrelevant to the entry, so the target entries are filtered and cleaned and the title and the effective text content are extracted. The filtered content includes: HTML tag/text style symbol filtering (e.g., deleting HTML tags such as <h1>text</h1>, <p>text</p>, <div>text</div> while retaining the text; deleting style symbols such as span{font-color:…}), entry editing information filtering (e.g., deleting [edit] tags), picture information filtering (e.g., deleting <img src='…'/> picture tags), link information filtering (e.g., deleting <a href="…" title="…">text</a> hyperlink tags while retaining the text information), page-specific title/attribute-name filtering (e.g., deleting proprietary titles and attribute names such as "Further reading"), and numerical filtering (e.g., deleting numerical values such as 20, 30);
For example, using the Wikipedia network corpus, the web-page content of the Wikipedia "Athletic sports" category is obtained through a crawler, and after filtering and screening, the entry corpus content covering Athletic sports and its sub-classifications is obtained;
secondly, entity extraction oriented to the corpus: the knowledge graph is a data information network with a graph structure formed by entities and relations. Its basic building block is the entity-relation-entity triple, which comprises two entities with an actual semantic relation and the relation between them, expressed in the form G = (head, relation, tail), where head represents the head entity, tail represents the tail entity, and relation represents the relation between them. Each entity may also have attributes and attribute values; an attribute of an entity is likewise converted into a tail entity connected to that entity, with a relation established between the two. Entity extraction is divided into three stages: named entity extraction, attribute entity extraction and noun entity extraction;
2.1, named entity extraction: entity extraction, also known as named entity recognition, is the automatic recognition of named entities from a text data set; named entities commonly refer to person names, place names, institution names and other entities identified by names. The process can be completed with a mainstream named entity recognition system; for example, the Stanford NER can mark the entities in a text by type and can recognize seven types of attributes: Time, Location, Person, Date, Organization, Money and Percentage. Using such a named entity recognition system as a tool, named entities are recognized in the corpus content and each recognized named entity is labeled with its type attribute. The main process is: 1. perform named entity recognition on the corpus content with the tool; 2. label each recognized named entity with its type attribute; 3. filter the named entities according to the type attribute, deleting unsuitable named entities and keeping the labels of the others; entry names are defined as named entities by default.
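As a hedged illustration of the filtering in step 2.1 (not the patent's actual implementation), the sketch below assumes an upstream NER tool has already produced (entity, type) pairs; the entity names, types and kept-type set are all invented for the example:

```python
# Sketch of step 2.1: filter recognized named entities by their type attribute.
# An NER tool (e.g. a seven-class tagger) is assumed to have produced
# (entity, type) pairs upstream; everything concrete here is illustrative.

KEPT_TYPES = {"PERSON", "LOCATION", "ORGANIZATION", "DATE"}

def filter_named_entities(tagged, kept_types=KEPT_TYPES):
    """Keep only entities whose NER type attribute is wanted for the graph."""
    return [(ent, typ) for ent, typ in tagged if typ in kept_types]

tagged = [
    ("Michael Jordan", "PERSON"),
    ("Chicago", "LOCATION"),
    ("20", "PERCENT"),        # numeric/percent entities are dropped
    ("NBA", "ORGANIZATION"),
]
entities = filter_named_entities(tagged)
```

The kept entities retain their type labels, which the later steps rely on when skipping tagged spans during noun extraction.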
2.2, attribute entity extraction: the information box of an entry in the network encyclopedia is taken as the attribute source. The information box of each entry in the corpus is intercepted, and according to the information box structure the attribute names are extracted as tail entities of the named entity corresponding to the entry name; attribute values are not retained. If an entry has no information box, no tail entity needs to be created for its named entity. Taking the information box (Info Box) of the Wikipedia entry "National Basketball Association (NBA)" as an example, it takes the form of a table in which row 1 column 1 is "Sport", row 1 column 2 is "Basketball", row 2 column 1 is "Founded", row 2 column 2 is "June 6, 1946; 73 years ago", and so on; the first-column contents "Sport" and "Founded" are each combined with the entry "National Basketball Association (NBA)" to construct triples;
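A minimal sketch of step 2.2, assuming the info-box has already been parsed into (attribute, value) rows; the relation name "has_attribute" is an invented placeholder, since the patent only states that a relation is established between the entity and the attribute-derived tail entity:

```python
# Sketch of step 2.2: turn an entry's info-box rows into triples whose head
# is the entry's named entity and whose tail is the first-column attribute
# name; attribute values are discarded, per the described method.

def infobox_to_triples(entry_name, infobox_rows):
    """infobox_rows: list of (attribute_name, attribute_value) pairs."""
    return [(entry_name, "has_attribute", attr) for attr, _value in infobox_rows]

rows = [("Sport", "Basketball"), ("Founded", "June 6, 1946; 73 years ago")]
triples = infobox_to_triples("National Basketball Association (NBA)", rows)
```

An entry without an info-box simply contributes an empty row list, so no tail entities are created for it.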
2.3, noun entity extraction, comprising four steps: word splitting (Split), part-of-speech tagging (POS Tagging), stop word filtering (Stop Word Filtering) and stemming (Stemming). The named entity extraction step has already tagged the recognized named entities, so the following operations only process the corpus content outside the tagged entities;
2.3.1, word splitting: design a splitting rule pattern with regular expressions and split the corpus content into word texts according to spaces, symbols and paragraphs;
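The regular-expression splitting of step 2.3.1 might look like the following sketch; the pattern itself is illustrative, not the patent's actual rule:

```python
import re

# Sketch of step 2.3.1: split corpus text into word tokens on spaces,
# punctuation and paragraph breaks using a regular-expression rule.
TOKEN_RE = re.compile(r"[A-Za-z]+(?:'[A-Za-z]+)?")  # words, optional apostrophe part

def split_words(text):
    return TOKEN_RE.findall(text)

tokens = split_words("Athletics, also known as track and field,\nis a sport.")
```

Commas, the newline and the final period are consumed by the pattern, leaving only word texts for the later tagging steps.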
2.3.2, part-of-speech tagging: to obtain the nouns in the corpus, the text vocabulary must first be tagged by part of speech. Part-of-speech tagging, also called grammatical tagging or part-of-speech disambiguation, is a text-data-processing technique that marks the words in a corpus by part of speech according to their meanings and context. Many words carry several parts of speech at the same time and have several meanings, and the choice of part of speech depends on the context. The corpus already tagged with named entities is used as the object text for part-of-speech tagging; noun objects are found according to the tagging result, and non-noun objects are removed from the corpus, except for the names of named entities. At this point the named entities, the noun objects and the original punctuation of each sentence remain in the corpus, and all content still keeps its original text order;
2.3.3, stop word filtering: the term Stop Word refers to words that are automatically filtered out when processing natural-language text, in order to save storage space and improve search efficiency in information retrieval. For a given purpose, any type of word may be selected as a stop word. In the present invention, stop words mainly include two categories: one is the function words contained in human language, such as articles, conjunctions, adverbs or prepositions, which are very common and occur with extremely high frequency but have no exact practical meaning, such as a, an, the, which, and so on; the other is content words, referring here to words that have substantial meaning but no definite reference or pointing, such as want, welcome, enough, consider, indeed, etc. In natural language processing, ready-made stop word lists are already available; using such a list as a reference dictionary, stop words are deleted from the corpus through word comparison, further simplifying the corpus content and ensuring that no stop words remain in it;
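The dictionary-lookup deletion described in step 2.3.3 can be sketched as below; the stop-word set is a tiny illustrative subset mixing the function words and content words named in the text, not a real reference list:

```python
# Sketch of step 2.3.3: delete stop words by lookup against a reference
# stop-word list. The list here is a small invented subset for illustration.
STOP_WORDS = {"a", "an", "the", "which", "and", "of", "is",
              "want", "welcome", "enough", "consider", "indeed"}

def remove_stop_words(tokens):
    return [t for t in tokens if t.lower() not in STOP_WORDS]

filtered = remove_stop_words(["the", "marathon", "is", "a", "race"])
```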
2.3.4, stemming: stemming is the process of removing morphological affixes to obtain the corresponding root word, and is a processing step specific to Western languages such as English. English words have singular/plural deformation (such as apple and apples), deformation into -ing and -ed states (such as doing and did), deformation agreeing with different subjects (such as like and likes), and so on. Although different in form, such words correspond to the same root and are treated as the same word when computing relevance, so stemming is needed. The Porter stemming algorithm (Porter Stemming Algorithm) is a mainstream stemming algorithm; its core idea is to classify, process and restore words according to the type of morphological affix. Apart from some special deformations, most word deformation is regular, and the deformations are divided into 6 categories according to these rules. The stemming steps are as follows:
2.3.4.1, remove affixes and restore words according to the word-deformation category, obtaining stem information for the noun objects in the corpus and reducing cases where the same word appears in different forms. The 6 kinds of word deformation are as follows:
2.3.4.1.1, plural words and words ending in -ed or -ing;
2.3.4.1.2, words which contain a vowel and end with y;
2.3.4.1.3, double-suffix word;
2.3.4.1.4, words with suffixes such as -ic, -ful, -less, -ative;
2.3.4.1.5, words with suffixes such as -ant, -ence in the case <c>vcvc<v> (c is a consonant and v is a vowel);
2.3.4.1.6, words ending with e in the case <c>vc<v> with more than 1 vc pair between the vowels and consonants;
2.3.4.2, create the noun objects restored to stems as noun entities, and update the noun objects in the corpus so that they are represented in stem form.
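The suffix-stripping idea of step 2.3.4 can be sketched as below. This is a greatly simplified toy in the spirit of the Porter stemmer, not the real algorithm: the six rule groups and the measure conditions (<c>vcvc<v> etc.) are collapsed into a few ordered suffix rules invented for the example:

```python
# Greatly simplified sketch of step 2.3.4: restore a word toward its stem by
# ordered suffix stripping. The full Porter algorithm classifies deformations
# into six rule groups with measure checks that are omitted here.
def naive_stem(word):
    for suffix, repl in (("sses", "ss"), ("ies", "i"), ("ing", ""),
                         ("ed", ""), ("s", "")):
        if word.endswith(suffix) and len(word) - len(suffix) >= 2:
            return word[: len(word) - len(suffix)] + repl
    return word

stems = [naive_stem(w) for w in ["apples", "doing", "played", "caresses"]]
```

In the patent's pipeline, each noun object would then be replaced by its stem form in the corpus so that different deformations of one word cluster together.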
And a third step of: combining Word2vec, performing secondary pre-grouping on the corpus, and constructing a knowledge graph by using a k-means clustering algorithm, wherein the structure of a triplet G is head, relation, and along with the difference of the head and the tail, the relation also has various relations, and the relation is actually a relation set in the knowledge graph and is used for representing complex relations among various entities. The method aims at judging whether semantic association exists between two attributes, namely whether a relation exists between two entities, and does not pay attention to the existence of the relation. And (3) performing secondary grouping on the corpus by calculating word vectors of the vocabulary of the corpus, and extracting entity relations by using a k-means clustering algorithm. The construction flow of the knowledge graph is as follows:
3.1, train word vectors with Word2vec: Word2vec is a word-vector tool that represents words as feature vectors. Word2vec converts words into numerical form, representing each word with an N-dimensional vector. For example, Word2vec is used to compute word vectors for the acquired Athletic sports corpus content, with the word-vector dimension set to 300. The higher the word-vector dimension, the richer the feature expression of a word, but the time cost of training and of invoking the model also increases.
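Actually training Word2vec requires a dedicated implementation; the toy sketch below only illustrates what step 3.1 buys the later steps, namely that words-as-vectors make semantic closeness computable. The 3-dimensional vectors are invented for illustration; a real model would produce 300-dimensional vectors from the corpus:

```python
import math

# Toy sketch of the idea behind step 3.1: once each word is an N-dimensional
# vector, similarity between words becomes a numeric computation.
# These vectors are invented, not trained embeddings.
VECTORS = {
    "football": [0.9, 0.1, 0.0],
    "soccer":   [0.8, 0.2, 0.1],
    "banana":   [0.0, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

sim_sport = cosine(VECTORS["football"], VECTORS["soccer"])
sim_fruit = cosine(VECTORS["football"], VECTORS["banana"])
```

Related words end up with a higher similarity than unrelated ones, which is what makes centroid-based grouping over word vectors meaningful in steps 3.2 and 3.3.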
3.2, pre-group the corpus twice: since k-means clustering is easily affected by the distribution of the data set, and to ensure that the core concepts, i.e., the main classification objects of the target field, become cluster centers, k-means clustering cannot be used directly; the corpus is pre-grouped twice as follows:
3.2.1, group the corpus once, with the following steps:
3.2.1.1, extract the first-layer sub-classification labels under the previously obtained target-field label, the target-field label forming the core entity, and generate the first-layer sub-classification label set Tag containing n sub-classification labels in total; each label has a corresponding entity and word vector, and these entities are connected with the core entity to form n triples.
3.2.1.2, taking the first-layer sub-classification label objects as centroids, calculate the Euclidean distance from each data point in the corpus data set to each centroid, and then assign each data point to the class of the nearest centroid. This yields n clusters, i.e., n grouped data sets, with the first-layer sub-classification labels as centroids, while the corpus is likewise divided into n corpus sets.
The Euclidean distance (Euclidean Distance) in step 3.2.1.2 is the essential basis for judging the category of a data point. Given samples x_i = (x_i1, x_i2, ..., x_in) and x_j = (x_j1, x_j2, ..., x_jn), where i, j = 1, 2, ..., m, m is the number of samples and n is the number of features, the Euclidean distance is calculated as: d(x_i, x_j) = sqrt( Σ_{k=1}^{n} (x_ik − x_jk)² )
For example, first pre-sort the entity data set of the constructed Athletic sports corpus: extract the first-layer sub-classification labels of the previously crawled Athletic sports Wikipedia corpus to form the tag set Tag = { "Association football", "Baseball", "Badminton", "Beacon juice", … }, containing 55 sub-classification labels in total; each label has a corresponding entity and a Word2vec-trained word vector, and these entities are connected with the core entity "Athletic sports" to form 55 triples. Taking the label objects as centroids, calculate the Euclidean distance from each data point in the data set to each centroid, and assign the data points to the class of the nearest centroid. At this point 55 clusters, i.e., 55 grouped data sets, with the event categories as centroids are obtained, and the corpus is likewise divided into 55 corpus sets.
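The nearest-centroid assignment of steps 3.2.1.1 and 3.2.1.2 can be sketched as follows; the 2-dimensional toy vectors and label names are invented stand-ins for the trained word vectors and sub-classification labels:

```python
import math

# Sketch of steps 3.2.1.1-3.2.1.2: assign each corpus word vector to the
# nearest first-layer sub-classification label (centroid) by Euclidean
# distance. Vectors are 2-D toy data, not 300-D trained embeddings.
def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def assign_to_centroids(points, centroids):
    """centroids: dict label -> vector; returns dict label -> list of points."""
    groups = {label: [] for label in centroids}
    for p in points:
        nearest = min(centroids, key=lambda lbl: euclidean(p, centroids[lbl]))
        groups[nearest].append(p)
    return groups

centroids = {"Baseball": (0.0, 0.0), "Badminton": (10.0, 10.0)}
groups = assign_to_centroids([(1.0, 1.0), (9.0, 9.5), (0.5, -0.2)], centroids)
```

Each resulting group is one of the n corpus sets that the second-stage TF-IDF grouping then refines.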
3.2.2, group the corpus a second time in combination with the TF-IDF algorithm, as follows:
3.2.2.1, searching out the keywords in each corpus set by calculating TF-IDF.
The TF-IDF algorithm in step 3.2.2 is a numerical statistical method for evaluating the importance of a word to a given document. Term frequency TF (Term Frequency) is the frequency with which a given word appears in a given document, calculated as:
TF_{x,y} = n_{x,y} / Σ_k n_{k,y}
where n_{x,y} is the number of times the term x appears in document y and Σ_k n_{k,y} is the total number of words in document y. The inverse document frequency IDF (Inverse Document Frequency) evaluates the amount of information a word or term provides, i.e., whether the term is common across the whole document set, and is calculated as:
IDF_x = log( N / N_x )
where N is the total number of documents and N_x is the number of documents in which the term x appears; here each piece of text serves as a document. Finally, TF and IDF are multiplied together to give the TF-IDF formula:
TF-IDF_{x,y} = TF_{x,y} × IDF_x
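The three formulas above translate directly into code; the sketch below runs them over a handful of invented token-list "documents" rather than real corpus sets:

```python
import math

# Sketch of step 3.2.2.1: TF-IDF over tiny token-list documents.
def tf(term, doc):
    return doc.count(term) / len(doc)

def idf(term, docs):
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_containing)  # assumes term occurs in >=1 doc

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

docs = [["football", "match", "team"],
        ["match", "score", "team"],
        ["football", "football", "club"]]
score = tf_idf("football", docs[2], docs)
```

Terms frequent in one document but rare across the set score highest, which is exactly what makes them keyword candidates for the second grouping.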
3.2.2.2, manually screen the keywords of each corpus set, removing keywords with low correlation to the current corpus set's core entity and keeping the most highly correlated keywords; the number of keywords kept depends on the overall quality of all the extracted keywords.
3.2.2.3, construct triples from the entities corresponding to the selected keywords extracted from each corpus set and the core entity of the current corpus set. Then, within each corpus set, take the keywords as centroids, calculate the Euclidean distance from the data points in the set to each centroid, and classify the data points. The original corpus has now been divided into many small corpus sets.
For example, keywords in each Athletic sports corpus set are found through TF-IDF calculation; in the corpus set corresponding to "Association football" these include words such as "text", "place", "match" and "team", but some frequent words have little relevance, such as "list", "finals" and "body". Therefore manual screening is needed for the keywords of each corpus set: remove keywords with low correlation to the core entity of the current corpus set and keep the most highly correlated ones. Construct triples between the entities corresponding to the screened keywords extracted from each small corpus set and the core entity of the current corpus set. Then take the keywords as centroids within each corpus set, calculate the Euclidean distance from the data points in the set to each centroid, classify the data points, and divide the set into several small corpus sets.
3.3, automatically find cluster centers for the small corpus sets through the k-means clustering algorithm, cluster them, and construct triples at the same time, as follows:
3.3.1, determining the size of k according to the size of the small corpus set, wherein the larger the set is, the larger the k value is.
3.3.2, construct a triple from the entity corresponding to each centroid obtained by k-means clustering and the entity corresponding to the centroid of the previous layer's grouping.
The k-means algorithm in step 3.3.2 is an unsupervised clustering algorithm; each word is represented by the word vector trained from the corpus by Word2vec. Each small corpus set is taken as a data set and clustered with the k-means algorithm. The steps of k-means clustering are as follows:
3.3.2.1, selecting k objects in the data set as initial centers, wherein each object represents a clustering center;
3.3.2.2, objects in the word vector sample are classified into classes corresponding to the cluster centers closest to the objects according to Euclidean distance between the objects and the cluster centers;
3.3.2.3, update cluster center: taking the average value corresponding to all objects in each category as a clustering center of the category, and calculating the value of an objective function;
3.3.2.4, judging whether the values of the clustering center and the objective function are changed, if not, outputting a result, and if so, returning to 3.3.2.2.
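Steps 3.3.2.1 to 3.3.2.4 can be sketched as the loop below, here on 1-dimensional toy data; taking the first k points as initial centers is a simple deterministic choice made for the example, since the patent does not fix an initialization scheme:

```python
# Minimal sketch of the k-means loop in steps 3.3.2.1-3.3.2.4 on 1-D toy
# data. Real input would be Word2vec word vectors; distance is |a - b| here.
def kmeans(points, k, iters=100):
    centers = points[:k]                      # 3.3.2.1: initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # 3.3.2.2: nearest-center assignment
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        new_centers = [sum(c) / len(c) if c else centers[i]   # 3.3.2.3: means
                       for i, c in enumerate(clusters)]
        if new_centers == centers:            # 3.3.2.4: centers unchanged -> stop
            break
        centers = new_centers
    return centers, clusters

centers, clusters = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], 2)
```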
3.3.3, call the k-means algorithm again with each new group as a data set, repeating steps 3.3.1-3.3.3 until every group contains fewer elements than a threshold Z.
And 3.3.4, constructing a triplet by the entity corresponding to the data point in each group and the entity corresponding to the current centroid.
All entities in the corpus are now associated with other entities, and the triples they form combine into a knowledge graph. Because the cluster centers and cluster assignments found by automatic clustering may produce entity relations with weak correlation, the knowledge graph must be manually checked and screened after construction, removing entity relations with low correlation to improve the quality of the knowledge graph.
For example, the original Athletic sports corpus has by now been divided into many small corpus sets; cluster centers are then found automatically through the k-means clustering algorithm for clustering, and triples are constructed at the same time. The value of k is determined by the size of the corpus set, with larger sets given larger k. A triple is constructed from the entity corresponding to each calculated centroid and the entity corresponding to the centroid of the previous layer's grouping. The k-means algorithm is then called again with each new group as a data set, repeating the above operation until every group contains fewer than 10 elements (here the threshold Z = 10). Finally, a triple is constructed from the entity corresponding to each data point in a group and the entity corresponding to the current centroid. At this point all entities in the Athletic sports corpus are associated with other entities, and the triples they form combine into a knowledge graph. However, the centroids and cluster assignments found by automatic clustering may produce weakly correlated entity associations, so manual checking and screening are finally needed to remove associations with extremely low correlation.
Fourth, referring to fig. 2, construct the visual model tree (VT): classify the various visualization graphics, summarize the attributes and structural characteristics of each kind of graphic, and formally express the graphic information by creating a visual model tree (VT), as follows:
4.1, define VT as comprising a basic attribute (BASICATTRIBUTE) and a visual structure (DVSCHEMA), with the formal definition shown in (1); the basic attribute stores general information such as the graphic title, subtitle and other text styles;
(1)、VT::=<BASICATTRIBUTE><DVSCHEMA>
4.2, BASICATTRIBUTE comprises three attributes: the title (title) stores the title of the finally generated visual graphic; the subtitle (subtitle) stores the subtitle of the finally generated visual graphic; and the attributes (attributes) store parameters such as the position, color combination and font-size settings of the finally generated visual graphic;
(2)、BASICATTRIBUTE::=<title><subtitle><attributes>
4.3, DVSCHEMA categorizes common visualization graphics into four basic categories according to the data types, graphic data structures and graphic dimensions required by the graphics: general graphics (General), topology (Topology), map (Map) and text graphics (Text), with the formal definition shown in (3);
(3)、DVSCHEMA::=<General><Topology><Map><Text>
4.4, the four basic categories in step 4.3 each contain two attributes: the graphic type (VType) and the graphic structure (StructModel). VType stores the graphic types belonging to the category, and StructModel stores the basic visual structure of the category's graphics; the formal definition is shown in (4), where A::B indicates that A contains the attribute B;
(4)、DVSCHEMA::=<General><Topology><Map><Text>::<VType><StructModel>
In step 4.4, the graphics attached to the VType attribute of the four basic categories are as follows:
4.4.1, general includes bar graph (barChart), line graph (LineChart), pie graph (PieChart), radar graph (RadarChart), scatter graph (ScaterChart);
4.4.2, the Topology includes network map (netchart), tree map (TreeMap), area tree map (TreeMapChart);
4.4.3, maps include regional Map (AreaMapChart), national Map (CountryMapChart), world Map (WorldMapChart);
4.4.4, text includes word cloud (WorldCludChart);
4.5, each of the four basic categories in step 4.4 has its own mapping relation (Mapping), describing the data structure, data dimension, graphic structural relations and data-mapping position information of its graphics; according to the Mapping information and the data structure of a graphic, the basic visual structure StructModel of each kind of graphic can be abstracted.
In step 4.5, the mapping relation Mapping and the basic visual structure StructModel of each kind of graphic are defined as follows:
4.5.1, graphics in the General type are commonly used to represent two-dimensional or three-dimensional data; the information can be represented by a binary group (XAxis, YAxis) or a triple (XAxis, YAxis, ZAxis). The Mapping structure of such graphics is shown in (5), where LegendName represents the legend name and each group of information is stored in ARRAY type. From this Mapping structure, the basic StructModel can be abstracted as (6): the StructModel has a temporary root node Root, which contains two child nodes: the key-value pair K_V and the legend node LegendNode;
(5)、Mapping::=<XAxis,YAxis,[ZAxis]><LegendName>
(6)、StructModel::=<Root::<K_V><LegendNode>>
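As a hedged illustration, Mapping (5) and StructModel (6) for the General category could be rendered as plain data like the sketch below; the field layout mirrors the grammar, while the legend names and values are invented:

```python
# Illustrative rendering of Mapping (5)/StructModel (6): each legend name
# maps to an ARRAY of (XAxis, YAxis[, ZAxis]) tuples. Values are invented.
general_data = {
    "2019": [("Jan", 12), ("Feb", 15), ("Mar", 9)],
    "2020": [("Jan", 14), ("Feb", 11), ("Mar", 16)],
}

def matches_general_struct(data):
    """Check the Root::<K_V><LegendNode> shape: legend -> array of axis tuples."""
    return (isinstance(data, dict) and len(data) > 0 and
            all(isinstance(series, list) and
                all(isinstance(pt, tuple) and len(pt) in (2, 3) for pt in series)
                for series in data.values()))

ok = matches_general_struct(general_data)
```

A structural check of this kind is what the fifth step performs when matching M-JSON against each StructModel.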
4.5.2, graphics in the Topology type are typically used to represent topological relationship data. The tree map and the area tree map can represent hierarchical structures with nested key-value pairs { key: value, children: { key: value } }, whose Mapping structure is shown in (7); the network map uses a node set (Nodes) and an edge set (Links) to represent a graph structure, with the Mapping structure shown in (8), where source denotes the start node of an edge and target denotes the node the edge points to. From these Mapping structures, the basic StructModel can be abstracted as (9). The StructModel has two substructures, whose temporary root nodes are Root1 and Root2 respectively. Root1 contains two child nodes: the key-value pair K_V and the child node children, whose substructure is again the key-value pair K_V; Root2 contains two child nodes: the node set Nodes and the edge set Links, where a node's child nodes are a key and a value (the value may be null) and an edge's child nodes are a start point source and a target;
(7)、Mapping::=<K_V><children::<K_V>>
(8)、Mapping::=<Nodes::<key,[value]><Links::<source><target>>
(9)、StructModel::=<Root1::<K_V><children::<K_V>>><Root2::<Nodes::<key,[value]>,<Links::<source><target>>>
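A hedged data-level illustration of Mapping (7) and (8): a nested key/value/children tree for tree and area-tree charts, and node/edge sets for network charts. All concrete names and numbers below are invented:

```python
# Illustrative renderings of Mapping (7) and (8) as plain data structures.
tree_mapping = {            # (7): nested key-value pairs with children
    "key": "Athletic sports", "value": 100,
    "children": [
        {"key": "Baseball", "value": 40},
        {"key": "Badminton", "value": 60},
    ],
}

network_mapping = {         # (8): node set and edge set
    "Nodes": [{"key": "A", "value": 3}, {"key": "B", "value": None}],  # value may be null
    "Links": [{"source": "A", "target": "B"}],  # edge from start node to pointed-to node
}

edge_ends = {(l["source"], l["target"]) for l in network_mapping["Links"]}
```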
4.5.3, graphics in the Map type are typically used to represent map information, with key-value pair arrays [ { PlaceName: value } ] or triple arrays [ { lng, lat, value } ]. The Mapping structure of such graphics is shown in (10), where PlaceName represents a place name, lng represents longitude, and lat represents latitude. From this Mapping structure, the basic StructModel can be abstracted as (11). The StructModel has two substructures, whose temporary root nodes are Root1 and Root2 respectively; Root1 contains a single child node, the key-value pair K_V, and Root2 contains three child nodes: longitude lng, latitude lat, and value;
(10)、Mapping::=<Data1::<PlaceName><value>><Data2::<lng><lat><value>>
(11)、StructModel::=<Root1::<K_V>>,<Root2::<lng>,<lat>,<value>>
4.5.4, graphics in the Text type commonly use a binary group (Keyword, frequency) to represent keyword frequency. The Mapping structure of such graphics is shown in (12), where Keyword is a word extracted from a text and frequency represents how often the word occurs in that text. From this Mapping structure, the basic StructModel can be abstracted as (13): the StructModel has a single temporary root node Root, which contains the key-value pair K_V;
(12)、Mapping::=<Keyword><frequency>
(13)、StructModel::=<Root::<K_V>>
fifthly, the data-visualization optimizing and matching method based on the network corpus knowledge graph: define M-JSON as the prototype structure of the JSON returned by a REST Web service; match the Web data prototype structure M-JSON against each StructModel in the visual model tree VT according to the data structure, returning a set of candidate coordinate-axis/legend attribute combinations that satisfy the conditions; on the basis of structure matching, use the knowledge graph constructed in the third step to query whether the matched candidate coordinate-axis/legend attribute combinations have actual semantic associations, optimize the matching according to the query results, and select effective dimension combinations to improve the accuracy (Precision) of automatic graphic generation, as follows:
5.1, match the Web data prototype structure M-JSON against the StructModel of the visual model tree VT according to the data structure, obtaining m candidate coordinate-axis/legend attribute combinations in M-JSON that satisfy the conditions; each combination result is expressed as a binary group consisting of a key-value pair L and an attribute name A, where L and A correspond to LegendNode and K_V in step 4.5.1, respectively.
5.2, performing matching optimization on the m qualified attribute combinations in combination with the constructed network corpus knowledge graph, the process being as follows:
5.2.1, each matching result from step 5.1 is represented in the form of a binary group: P = (L::name, A::name). Each matching result P_i = (L_i::name, A_i::name) is converted into the triple form G_i = (L_i::name, R, A_i::name) and put into the set S = {G_1, G_2, ..., G_m}.
5.2.2, for each G_i in the set S in turn, its three parameters are mapped onto the knowledge-graph triple structure as F(L_i::name → head, R → relation, A_i::name → tail), yielding the triple (head, relation, tail). Whether the current triple (head, relation, tail) exists in the constructed corpus knowledge graph is then queried; the result is True or False, expressed as 1 and 0 respectively. First, the head entity node head and the tail entity node tail are matched in the corpus knowledge graph, and then the relation between the head and tail entity nodes is matched. If and only if the head entity head, the tail entity tail and the relation all match successfully, result is 1; otherwise, result is 0.
5.2.3, after the queries over the set S are completed, the set Q = {(G_i, result_i)} is returned; Q is used to judge whether each currently qualified binary group has a semantic association, serving as the judgment of the candidate coordinate-axis/legend attribute-combination matching results. A match is judged successful only if the structure matches and result_i is 1. This improves the accuracy of data attribute matching and reduces the rate at which graphs without practical meaning are generated.
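The query-and-filter loop of steps 5.2.1-5.2.3 can be sketched in a few lines, assuming the knowledge graph is held as an in-memory set of (head, relation, tail) triples; the function name, sample graph and sample matches below are illustrative, not part of the method.

```python
# Sketch of steps 5.2.1-5.2.3: structure-matched candidate pairs
# P_i = (L::name, A::name) are lifted to triples and checked against
# the corpus knowledge graph; only pairs with result == 1 survive.

def optimize_matches(candidates, kg_triples):
    """candidates: list of (legend_name, attr_name) pairs from structure matching.
    kg_triples: set of (head, relation, tail) triples of the knowledge graph.
    Returns only the pairs whose head/tail entities are semantically associated."""
    # The method only asks whether SOME relation exists between head and
    # tail, so the relation element is dropped when building the lookup set.
    related = {(h, t) for h, _, t in kg_triples}
    results = []
    for legend, attr in candidates:                       # 5.2.1: P_i -> G_i
        result = 1 if (legend, attr) in related else 0    # 5.2.2: query KG
        results.append(((legend, attr), result))
    return [pair for pair, r in results if r == 1]        # 5.2.3: filter

kg = {("city", "hasProperty", "population"), ("year", "relatedTo", "sales")}
matches = [("city", "population"), ("city", "color")]
print(optimize_matches(matches, kg))  # [('city', 'population')]
```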

Claims (10)

1. A knowledge graph-based Web data optimization method facing visual requirements is characterized by comprising the following steps:
firstly, constructing a target field corpus: taking network corpus content as the basis for constructing the knowledge graph and using network entry information as the original corpus content; screening the original network corpus content used for constructing the knowledge graph by comparing and analyzing the Web page content of network entries, which, besides title and body text, contains redundant information such as HTML tags, entry editing information and Web page link information irrelevant to the entry; filtering and cleaning the network entry content to extract the title and the effective body text, wherein the filtering comprises: performing HTML tag/text style symbol filtering, entry template symbol and non-English character filtering, entry editing information filtering, picture information filtering, link information filtering, page-specific title attribute name filtering and numeral filtering on the entry's Web page content;
Secondly, entity extraction facing to a corpus: the knowledge graph is a data information network with a graph structure formed by entities and relations, the basic structure of the knowledge graph is represented by a triplet of 'entity-relation-entity', the triplet comprises two entities with real semantic relations and the relation between the two entities, the relation is represented by a form of G= (head, relation, tail), wherein G represents the triplet, head represents the head entity, tail represents the tail entity, and relation represents the relation between the head entity and the tail entity; each entity also comprises an attribute and an attribute value, the attribute of the entity is also converted into a tail entity connected with the entity, and a relationship is established between the entity and the tail entity, and the entity extraction is divided into three stages of named entity extraction, attribute entity extraction and noun entity extraction;
and a third step of: constructing the knowledge graph by combining Word2vec, two-stage pre-grouping of the corpus and a k-means clustering algorithm; in the triple structure G = (head, relation, tail), relation varies with the head and tail, and the relations form the relation set of the knowledge graph, used to represent the complex relations among entities; the aim here is to judge whether a semantic relation exists between two attributes, i.e. whether a relation exists between two entities, without concern for which relation it is; the corpus is therefore grouped twice by calculating word vectors of the corpus vocabulary, and entity relations are extracted with the k-means clustering algorithm;
Fourthly, constructing a visual model tree VT: classifying various visual graphics, summarizing the attribute and the structural characteristics of the various graphics, and formally expressing various graphic information by creating a visual model tree VT;
fifthly, a data visualization optimizing and matching method based on a network corpus knowledge graph comprises the following steps: defining M-JSON as a prototype structure of JSON returned by REST Web service; matching each structModel in the Web data prototype structure M-JSON and the visual model tree VT according to the data structure, and returning a result which is a set formed by attribute combinations of candidate coordinate axes/legends meeting the conditions; based on structure matching, inquiring whether the attribute combination of the matched candidate coordinate axes/legends has actual semantic association or not by utilizing the knowledge graph constructed in the third step, optimizing matching according to the inquiry result, and selecting an effective dimension combination so as to improve the accuracy rate of automatically generating the graph.
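A minimal sketch of the corpus-cleaning operations named in the first step of the claim, using regular expressions; the method's actual filter set is broader (template symbols, picture information, page-specific title attributes), and the patterns below are assumptions for illustration only.

```python
import re

# Toy version of the first-step cleaning: strip HTML tags, entry edit
# marks/templates, link information, then non-English characters and
# numerals, and finally normalize whitespace.

def clean_entry(html: str) -> str:
    text = re.sub(r"<[^>]+>", " ", html)                 # HTML tags
    text = re.sub(r"\[\d+\]|\{\{[^}]*\}\}", " ", text)   # edit marks / templates
    text = re.sub(r"https?://\S+", " ", text)            # link information
    text = re.sub(r"[^A-Za-z\s]", " ", text)             # non-English chars, numerals
    return re.sub(r"\s+", " ", text).strip()             # normalize whitespace

print(clean_entry("<p>See [3] the map: https://example.com now!</p>"))
# See the map now
```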
2. The visual demand-oriented knowledge graph-based Web data optimization method according to claim 1, wherein in the second step, entity extraction is divided into three stages of named entity extraction, attribute entity extraction and noun entity extraction;
2.1, entity extraction: entity extraction, also known as named entity recognition, automatically recognizes named entities from a text data set; named entities refer to the names of people, places, institutions and all other entities identified by a name; a mainstream named entity recognition system is used, with the following steps: 1. performing named entity recognition on the corpus content with the tool; 2. labeling each recognized named entity with its type attribute; 3. filtering the named entities by type attribute, deleting unsuitable named entities and retaining the labels of the others; entry names are defined as named entities by default;
2.2, extracting attribute entities: extracting attributes by taking the information frames of network entries as the attribute source; intercepting the information frame of each entry in the corpus, extracting attribute names according to the information frame structure, and taking them as tail entities of the named entity corresponding to the entry name, with attribute values not retained; if an entry has no information frame, no tail entities need to be created for its named entity;
2.3, noun entity extraction, comprising four steps: word splitting (Split), part-of-speech tagging (POS Tagging), stop word filtering (Stop Word Filtering) and stem extraction (Stemming); the recognized named entities were already tagged in the named entity extraction step, so the following operations only process the corpus content outside the tagged entities.
3. The visual demand-oriented knowledge-graph-based Web data optimization method of claim 2, wherein the 2.3 process is as follows:
2.3.1, word splitting: using a regular expression design splitting rule mode, and carrying out word splitting on the corpus content according to spaces, symbols and paragraphs to obtain word texts;
2.3.2, part-of-speech tagging: to obtain the nouns in the corpus, the text vocabulary needs part-of-speech tagging; part-of-speech tagging, also called grammatical tagging or part-of-speech disambiguation, is a text data processing technology that marks the part of speech of each word in the corpus according to its meaning and context; many words carry several parts of speech and several meanings at once, and the correct part of speech depends on the context; the corpus already marked with named entities is used as the tagging object, part-of-speech tagging is performed, noun objects are located from the tagging results, and non-noun objects are removed from the corpus, excluding the tagged named entities; at this point the corpus retains the named entities, the noun objects and the original punctuation of each sentence, and all content keeps its original text order;
2.3.3, stop word filtering: a stop word (Stop Word) is a word automatically filtered out when processing natural language text, in order to save storage space and improve search efficiency in information retrieval; for a given purpose, any kind of word can be chosen as a stop word, and stop words fall into two categories: one is functional words contained in human language, which are extremely common, occur with very high frequency and carry no exact concrete meaning; the other is content words (Content Words) that have concrete meaning but no clear reference or pointing; in natural language processing there exists a stop word list (Stop Word List); using it as a reference dictionary, stop words are deleted from the corpus by word comparison, further condensing the corpus content and ensuring that no stop words remain;
2.3.4, stem extraction: stemming is the process of removing morphological affixes to obtain the corresponding root, a processing step specific to Western languages; the same English word has singular/plural inflection, tense inflection and inflection agreeing with different pronouns; such words differ slightly in form but share the same root, and when computing relevance they are treated as the same word, which requires stemming; the Porter stemming algorithm (Porter Stemming Algorithm) is a mainstream stem extraction algorithm whose core idea is to classify, process and restore words according to the type of morphological affix; apart from some special inflections, most word inflection is regular and is divided into 6 categories according to these rules.
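Steps 2.3.1-2.3.3 can be illustrated with a toy pipeline; a real system would use a trained part-of-speech tagger and a full stop-word list, so the `TOY_POS` dictionary and `STOP_WORDS` set here are stand-in assumptions, not part of the method.

```python
import re

# Toy sketch of steps 2.3.1-2.3.3: regex word splitting, a stand-in
# part-of-speech lookup, and stop-word filtering against a tiny
# reference list. All word lists below are illustrative assumptions.

STOP_WORDS = {"the", "of", "in", "is", "a", "an", "and"}          # functional words
TOY_POS = {"graph": "NN", "data": "NN", "visual": "JJ", "web": "NN"}

def extract_noun_candidates(text: str):
    words = re.split(r"[\s,.;:!?]+", text.lower())        # 2.3.1 word splitting
    words = [w for w in words if w]
    nouns = [w for w in words if TOY_POS.get(w) == "NN"]  # 2.3.2 keep nouns only
    return [w for w in nouns if w not in STOP_WORDS]      # 2.3.3 stop-word filter

print(extract_noun_candidates("The graph of Web data is visual."))
# ['graph', 'web', 'data']
```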
4. The visual demand-oriented knowledge-graph-based Web data optimization method of claim 3, wherein in 2.3.4, the step of extracting the stem word is as follows:
2.3.4.1, performing affix removal and word recovery according to word deformation categories, and obtaining stem information of noun objects in a corpus to reduce the situations of different shapes of the same word, wherein 6 different word deformations are as follows:
2.3.4.1.1, words in the plural or ending in -ed or -ing;
2.3.4.1.2, words containing a vowel and ending in y;
2.3.4.1.3, double-suffix word;
2.3.4.1.4 words with-ic, -ful, -less, -active as suffixes;
2.3.4.1.5, words with suffixes such as -ant in the case <c>vcvc<v>, where c is a consonant and v is a vowel;
2.3.4.1.6, words ending in e, in the case <c>vc<v> where more than one vc pair occurs between the vowels and consonants;
2.3.4.2, creating noun objects restored to stems as noun entities, and updating the noun objects in a corpus, and representing the noun objects in a stem form.
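A drastically simplified stemmer in the spirit of these suffix classes; the real Porter algorithm applies measure-based conditions and consonant-doubling repair, which this sketch omits (note that `running` stays `runn` here, where full Porter yields `run`).

```python
# Minimal rule-based stemmer echoing a few of the 6 suffix classes
# (plural/-ed/-ing endings, -ies endings). Purely illustrative; not the
# full Porter Stemming Algorithm referenced in the claim.

def simple_stem(word: str) -> str:
    for suffix, repl in [("sses", "ss"), ("ies", "i"),
                         ("ing", ""), ("ed", ""), ("s", "")]:
        # Require at least 2 leading characters so short words survive.
        if word.endswith(suffix) and len(word) - len(suffix) >= 2:
            return word[: len(word) - len(suffix)] + repl
    return word

for w in ["caresses", "ponies", "running", "maps"]:
    print(w, "->", simple_stem(w))
# caresses -> caress, ponies -> poni, running -> runn, maps -> map
```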
5. The visual demand-oriented Web data optimization method based on knowledge graph according to one of claims 1 to 4, wherein in the third step, the knowledge graph construction flow is as follows:
3.1, using Word2vec to train word vectors: Word2vec is a word vector tool that represents words as feature vectors, converting words into numerical form as N-dimensional vectors;
3.2, pre-grouping the corpus twice: because k-means clustering is easily affected by the distribution of the data set, k-means clustering cannot be used directly if the core concepts, i.e. the main classification objects of the target field, are to become cluster centers; the corpus is therefore pre-grouped twice;
3.3, automatically searching a clustering center for the small corpus set through a k-means clustering algorithm, clustering, and constructing triples at the same time, wherein the method comprises the following steps:
3.3.1, determining the size of k according to the size of the small corpus set, wherein the larger the set is, the larger the k value is;
3.3.2, constructing a triplet by an entity corresponding to the mass center obtained through k-means clustering calculation and an entity corresponding to the mass center in the last layer of grouping;
the k-means algorithm in step 3.3.2 is an unsupervised clustering algorithm; each word is represented by the word vector trained with Word2vec over the corpus, each small corpus set is used as a data set, and clustering is computed with the k-means algorithm, whose steps are as follows:
3.3.2.1, selecting k objects in the data set as initial centers, wherein each object represents a clustering center;
3.3.2.2, objects in the word vector sample are classified into classes corresponding to the cluster centers closest to the objects according to Euclidean distance between the objects and the cluster centers;
3.3.2.3, update cluster center: taking the average value corresponding to all objects in each category as a clustering center of the category, and calculating the value of an objective function;
3.3.2.4, judging whether the values of the clustering center and the objective function are changed, if not, outputting a result, and if so, returning to 3.3.2.2;
3.3.3, taking the new groups as data sets, calling the k-means algorithm again, and repeating steps 3.3.1-3.3.3 until every group contains fewer elements than a threshold Z;
3.3.4, constructing a triplet between the entity corresponding to the data point in each group and the entity corresponding to the current centroid;
all entities in the corpus are thus related to other entities, and the triples they form combine into the knowledge graph; because cluster centers and cluster assignments are found automatically, weakly correlated entity relations can arise, so after the knowledge graph is built, manual checking and screening are needed to remove low-relevance entity associations and improve the quality of the knowledge graph.
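Steps 3.3.2.1-3.3.2.4 amount to standard Lloyd-style k-means; the sketch below runs it on toy 2-D "word vectors", taking the first k points as initial centers (the claim does not fix an initialization rule, so that choice is an assumption).

```python
import numpy as np

# Bare-bones k-means per steps 3.3.2.1-3.3.2.4: choose k initial centers,
# assign points by Euclidean distance, recompute means, stop when the
# centers no longer move.

def kmeans(X, k, iters=100):
    centers = X[:k].copy()                                   # 3.3.2.1 initial centers
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                            # 3.3.2.2 nearest center
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):                        # 3.3.2.4 converged?
            break
        centers = new                                        # 3.3.2.3 update centers
    return labels, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels, centers = kmeans(X, 2)
print(labels)  # [0 0 1 1]
```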
6. The visualization-demand-oriented knowledge-graph-based Web data optimization method of claim 5, wherein the step of 3.2 is as follows:
3.2.1, grouping the language database once, wherein the steps are as follows:
3.2.1.1, extracting the first-layer sub-classification labels under the previously obtained target field label, the target field label forming the core entity; generating the first-layer sub-classification label set Tag, containing n sub-classification labels in total, each label having a corresponding entity and word vector; these entities are connected with the core entity to form n triples;
3.2.1.2, taking the first-layer sub-classification label objects as centroids, calculating the Euclidean distance from each data point in the corpus data set to each centroid, and assigning each data point to the class of the nearest centroid, obtaining n clusters, i.e. n grouped data sets; with the first-layer sub-classification labels as centroids, the corpus is thus divided into n corpus sets;
wherein the Euclidean distance (Euclidean Distance) in step 3.2.1.2 is the basis for assigning a data point to a category; given samples x_i' = (x_i'1, x_i'2, ..., x_i'n') and x_j = (x_j1, x_j2, ..., x_jn'), where i', j = 1, 2, ..., m, m is the number of samples and n' is the number of features, the Euclidean distance is calculated as:
d(x_i', x_j) = sqrt( Σ_{k=1}^{n'} (x_i'k − x_jk)² )
3.2.2, combining TF-IDF algorithm, grouping the corpus secondarily, the steps are as follows:
3.2.2.1, searching out keywords in each corpus set by calculating TF-IDF;
the TF-IDF algorithm in step 3.2.2 is a numerical statistical method for evaluating the importance of a word to a given document; term frequency TF (Term Frequency) is the frequency with which a given word occurs in a given document, calculated as:
TF_{x,y} = n_{x,y} / Σ_k n_{k,y}
where n_{x,y} is the number of times the term x appears in document y and Σ_k n_{k,y} is the total number of words in document y; inverse document frequency IDF (Inverse Document Frequency) evaluates the amount of information a word or term provides, i.e. whether the term is common across the whole document set, calculated as:
IDF_x = log( N / N_x )
where N is the total number of documents and N_x is the number of documents in which the term x appears; each entry text is treated as a document; finally the values of TF and IDF are combined, giving TF-IDF as:
(TF-IDF)_{x,y} = TF_{x,y} × IDF_x
3.2.2.2, manually screening the keywords of each corpus set, removing keywords weakly correlated with the current corpus set's core entity and retaining the most strongly correlated ones; the number of keywords retained depends on the overall quality of all extracted keywords;
3.2.2.3, constructing triples between the entities corresponding to the selected keywords of each corpus set and the current corpus set's core entity; then, with the keywords as centroids within each corpus set, recalculating the Euclidean distance from the data points to each centroid and classifying the data points; at this point the original corpus is divided into many small corpus sets.
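The TF-IDF formulas of step 3.2.2 transcribe directly into code; no smoothing is applied, matching the formulas as stated (a production system usually adds smoothing to the IDF denominator). The sample documents are illustrative.

```python
import math

# Direct transcription of the claim's formulas: TF is the term's share
# of the words in one document; IDF = log(N / N_x) over the document set.

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)                        # TF_{x,y}
    n_docs_with_term = sum(1 for d in docs if term in d)   # N_x
    idf = math.log(len(docs) / n_docs_with_term)           # IDF_x
    return tf * idf                                        # (TF-IDF)_{x,y}

docs = [["web", "data", "graph"], ["graph", "knowledge"], ["web", "service"]]
print(round(tf_idf("data", docs[0], docs), 4))  # "data" is in 1 of 3 docs
```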
7. The knowledge-graph-based Web data optimization method for visual requirements according to one of claims 1 to 4, wherein the fourth step comprises the following steps:
4.1, defining VT to comprise the basic attribute BASICATTRIBUTE and the visual structure DVSCHEMA, formalized as (1), wherein BASICATTRIBUTE stores general information such as graph titles, subtitles and other text styles;
(1)、VT::=<BASICATTRIBUTE><DVSCHEMA>
4.2, BASICATTRIBUTE comprises three attributes: title, subtitle and attributes, formalized as (2); title saves the title of the finally generated visual graph, subtitle saves its subtitle, and attributes saves the position, color combination, font and font-size setting parameters of the finally generated visual graph;
(2)、BASICATTRIBUTE::=<title><subtitle><attributes>
4.3, DVSCHEMA classifies common visualized graphics into four basic categories according to the data type, graphic data structure and graphic dimension required by the graphics: General graphics, Topology, Map and Text graphics, formalized as (3);
(3)、DVSCHEMA::=<General><Topology><Map><Text>
4.4, the four basic categories in step 4.3 each comprise two attributes: the graphic type VType and the graphic structure StructModel; VType stores the graph types belonging to the category, StructModel stores the basic visual structure of the category's graphs, formalized as (4), wherein A::B denotes that A contains attribute B;
(4)、DVSCHEMA::=<General><Topology><Map><Text>::<VType><StructModel>。
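The grammar (1)-(4) can be mirrored as a nested dictionary to make the shape concrete; all leaf values below are placeholders of my own choosing, not prescribed by the claim.

```python
# The VT definitions (1)-(4) rendered as a nested Python dict. Key names
# mirror the formal grammar; the concrete values are placeholders.

VT = {
    "BASICATTRIBUTE": {              # (2): title / subtitle / attributes
        "title": "Population by city",
        "subtitle": "2019 sample",
        "attributes": {"position": "top", "font": "sans-serif"},
    },
    "DVSCHEMA": {                    # (3)/(4): four base categories,
        cat: {"VType": [], "StructModel": None}   # each with VType + StructModel
        for cat in ("General", "Topology", "Map", "Text")
    },
}
VT["DVSCHEMA"]["General"]["VType"] = ["BarChart", "LineChart", "PieChart"]

print(sorted(VT["DVSCHEMA"]))  # ['General', 'Map', 'Text', 'Topology']
```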
8. The visualization-demand-oriented knowledge-graph-based Web data optimization method of claim 7, wherein the four basic categories of VType attribute-based graphics in 4.4 are as follows:
4.4.1, general includes bar chart barChart, line chart LineChart, pie chart PieChart, radar chart radarChart, scatter chart ScatterChart;
4.4.2, the Topology comprises a network chart, a tree map, and an area tree map;
4.4.3, map including regional Map AreaMapChart, national Map CountryMapChart, world Map WorldMapChart;
4.4.4, text includes the word cloud WorldCloudchart;
4.5, the four basic categories in step 4.4 each have their own Mapping relation Mapping, describing the data structure, data dimension, graph structure relation and data mapping position information of each kind of graph; from the Mapping information and the graph's data structure, the basic visual structure StructModel of each kind of graph can be abstracted.
9. The visual demand-oriented knowledge graph-based Web data optimization method of claim 7, wherein in 4.5, mapping relationships of various graphics and basic visual structure StructModel are defined as follows:
4.5.1, graphs in the General type represent two-dimensional or three-dimensional data; the information may be represented by a binary group (XAxis, YAxis) or a triple (XAxis, YAxis, ZAxis), with the Mapping structure of such graphs as (5), wherein LegendName represents the legend name and each group of information is stored in ARRAY type; according to the Mapping structure, the basic StructModel can be abstracted as (6): the child node of the StructModel is a temporary Root node, and Root contains two child nodes: a key-value pair K_V and a legend node LegendNode;
(5)、Mapping::=<XAxis,YAxis,[ZAxis]><LegendName>
(6)、StructModel::=<Root::<K_V><LegendNode>>
4.5.2, graphs in the Topology type represent topological relation data; the tree diagram and area tree diagram can represent hierarchical structures with nested key-value pairs { key: value, children: { key: value } }, with the Mapping structure as (7); the network graph can represent a graph structure with a node set Nodes and an edge set Links, with the Mapping structure as (8), wherein source represents the starting node of an edge link and target represents the node the edge points to; according to the Mapping structure, the basic StructModel can be abstracted as (9); the StructModel has two substructures, Root1 and Root2 being the temporary root nodes of the two substructures; Root1 contains two child nodes: the key-value pair K_V and the children node whose substructure is a key-value pair K_V; Root2 contains two child nodes: the node set Nodes and the edge set Links, wherein the child nodes of the node set are the key and the value (the value may be null), and the child nodes of the edge set are the start point source and the target;
(7)、Mapping::=<K_V><children::<K_V>>
(8)、Mapping::=<Nodes::<key,[value]><Links::<source><target>>
(9)、StructModel::=<Root1::<K_V><children::<K_V>>><Root2::<Nodes::<key,[value]>,<Links::<source><target>>>
4.5.3, Map type graphs represent map information; map information is represented by key-value pairs array[{PlaceName: value}] or by a triple array[{lng, lat, value}], wherein PlaceName represents a place name, lng represents longitude and lat represents latitude; the Mapping structure of such graphs is as (10); according to the Mapping structure, the basic StructModel can be abstracted as (11); the StructModel has two substructures, Root1 and Root2 being the temporary root nodes of the two substructures; Root1 contains one child node, the key-value pair K_V; Root2 contains three child nodes: longitude lng, latitude lat and value;
(10)、Mapping::=<Data1::<PlaceName><value>>,<Data2::<lng><lat><value>>
(11)、StructModel::=<Root1::<K_V>>,<Root2::<lng>,<lat>,<value>>
4.5.4, graphs in the Text type use a binary group (Keyword, frequency) to represent keyword frequency; the Mapping structure of such graphs is as shown in (12), wherein Keyword is a word extracted from a text and frequency represents the number of times the word occurs in the text; according to the Mapping structure, the basic StructModel can be abstracted as in (13): the child node of the StructModel is a temporary Root node, and Root contains a key-value pair K_V;
(12)、Mapping::=<Keyword><frequency>
(13)、StructModel::=<Root::<K_V>>。
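To connect the StructModel of 4.5.1 to step 5.1's structure matching, one illustrative heuristic pairs each string-valued field (a LegendNode candidate) with each numeric field (a K_V candidate); inferring roles from value types is an assumption of this sketch, not something the claims spell out.

```python
# Illustrative structure match against the General-type StructModel
# (Root :: <K_V><LegendNode>): pair every legend-like field with every
# numeric field as a candidate coordinate-axis/legend combination.

def candidate_axes(record: dict):
    legends = [k for k, v in record.items() if isinstance(v, str)]
    values = [k for k, v in record.items() if isinstance(v, (int, float))]
    # Each (LegendNode, K_V) pair is one candidate binary group for step 5.1.
    return [(lg, val) for lg in legends for val in values]

sample = {"city": "Hangzhou", "population": 10360000, "area": 16850}
print(candidate_axes(sample))
# [('city', 'population'), ('city', 'area')]
```

The resulting pairs would then be handed to the knowledge-graph filter of step 5.2 to discard combinations with no semantic association.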
10. the visual demand-oriented knowledge-graph-based Web data optimization method of claim 9, wherein the fifth step comprises the steps of:
5.1, matching the Web data prototype structure M-JSON against the StructModel of the visual model tree VT according to data structure, obtaining m qualified candidate coordinate-axis/legend attribute combinations in M-JSON, wherein each combination is expressed as a binary group consisting of a key-value pair L and an attribute name A, and L and A correspond to LegendNode and K_V in step 4.5.1 respectively;
and 5.2, carrying out a matching optimization process on m attribute combinations meeting the conditions by combining the constructed network corpus knowledge graph, wherein the matching optimization process is as follows:
5.2.1, each matching result in step 5.1 is represented in the form of a binary group: P = (L::name, A::name); each matching result P_i = (L_i::name, A_i::name) is converted into the triple form G_i = (L_i::name, R, A_i::name) and put into the set S = {G_1, G_2, …, G_m};
5.2.2, sequentially mapping the three parameters of each G_i in the set S onto the knowledge-graph triple structure as F(L_i::name → head, R → relation, A_i::name → tail), obtaining the triple (head, relation, tail), and querying whether the current triple (head, relation, tail) exists in the constructed corpus knowledge graph, result being True or False, expressed as 1 and 0 respectively; first matching the head entity node head and tail entity node tail in the corpus knowledge graph, then matching the relation between the head and tail entity nodes; if and only if the head entity head, the tail entity tail and the relation all match successfully, result is 1;
5.2.3, after the queries over the set S are completed, returning the set Q = {(G_i, result_i)}, Q being used to judge whether each currently qualified binary group has a semantic association, as the judgment of the candidate coordinate-axis/legend attribute-combination matching results, so that a match is judged successful only when the structure matches and result_i is 1, thereby improving the accuracy of data attribute matching and reducing the rate of generating graphs without practical meaning.
CN201911254814.7A 2019-12-10 2019-12-10 Knowledge graph-based Web data optimization method for visual requirements Active CN111177591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911254814.7A CN111177591B (en) 2019-12-10 2019-12-10 Knowledge graph-based Web data optimization method for visual requirements


Publications (2)

Publication Number Publication Date
CN111177591A CN111177591A (en) 2020-05-19
CN111177591B true CN111177591B (en) 2023-09-29

Family

ID=70655440


Country Status (1)

Country Link
CN (1) CN111177591B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985236A (en) * 2020-06-02 2020-11-24 中国航天科工集团第二研究院 Visual analysis method based on multi-dimensional linkage
CN111680516A (en) * 2020-06-04 2020-09-18 宁波浙大联科科技有限公司 PDM system product design requirement information semantic analysis and extraction method and system
CN112364173B (en) * 2020-10-21 2022-03-18 中国电子科技网络信息安全有限公司 IP address mechanism tracing method based on knowledge graph
CN112016276B (en) * 2020-10-29 2021-02-26 广州欧赛斯信息科技有限公司 Graphical user-defined form data acquisition system
CN112507036A (en) * 2020-11-30 2021-03-16 武汉烽火众智数字技术有限责任公司 Knowledge graph visualization analysis method
CN112541072B (en) * 2020-12-08 2022-12-02 成都航天科工大数据研究院有限公司 Supply and demand information recommendation method and system based on knowledge graph
CN112596031A (en) * 2020-12-22 2021-04-02 电子科技大学 Target radar threat degree assessment method based on knowledge graph
CN113342913A (en) * 2021-06-02 2021-09-03 合肥泰瑞数创科技有限公司 Community information model-based epidemic prevention control method, system and storage medium
CN113609309B (en) * 2021-08-16 2024-02-06 脸萌有限公司 Knowledge graph construction method and device, storage medium and electronic equipment
CN115048096B (en) * 2022-08-15 2022-11-04 广东工业大学 Dynamic visualization method and system for data structure

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107797991A (en) * 2017-10-23 2018-03-13 南京云问网络技术有限公司 A kind of knowledge mapping extending method and system based on interdependent syntax tree
CN108345647A (en) * 2018-01-18 2018-07-31 北京邮电大学 Domain knowledge map construction system and method based on Web

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US9684678B2 (en) * 2007-07-26 2017-06-20 Hamid Hatami-Hanza Methods and system for investigation of compositions of ontological subjects
US10380144B2 (en) * 2015-06-16 2019-08-13 Business Objects Software, Ltd. Business intelligence (BI) query and answering using full text search and keyword semantics
US20180232443A1 (en) * 2017-02-16 2018-08-16 Globality, Inc. Intelligent matching system with ontology-aided relation extraction



Similar Documents

Publication Publication Date Title
CN111143479B (en) Knowledge graph relation extraction and REST service visualization fusion method based on DBSCAN clustering algorithm
CN111177591B (en) Knowledge graph-based Web data optimization method for visual requirements
CN111190900B (en) JSON data visualization optimization method in cloud computing mode
CN109492077B (en) Knowledge graph-based petrochemical field question-answering method and system
CN108763333B (en) Social media-based event map construction method
CN110633409B (en) Automobile news event extraction method integrating rules and deep learning
CN109800284B (en) Task-oriented unstructured information intelligent question-answering system construction method
CN106446148B (en) A kind of text duplicate checking method based on cluster
CN107180045B (en) Method for extracting geographic entity relation contained in internet text
CN102955848B (en) A kind of three-dimensional model searching system based on semanteme and method
US20150081277A1 (en) System and Method for Automatically Classifying Text using Discourse Analysis
CN106776562A (en) A keyword extraction method and extraction system
CN111309925A (en) Knowledge graph construction method of military equipment
CN109145260A (en) A text information extraction method
CN109960756A (en) Method for summarizing media event information
CN109299221A (en) Entity extraction and sorting method and device
CN112989208B (en) Information recommendation method and device, electronic equipment and storage medium
CN111143574A (en) Query and visualization system construction method based on minority culture knowledge graph
CN110888991A (en) Sectional semantic annotation method in weak annotation environment
CN112036178A (en) Distribution network entity related semantic search method
CN101673306A (en) Website information query method and system thereof
CN114090861A (en) Education field search engine construction method based on knowledge graph
CN114997288A (en) Design resource association method
CN113673252A (en) Automatic join recommendation method for data table based on field semantics
CN111104437A (en) Test data unified retrieval method and system based on object model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230831

Address after: Room 202, Building B, Tian'an Digital Entrepreneurship Park, No. 441 Huangge Road, Huanggekeng Community, Longcheng Street, Longgang District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen Shukangyun Information Technology Co.,Ltd.

Address before: No. 9 Santong Road, Houzhou Street, Taijiang District, Fuzhou City, Fujian Province, 350000. Zhongting Street Renovation. 143 Shopping Mall, 3rd Floor, Jiahuiyuan Link Section

Applicant before: Fuzhou Zhiqing Intellectual Property Service Co.,Ltd.

Effective date of registration: 20230831

Address after: No. 9 Santong Road, Houzhou Street, Taijiang District, Fuzhou City, Fujian Province, 350000. Zhongting Street Renovation. 143 Shopping Mall, 3rd Floor, Jiahuiyuan Link Section

Applicant after: Fuzhou Zhiqing Intellectual Property Service Co.,Ltd.

Address before: No. 18 Chaowang Road, Zhaohui District 6, Hangzhou City, Zhejiang Province, 310014

Applicant before: ZHEJIANG UNIVERSITY OF TECHNOLOGY

GR01 Patent grant