CN115714001A - Method for constructing multi-modal knowledge graph service platform for healthy diet - Google Patents

Method for constructing multi-modal knowledge graph service platform for healthy diet

Info

Publication number
CN115714001A
Authority
CN
China
Prior art keywords: entity, diet, entities, knowledge, healthy diet
Prior art date
Legal status: Pending (status is an assumption, not a legal conclusion)
Application number
CN202211426196.1A
Other languages
Chinese (zh)
Inventor
牛广林
李波
黄龚
Current Assignee: Beihang University
Original Assignee: Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202211426196.1A
Publication of CN115714001A

Abstract

The invention discloses a method for constructing a multi-modal knowledge graph service platform for healthy diet. The method establishes a healthy diet knowledge graph ontology model according to a knowledge system of the diet field; acquires recipe, food nutrient element and dietary therapy related data, provides an ontology-enhanced diet knowledge extraction model to automatically extract entities, relations, attributes, attribute values and images, and establishes a diet knowledge graph in a standard knowledge graph representation; provides a multi-modal entity alignment technique that fuses images and dietary common knowledge to eliminate redundant entities in the diet knowledge graph and obtain a multi-modal knowledge graph for healthy diet; and then stores the multi-modal knowledge graph for healthy diet and realizes its visualization. The method and system make it convenient for users to fully understand healthy-diet knowledge, guide them in efficiently cooking and pairing a healthy diet, and help support downstream knowledge graph tasks such as visualization, search, question answering and recommendation.

Description

Method for constructing multi-modal knowledge graph service platform for healthy diet
Technical Field
The invention relates to the technical field of data mining and knowledge graphs, and in particular to a method for constructing a multi-modal knowledge graph service platform for healthy diet.
Background
At present, diet is one of the topics people care about most in daily life. With the development and popularization of the Internet, a number of recipe websites and applications can already provide recipe information; however, these websites and applications only contain fixed fields for each recipe and only offer keyword-based search. Because no deep associations among recipes, food materials and recipe attributes are established, machines can hardly understand or process the semantic knowledge related to recipes and cannot provide a good human-computer interaction function, so people cannot acquire and use recipe knowledge efficiently.
A knowledge graph, in the field of artificial intelligence, can represent data on the Internet and knowledge accumulated by humans as a directed graph composed of entities and relations, thereby modeling the associations among things in the world, making it convenient for machines to store, process and use knowledge, and allowing the knowledge graph to provide efficient retrieval and reasoning services. However, there is currently no systematic ontology model for recipes, which makes it difficult to construct a comprehensive, high-quality multi-modal knowledge graph for healthy diet. In addition, the sources of recipe data related to the diet field are diverse, including semi-structured data as well as unstructured data in the form of text and images; because the entity types, relations and attributes contained in diet-field data differ greatly from those of other domains, general-purpose knowledge extraction methods and extraction methods designed for non-diet domains cannot be applied directly, so a dedicated knowledge extraction method must be designed to extract entity, relation and attribute information from diet-related data. Meanwhile, because synonymy is ubiquitous among entities in the diet field, different surface forms of the same entity extracted from different sources cause redundancy in the knowledge graph and affect its storage and application, so aligning entities in the diet field to improve the quality of the knowledge graph is very important. Furthermore, to help people acquire healthy diet knowledge intuitively from the multi-modal knowledge graph for healthy diet, it is necessary to visualize it.
To address these problems, methods related to knowledge graphs in the recipe field have appeared at home and abroad. Patent 202011489915.5 designs a combined-menu generation method based on knowledge graphs, mainly using a state knowledge graph to combine dishes when several people dine together; patents 202110105393.2 and 202110977544.3 design recipe recommendation methods that rely on users' historical recipe-collection behavior and user motion information respectively, realizing personalized recipe recommendation. However, current methods that apply knowledge graphs in the recipe field all assume a pre-existing knowledge graph; no dedicated research has been carried out, for the knowledge system of the diet field, on how to construct an ontology model of the healthy diet knowledge graph and a knowledge extraction technique for multi-source recipe data, so a relatively complete and practical multi-modal knowledge graph for healthy diet cannot be constructed. Meanwhile, existing methods neglect the problem of entity redundancy in the diet field and cannot guarantee the quality of the knowledge graph. In addition, current methods lack visualization of the multi-modal knowledge graph for healthy diet, cannot display its associated information intuitively through a good human-computer interaction mode, and can therefore hardly mine and use the various kinds of recipe-related knowledge fully.
Therefore, how to construct a multi-modal knowledge graph for healthy diet is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a method for constructing a multi-modal knowledge graph service platform for healthy diet. The method establishes an ontology model of the healthy diet knowledge graph by organizing the knowledge system of the diet field, designs an ontology-enhanced diet knowledge extraction model for diet-related data, and integrates a multi-modal entity alignment technique that fuses images and dietary common knowledge to construct a high-quality multi-modal knowledge graph for healthy diet. It further provides a knowledge graph visualization service that intuitively displays the associations among different entities of the diet field and the attribute information of each entity, so that people can use diet knowledge more efficiently and fully.
To achieve the above purpose, the invention adopts the following technical scheme:
A method for constructing a multi-modal knowledge graph service platform for healthy diet comprises the following specific steps:
Step 1: establishing a healthy diet knowledge graph ontology model according to a knowledge system of the diet field;
Step 2: acquiring related data covering recipes, food nutrient elements and dietary therapy by using the healthy diet knowledge graph ontology model;
Step 3: constructing a diet knowledge extraction model based on the healthy diet knowledge graph ontology model, automatically extracting entities, relations, attributes, attribute values and images from the recipe, food nutrient element and dietary therapy related data, and establishing a multi-modal healthy diet knowledge graph from this information in a knowledge graph representation;
Step 4: eliminating redundant entities of the diet knowledge graph by a multi-modal entity alignment method that fuses images and dietary common knowledge, and obtaining a high-quality multi-modal knowledge graph for healthy diet;
Step 5: storing the multi-modal knowledge graph for healthy diet into the constructed service platform and performing visual display.
Preferably, the specific steps of step 1 are: according to people's needs for a healthy diet in daily life, a healthy diet knowledge system is established by analyzing common knowledge of the diet field and the data characteristics of recipe websites; the hierarchical concepts related to recipes, food nutrition and dietary therapy, the relations and attributes among the concepts and their corresponding value ranges are set according to the characteristics of diet knowledge, and the healthy diet knowledge graph ontology model is established. Hierarchical concepts are concepts of the diet field that have superordinate-subordinate associations; relations are semantic associations among concepts; attributes and their value ranges mean that the attribute value corresponding to certain attributes should be restricted to a certain range.
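A minimal sketch of how such an ontology model might be encoded is given below; the concept, relation and attribute names are examples taken from the description, while the dictionary layout itself is an assumption for illustration rather than the patented schema.

```python
# Illustrative encoding of the healthy diet knowledge graph ontology model.
# Concept, relation and attribute names follow examples in the description;
# the dict layout is an assumption for demonstration only.
DIET_ONTOLOGY = {
    "concepts": {                          # hierarchical concepts: child -> parent
        "dish type": "food",
        "cuisine": "food",
        "food material": "food material category",
        "food material category": "food",
        "taste": "dietary characteristics",
        "efficacy": "dietary characteristics",
        "cooking manner": "cooking characteristics",
        "time": "cooking characteristics",
    },
    "relations": ["belongs to", "selects material", "suitable for", "efficacy"],
    "attributes": {                        # attribute -> value range (concept the value must belong to)
        "taste is": "taste",
        "cooking time is": "time",
        "image is": "food image",
    },
}

def value_range_ok(attribute: str, value_concept: str) -> bool:
    """Check that an attribute value falls inside the value range defined by the ontology."""
    return DIET_ONTOLOGY["attributes"].get(attribute) == value_concept
```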
Preferably, the ways of obtaining the related data in step 2 include, but are not limited to: writing a crawler script in combination with the healthy diet knowledge graph ontology model, and automatically crawling recipe, food nutrient element and dietary therapy related web page data from websites with the crawler script, where the web page data comprise semi-structured data in the form of info boxes and unstructured data in the form of text and images; at the same time, the subject word of the web page introducing a dish or food material is used to label the corresponding images with the dish or food material name.
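As a rough illustration of this acquisition step, the sketch below fetches diet-related pages and keeps each page's subject word (its title) for labelling images; the user-agent string, request delay and URL handling are assumptions, not values given in the patent.

```python
import time
import requests
from lxml import html

HEADERS = {"User-Agent": "diet-kg-crawler/0.1"}   # assumed; any polite UA string works

def crawl_pages(urls, delay=1.0):
    """Fetch recipe / nutrient / dietary-therapy pages and record the page subject word (<title>)."""
    pages = {}
    for url in urls:
        resp = requests.get(url, headers=HEADERS, timeout=10)
        if resp.ok:
            tree = html.fromstring(resp.text)
            title = (tree.xpath("//title/text()") or [""])[0].strip()
            pages[url] = {"html": resp.text, "subject_word": title}
        time.sleep(delay)                          # throttle requests to the target site
    return pages
```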
Preferably, the specific steps of step 3 are: an ontology-enhanced diet knowledge extraction model is constructed for the semi-structured data and the unstructured data; it comprises a wrapper and an entity-relation joint extraction model, where the wrapper automatically extracts entities, attributes, attribute values and the relations among entities from the semi-structured data, and the entity-relation joint extraction model automatically extracts entities, attributes, attribute values and the relations among entities from the unstructured data.
Preferably, the wrapper is constructed with XPath expressions according to the nodes that correspond, in the HTML web page, to data belonging to the concepts contained in the healthy diet knowledge graph ontology model.
Because the page structures of recipe, food nutrient element and dietary therapy related web pages are relatively fixed and clearly regular, the wrapper is designed on the basis of the established healthy diet knowledge graph ontology model: by analyzing the nodes in the HTML page that correspond to data belonging to the concepts contained in the ontology model, XPath expressions are designed to build a wrapper that automatically extracts entities, attributes, attribute values and the relations among entities from the semi-structured data.
Preferably, the constructed entity-relation joint extraction model comprises an entity recognition submodel and a relation extraction submodel, and can automatically extract, from the unstructured text data, entities and the relations among them, as well as entities, their attributes and the corresponding attribute values. Attributes include, but are not limited to, images of dishes or food materials, main ingredients, auxiliary ingredients, taste, cooking time, cooking difficulty and cooking steps; relations include, but are not limited to, belongs to, suitable for, selects material, cuisine, dish type and efficacy. The information associated with each entity is organized as (entity, relation, entity) or (entity, attribute, attribute value) triples, and the triples are then converted into the diet knowledge graph according to a knowledge graph representation, where the representation of the diet knowledge graph includes but is not limited to RDF, RDFS, OWL, N-Triples and XML.
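The triple organization described above can be illustrated with rdflib, which supports the listed serializations such as N-Triples and RDF/XML; the namespace and entity names below are invented for the example.

```python
from rdflib import Graph, Literal, Namespace

DIET = Namespace("http://example.org/diet/")   # illustrative namespace, not from the patent

g = Graph()
# (entity, relation, entity) triple: a dish selects a food material
g.add((DIET.MapoTofu, DIET.selectsMaterial, DIET.Tofu))
# (entity, attribute, attribute value) triple: a dish has a taste attribute
g.add((DIET.MapoTofu, DIET.tasteIs, Literal("spicy")))

print(g.serialize(format="nt"))   # N-Triples; format="xml" would give RDF/XML, etc.
```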
Preferably, the construction process of the entity-relation joint extraction model comprises the following steps:
Step 321: the entity recognition submodel adopts a BERT model to extract the context feature of each character in the text, and uses a conditional random field (CRF) to compute the probability of every named entity tag sequence, expressed as:
P(Y|X) = exp(Σ_i (W_{y_i, y_{i+1}} h_i + b_{y_i, y_{i+1}})) / Σ_{Y'} exp(Σ_i (W_{y'_i, y'_{i+1}} h_i + b_{y'_i, y'_{i+1}}))
where X and Y denote the text input sequence and the named entity tag sequence respectively; the text input sequence begins with the symbol "[CLS]", and the named entity tag sequence consists of BIO labels, BIO labelling meaning that each character carries one of an entity-start tag B-type, an entity-interior tag I-type or a non-entity tag O, with type denoting the entity type; y_i and y_{i+1} denote the ith and (i+1)th tags in the named entity tag sequence Y; h_i denotes the feature representation of the ith character obtained by the BERT model; Y' denotes any named entity tag sequence; y'_i and y'_{i+1} denote the ith and (i+1)th tags in Y'; W and b denote the parameters of the conditional random field CRF; P(Y|X) denotes the probability that the output is the named entity tag sequence Y given the text input sequence X.
To obtain the optimal named entity tag sequence, the following objective function is solved:
L_ner = argmax P(Y|X);
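A compact sketch of this entity recognition submodel is given below; it uses the Hugging Face BERT implementation and the CRF layer from the pytorch-crf package to play the role of the conditional random field above. The checkpoint name and tag count are assumptions for illustration.

```python
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF   # pip install pytorch-crf

class DietEntityRecognizer(nn.Module):
    """BERT character features + CRF over BIO tags, as in step 321."""
    def __init__(self, num_tags, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.emissions = nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        e = self.emissions(h)                      # per-character tag scores
        mask = attention_mask.bool()
        if tags is not None:
            # CRF log-likelihood log P(Y|X); negating it gives the L_ner training loss
            return -self.crf(e, tags, mask=mask, reduction="mean")
        return self.crf.decode(e, mask=mask)       # argmax_Y P(Y|X): best BIO tag sequence
```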
Step 322: a relation extraction submodel is constructed to use the entity-type part of the named entity tag sequence. The relations and attributes of the entity types in the healthy diet knowledge graph ontology model are taken as an added tag sequence Y_o; the named entity tag sequence obtained by the entity recognition submodel and the added tag sequence are fed into an embedding layer to obtain a named entity tag sequence vector representation s_Y and an added tag sequence vector representation s_{Y_o}. The two representations s_Y and s_{Y_o} are then jointly input into a relation classifier to obtain the relation extraction result, where the relation classifier is expressed as:
h = ReLU(W_1 [s_Y; s_{Y_o}]^T + b_1)
r_i = exp([W_2 h^T]_i) / Σ_j exp([W_2 h^T]_j)
where W_1 and W_2 denote learnable weight matrices; b_1 is a bias vector; [s_Y; s_{Y_o}]^T denotes the transpose of the concatenation of the vectors s_Y and s_{Y_o}; h is a hidden vector; r_i denotes the probability of the ith relation; [W_2 h^T]_i and [W_2 h^T]_j denote the ith and jth dimension values of W_2 h^T; ReLU denotes the rectified linear unit function, specifically:
ReLU(x) = max(0, x)
where x denotes a variable.
To optimize the relation classifier, the objective function is:
L_re = - Σ_{i=1}^{n} Q(r_i) log(r_i)
where n denotes the number of all relations; Q(r_i) denotes the label of the ith relation r_i: if the ith relation r_i is the correct relation for the currently input text, Q(r_i) = 1, otherwise Q(r_i) = 0.
Step 323: the objective functions of the entity recognition submodel and the relation extraction submodel are combined into an overall objective function, which is used to jointly learn the ontology-enhanced entity-relation joint extraction model; the overall objective function is expressed as:
L = L_ner + α_1 L_re
where α_1 is a weight factor.
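The relation extraction submodel and the joint objective can be sketched as follows. How the embedding layer turns a tag sequence into the single vectors s_Y and s_{Y_o} is not fixed by the description, so the mean pooling below is an assumption, as are the dimensions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationClassifier(nn.Module):
    """h = ReLU(W1 [s_Y; s_Yo] + b1), r = softmax(W2 h), as in step 322."""
    def __init__(self, num_tags, num_relations, tag_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_tags, tag_dim)                  # embedding layer for both tag sequences
        self.w1 = nn.Linear(2 * tag_dim, hidden_dim)                  # W_1 and bias b_1
        self.w2 = nn.Linear(hidden_dim, num_relations, bias=False)    # W_2

    def forward(self, ner_tags, onto_tags):
        s_y = self.embed(ner_tags).mean(dim=1)                        # s_Y  (mean pooling is an assumption)
        s_yo = self.embed(onto_tags).mean(dim=1)                      # s_Yo
        h = F.relu(self.w1(torch.cat([s_y, s_yo], dim=-1)))
        return F.softmax(self.w2(h), dim=-1)                          # r_i: probability of the ith relation

def joint_loss(ner_loss, rel_probs, gold_relation, alpha_1=1.0):
    """Overall objective L = L_ner + alpha_1 * L_re (L_re is the cross-entropy of step 322)."""
    l_re = F.nll_loss(torch.log(rel_probs + 1e-9), gold_relation)
    return ner_loss + alpha_1 * l_re
```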
Preferably, the specific process of step 4 is as follows:
Step 41: a multi-modal entity alignment method that fuses images and dietary common knowledge is designed for the diet knowledge graph of step 3. For the food-material entities in the diet knowledge graph, a diet-field homonym mapping table is built from dietary common knowledge, and all entities matched by the homonym mapping table are converted to a uniform expression.
A Transformer-based ViT model is used to extract a feature vector from the image of each food-material entity, the visual similarity between the feature vectors of every two food-material entity images is computed, and at the same time the string similarity between every two entities is computed, giving an entity similarity that fuses vision and text, with the corresponding formula:
S_mat = Sim_1(ViT(m_1), ViT(m_2)) + α_2 Sim_2(C(m_1), C(m_2));
where m_1 and m_2 denote two food-material entities; ViT(m_1) and ViT(m_2) denote the feature vectors extracted from entities m_1 and m_2 with the ViT model; C(m_1) and C(m_2) are the character strings of entities m_1 and m_2; Sim_1 is the visual similarity function, including but not limited to cosine similarity, Euclidean-distance similarity, inner product, etc.; Sim_2 is the string similarity function, including but not limited to cosine similarity, Euclidean-distance similarity, edit distance, etc.; α_2 is a weight factor.
A food-material entity similarity threshold is set: if the fused visual-textual similarity of two food-material entities is greater than or equal to the threshold, the two entities are aligned and the redundant food-material entity is eliminated; if it is smaller than the threshold, the two food-material entities are regarded as different entities.
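The fused food-material similarity S_mat can be sketched with a pretrained ViT from Hugging Face and a standard-library string matcher. The checkpoint name, the use of the [CLS] token as the image feature, and the cosine and ratio-based similarity choices are assumptions consistent with the options listed above; the 0.9 threshold in the last line matches the preferred value given later in the description.

```python
import torch
from difflib import SequenceMatcher
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

def vit_feature(image_path):
    """ViT(m): feature vector of a food-material entity image ([CLS] token)."""
    inputs = processor(images=Image.open(image_path).convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        return vit(**inputs).last_hidden_state[:, 0].squeeze(0)

def s_mat(name_1, image_1, name_2, image_2, alpha_2=1.5):
    """S_mat = Sim_1(ViT(m1), ViT(m2)) + alpha_2 * Sim_2(C(m1), C(m2))."""
    sim_visual = torch.cosine_similarity(vit_feature(image_1), vit_feature(image_2), dim=0).item()
    sim_string = SequenceMatcher(None, name_1, name_2).ratio()
    return sim_visual + alpha_2 * sim_string

# Entities whose fused similarity reaches the threshold are aligned (file paths are illustrative).
aligned = s_mat("scallion", "img/scallion_a.jpg", "green onion", "img/scallion_b.jpg") >= 0.9
```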
Step 43: for the dish-type entities in the diet knowledge graph, a Transformer-based ViT model is used to extract a feature vector from the image of each dish-type entity, and the visual similarity between the feature vectors of every two dish-type entity images is computed; at the same time, a knowledge graph embedding method is used to convert every entity and relation in the diet knowledge graph into a vector representation; knowledge graph embedding techniques include, but are not limited to, the TransE model, the TransH model and the RotatE model.
A graph attention neural network is used to encode the neighborhood information of each dish-type entity, giving a neighborhood encoding representation of each dish-type entity; the similarity between the neighborhood encodings of every two dish-type entities can then be computed, giving an entity similarity that fuses vision and neighborhood information, with the formula:
S_rec = Sim_1(ViT(rec_1), ViT(rec_2)) + α_3 Sim_1(N(rec_1), N(rec_2));
where rec_1 and rec_2 denote two dish-type entities; ViT(rec_1) and ViT(rec_2) denote the feature vectors extracted from entities rec_1 and rec_2 with the ViT model; α_3 is a weight factor; N(rec_1) and N(rec_2) are the neighborhood encoding representations of entities rec_1 and rec_2 obtained with the graph attention neural network:
n_i = ReLU(U [e_{n_i}; r_{n_i}]^T)
a_i = exp(n_i) / Σ_j exp(n_j)
N(e) = ReLU(W_3 Σ_i a_i [e_{n_i}; r_{n_i}]^T + b_2)
where n_i denotes the hidden representation of the ith neighbor in the neighborhood; U denotes a learnable parameter vector; [e_{n_i}; r_{n_i}]^T denotes the transpose of the concatenation of the entity vector representation e_{n_i} and the relation vector representation r_{n_i} of the ith neighbor; a_i denotes the attention weight of the ith neighbor; N(e) is the neighborhood encoding representation of entity e; W_3 and b_2 denote a learnable parameter matrix and parameter vector.
A dish-type entity similarity threshold is set: if the fused similarity of vision and neighborhood information of two dish-type entities is greater than or equal to the threshold, the two entities are aligned and the redundant dish-type entity is eliminated; if it is smaller than the threshold, the two dish-type entities are regarded as different entities.
step 44: and obtaining a multi-modal knowledge map facing the healthy diet.
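A sketch of the neighborhood encoding and the fused dish similarity S_rec of step 43 follows; the entity and relation vectors are assumed to come from a knowledge graph embedding model such as TransE, and the attention and aggregation steps follow the reconstruction of the formulas above, so this is one plausible reading rather than a definitive implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodEncoder(nn.Module):
    """Graph-attention encoding N(e) of a dish entity's neighborhood."""
    def __init__(self, emb_dim, out_dim):
        super().__init__()
        self.u = nn.Linear(2 * emb_dim, 1, bias=False)   # parameter vector U
        self.w3 = nn.Linear(2 * emb_dim, out_dim)         # W_3 and bias b_2

    def forward(self, neighbor_entities, neighbor_relations):
        # neighbor_entities, neighbor_relations: (num_neighbors, emb_dim), e.g. from TransE
        pairs = torch.cat([neighbor_entities, neighbor_relations], dim=-1)   # [e_ni ; r_ni]
        n = F.relu(self.u(pairs)).squeeze(-1)            # n_i = ReLU(U [e_ni ; r_ni]^T)
        a = F.softmax(n, dim=0)                          # a_i: attention weight of the ith neighbor
        return F.relu(self.w3((a.unsqueeze(-1) * pairs).sum(dim=0)))   # N(e)

def s_rec(vit_1, vit_2, n_1, n_2, alpha_3=1.5):
    """S_rec = Sim_1(ViT(rec1), ViT(rec2)) + alpha_3 * Sim_1(N(rec1), N(rec2))."""
    return (torch.cosine_similarity(vit_1, vit_2, dim=0)
            + alpha_3 * torch.cosine_similarity(n_1, n_2, dim=0)).item()
```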
Preferably, the ways of storing the multi-modal knowledge graph for healthy diet include, but are not limited to: storing it with the graph database Neo4j and the triple store Jena. When Neo4j is used, the attributes and relations of each dish or food-material entity need to be distinguished, and the attributes are used to add tags to the entities; when Jena is used, attributes and relations are not distinguished, and the (entity, relation, entity) and (entity, attribute, attribute value) triples are stored directly. Meanwhile, because the main-ingredient, auxiliary-ingredient and seasoning attributes already record the food materials of a dish, the corresponding food-material relations are removed when the knowledge graph is stored with Jena.
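Storage in Neo4j can be illustrated with the official Python driver (the neo4j 5.x driver is assumed): relation triples become edges between labelled nodes, and attribute triples become node properties, as described above. Connection settings, labels and property names are placeholders.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder credentials

def store_relation(tx, head, relation, tail, head_label="Dish", tail_label="FoodMaterial"):
    # relation triples (entity, relation, entity) become edges between labelled nodes
    tx.run(f"MERGE (h:{head_label} {{name: $h}}) "
           f"MERGE (t:{tail_label} {{name: $t}}) "
           f"MERGE (h)-[:`{relation}`]->(t)", h=head, t=tail)

def store_attribute(tx, entity, attribute, value, label="Dish"):
    # attribute triples (entity, attribute, value) are kept as properties on the entity node
    tx.run(f"MERGE (e:{label} {{name: $e}}) SET e.`{attribute}` = $v", e=entity, v=value)

with driver.session() as session:
    session.execute_write(store_relation, "Mapo tofu", "SELECTS_MATERIAL", "tofu")
    session.execute_write(store_attribute, "Mapo tofu", "taste", "spicy")
driver.close()
```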
Preferably, the functions of visualizing the multi-modal knowledge graph for healthy diet include, but are not limited to: displaying the complete multi-modal knowledge graph for healthy diet with a directed-graph data structure; displaying entities belonging to different concepts in different colors; selecting one or more concepts and visually displaying the subgraphs associated with the entities of the selected concepts; selecting an entity and visually displaying the subgraph associated with it; and entering an entity name in a search box, presenting all entities similar to that name and visually displaying the subgraphs associated with them.
According to the above technical scheme, compared with the prior art, the invention discloses a method for constructing a multi-modal knowledge graph service platform for healthy diet, which has the following advantages:
(1) A healthy diet knowledge graph ontology model is designed according to the knowledge system of the diet field; it contains the hierarchical concepts related to recipes, food nutrition and dietary therapy, the relations among the concepts, and the attributes with their corresponding value ranges, and it guides the crawling of diet-related data as well as the construction and visualization of the knowledge graph.
(2) Through the designed ontology-enhanced knowledge extraction model, entities, relations, attributes and attribute values can be extracted automatically and accurately from diet-related semi-structured and unstructured data to build a multi-modal knowledge graph; at the same time, a multi-modal entity alignment technique that fuses images and dietary common knowledge is provided for the diet field, solving the entity redundancy problem that is common in this field and yielding a high-quality multi-modal knowledge graph for healthy diet.
(3) The method combines the healthy diet knowledge graph ontology model with certain human-computer interaction logic to realize visualization of the multi-modal knowledge graph for healthy diet; through this visualization, users can understand healthy diet knowledge intuitively and efficiently, assisting diet selection and cooking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of the construction of a multi-modal knowledge-graph service platform for healthy diet provided by the invention;
FIG. 2 is a schematic diagram of a health diet knowledge map ontology model provided by the present invention;
FIG. 3 is a diagram of a multi-modal knowledge-map visualization interface for a healthy diet provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a method for constructing a multi-modal knowledge graph service platform for healthy diet. The method can automatically extract recipe knowledge, construct a high-quality multi-modal knowledge graph for healthy diet, provide a visual human-computer interaction function for the knowledge graph and intuitively display the associated information in recipes, helping people understand recipe-related knowledge more efficiently and fully; it has good practicability.
As shown in FIG. 1, the method for constructing the multi-modal knowledge graph service platform for healthy diet comprises the following steps: first, the knowledge system of the diet field is organized and a healthy diet knowledge graph ontology model is established; semi-structured and unstructured data related to recipes, food nutrition and dietary therapy are acquired on the basis of the ontology model. Then, an ontology-enhanced diet knowledge extraction model is provided to automatically extract entities, relations, attributes, attribute values and images from the semi-structured and unstructured data, and a health-oriented diet knowledge graph is established from this information in a knowledge graph representation. Next, a multi-modal entity alignment technique that fuses images and dietary common knowledge is adopted to eliminate redundant entities and realize knowledge fusion, improving the quality of the diet knowledge graph and obtaining the multi-modal knowledge graph for healthy diet, which is then stored. Finally, visualization of the multi-modal knowledge graph for healthy diet is realized and a certain human-computer interaction function is provided. The specific implementation steps are as follows:
A method for constructing a multi-modal knowledge graph service platform for healthy diet comprises the following specific steps:
S1: organizing the knowledge system of the diet field and establishing a healthy diet knowledge graph ontology model;
S2: acquiring recipe, food nutrient element and dietary therapy related data by using the healthy diet knowledge graph ontology model established in S1;
S3: based on the healthy diet knowledge graph ontology model established in S1, providing an ontology-enhanced diet knowledge extraction model, automatically extracting entities, relations, attributes, attribute values and images from the recipe and food nutrition related data, and establishing a health-oriented diet knowledge graph from this information in a knowledge graph representation;
S4: for the diet knowledge graph constructed in S3, adopting a multi-modal entity alignment technique that fuses images and dietary common knowledge, eliminating redundant entities to realize knowledge fusion, and obtaining the multi-modal knowledge graph for healthy diet;
S5: storing the multi-modal knowledge graph for healthy diet, constructing the service platform on the stored knowledge graph, realizing visualization of the multi-modal knowledge graph for healthy diet and providing a certain human-computer interaction function.
Further, the specific steps of S1 are: according to people's needs for a rational diet in daily life, the common knowledge of the diet field and the data in recipe websites are combined to organize the recipe knowledge system; the hierarchical concepts related to recipes, the relations between concepts, the attributes and their value ranges are defined, and the multi-modal knowledge graph ontology for healthy diet is established. As shown in FIG. 2, hierarchical concepts are concepts of the diet field that have superordinate-subordinate associations, including but not limited to "dish type"-"food", "cuisine"-"Chinese food"-"food", "food material"-"food material category"-"food", "food image"-"food", "eating manner"-"dietary characteristics", "efficacy"-"dietary characteristics", "taste"-"dietary characteristics", "suitable crowd"-"dietary characteristics", "cooking manner"-"cooking characteristics" and "time"-"cooking characteristics". The relations between concepts include but are not limited to "belongs to", "selects material" and "has characteristic"; the attributes include but are not limited to "taste is", "cooking time is" and "image is". The value range means that the attribute value of certain attributes is restricted to a certain range; for example, the value range of "taste is" is restricted to entities belonging to the concept "taste", and the value range of "cooking time is" is restricted to entities belonging to the concept "time".
Further, the specific steps of S2 are: a crawler script is written on the basis of the healthy diet knowledge graph ontology model established in S1, and recipe, food nutrient element and dietary therapy related web page data are crawled automatically from websites with the crawler script; the web page data comprise semi-structured data in the form of info boxes and unstructured data in the form of text and images, and the subject word of the web page introducing a dish or food material is used to label the corresponding images with the dish or food material name.
Further, the specific steps of S3 are: for the semi-structured data such as HTML documents obtained in S2, a wrapper is designed by combining the healthy diet knowledge graph ontology model designed in S1 with the structural characteristics of the HTML documents of recipe web pages. Because the page structures of recipe, food nutrient element and dietary therapy related web pages are relatively fixed and clearly regular, the nodes in the HTML page that correspond to data belonging to the concepts of the ontology model are analyzed; for example, the first-level title tag <h1 class="recipe_De_title"> in the HTML document corresponds to the dish name, list tags <ul> carry information such as food materials, taste and cooking time, and <div class="recipe_De_imgBox"> carries the dish image. XPath expressions are therefore designed to build the wrapper and automatically extract entity, relation, attribute and image information from the semi-structured data obtained in S2. In particular, part of the automatically extracted entities are dishes and food materials, which can be stored as a dish dictionary and a food-material dictionary; meanwhile, the automatically extracted relations and attributes are used to build a relation dictionary and an attribute dictionary.
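A sketch of such a wrapper built with lxml is shown below; the h1 and div class names follow the tags quoted in this paragraph, while the list-item path is an assumed example because the exact markup of the target recipe site is not reproduced here.

```python
from lxml import html

def extract_recipe_fields(page_html: str) -> dict:
    """XPath wrapper over a recipe page: dish name, info-box items and dish image."""
    tree = html.fromstring(page_html)
    dish = tree.xpath('//h1[@class="recipe_De_title"]/text()')
    info_items = tree.xpath('//ul//li/text()')                        # food materials, taste, cooking time ...
    image = tree.xpath('//div[@class="recipe_De_imgBox"]//img/@src')
    return {
        "dish": dish[0].strip() if dish else None,
        "info": [i.strip() for i in info_items if i.strip()],
        "image": image[0] if image else None,
    }
```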
On the other hand, the unstructured text data and image data mainly comprise images of dishes and food materials and texts describing the dishes. All images are automatically scaled to a uniform size by a script, and each image is associated with its dish or food-material label. Then an ontology-enhanced entity-relation joint extraction model is designed for the unstructured text data. To train this model, part of the texts is taken as training data; the dish dictionary and food-material dictionary built in S3 are used to automatically label the named entity tag sequences of the training data, and the relation dictionary and attribute dictionary are used to automatically label the relations and attributes in the training data. The ontology-enhanced entity-relation joint extraction model comprises an entity recognition submodel and a relation extraction submodel. For the entity recognition submodel, a BERT model is used to extract the context feature of each character in the text, and a conditional random field (CRF) is used to obtain the probability of every possible named entity tag sequence, with the formula:
P(Y|X) = exp(Σ_i (W_{y_i, y_{i+1}} h_i + b_{y_i, y_{i+1}})) / Σ_{Y'} exp(Σ_i (W_{y'_i, y'_{i+1}} h_i + b_{y'_i, y'_{i+1}}))
where X and Y denote the text input sequence and the named entity tag sequence respectively; the text input sequence begins with the symbol "[CLS]", and the named entity tag sequence consists of BIO labels, BIO labelling meaning that each character carries one of an entity-start tag B-type, an entity-interior tag I-type or a non-entity tag O, with type denoting the entity type; y_i and y_{i+1} denote the ith and (i+1)th tags in the named entity tag sequence Y; h_i denotes the feature representation of the ith character obtained by the BERT model; Y' denotes any named entity tag sequence; y'_i and y'_{i+1} denote the ith and (i+1)th tags in Y'; W and b denote the parameters of the conditional random field CRF; P(Y|X) denotes the probability that the output is the named entity tag sequence Y given the text input sequence X.
To obtain the optimal named entity tag sequence, the following objective function is solved:
L_ner = argmax P(Y|X);
In the relation extraction submodel, the entity-type part of the named entity tag sequence is used, and the relations and attributes of the entity types in the healthy diet knowledge graph ontology model are taken as an added tag sequence Y_o; the named entity tag sequence obtained by the entity recognition submodel and the added tag sequence are fed into an embedding layer to obtain a named entity tag sequence vector representation s_Y and an added tag sequence vector representation s_{Y_o}. The two representations s_Y and s_{Y_o} are jointly input into a relation classifier to obtain the relation extraction result, where the relation classifier is expressed as:
h = ReLU(W_1 [s_Y; s_{Y_o}]^T + b_1)
r_i = exp([W_2 h^T]_i) / Σ_j exp([W_2 h^T]_j)
where W_1 and W_2 denote learnable weight matrices; b_1 is a bias vector; [s_Y; s_{Y_o}]^T denotes the transpose of the concatenation of the vectors s_Y and s_{Y_o}; h is a hidden vector; r_i denotes the probability of the ith relation; [W_2 h^T]_i and [W_2 h^T]_j denote the ith and jth dimension values of W_2 h^T; ReLU denotes the rectified linear unit function, specifically:
ReLU(x) = max(0, x)
where x denotes a variable.
To optimize the relation classifier, the objective function is:
L_re = - Σ_{i=1}^{n} Q(r_i) log(r_i)
where n denotes the number of all relations; Q(r_i) denotes the label of the ith relation r_i: if the ith relation r_i is the correct relation for the currently input text, Q(r_i) = 1, otherwise Q(r_i) = 0.
The objective functions of the entity recognition submodel and the relation extraction submodel are combined, and the resulting overall objective function is used to jointly learn the ontology-enhanced entity-relation joint extraction model; the overall objective function is expressed as:
L = L_ner + α_1 L_re
where α_1 is a weight factor.
Through the ontology-enhanced entity-relation joint extraction model, entities, relations, attributes and attribute values can be automatically extracted from the unstructured text data. The attributes include, but are not limited to, "dish image is", "food material image is", "main ingredient is", "auxiliary ingredient is", "seasoning is", "taste is", "cooking time is", "cooking difficulty is", "cooking step is" and "nutrient element is"; the relations include, but are not limited to, "belongs to", "suitable crowd is", "selects material", "cuisine is", "dish type is" and "efficacy is". In particular, the attribute "nutrient element is" and the related information are extracted from the nutrient element data, and the relation "efficacy is" and the related entities are extracted from the dietary therapy data. The information associated with each entity is organized as (entity, relation, entity) or (entity, attribute, attribute value) triples, and the triples are then converted into the diet knowledge graph according to a knowledge graph representation; here "multi-modal" means that attribute values may be data in multiple modalities such as phrases, images and long texts, and the knowledge graph representation includes but is not limited to RDF, RDFS, OWL, N-Triples and XML.
Further, the specific steps of S4 are: for the diet knowledge graph constructed in S3, a multi-modal entity alignment method that fuses images and dietary common knowledge is designed to obtain the multi-modal knowledge graph for healthy diet. For food-material entities, a diet-field homonym mapping table is built from dietary common knowledge; for example, the different common names of the potato are all mapped to "potato", and mashed garlic, minced garlic and garlic slices are all mapped to "garlic"; all entities matched by the homonym mapping table are converted to a uniform expression (a sketch of such a table follows this subsection).
Further, a Transformer-based ViT model is used to extract a feature vector from the image of each food-material entity, the visual similarity between the feature vectors of every two food-material entities is computed, and at the same time the string similarity between every two entities is computed, giving an entity similarity that fuses vision and text, with the corresponding formula:
S_mat = Sim_1(ViT(m_1), ViT(m_2)) + α_2 Sim_2(C(m_1), C(m_2));
where m_1 and m_2 denote two food-material entities; ViT(m_1) and ViT(m_2) denote the feature vectors extracted from entities m_1 and m_2 with the ViT model; C(m_1) and C(m_2) are the character strings of entities m_1 and m_2; Sim_1 is the visual similarity function, including but not limited to cosine similarity, Euclidean-distance similarity, inner product, etc.; Sim_2 is the string similarity function, including but not limited to cosine similarity, Euclidean-distance similarity, edit distance, etc.; α_2 is a weight factor, and a preferred value of α_2 is 1.5. Further, a food-material entity similarity threshold is set, with a preferred value of 0.9: if the fused visual-textual similarity of two food-material entities is greater than or equal to the threshold, the two entities are aligned; if it is smaller than the threshold, the two food-material entities are regarded as different entities.
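The homonym mapping table mentioned at the start of S4 amounts to a simple normalization dictionary; the sketch below uses the garlic example from this subsection, and the remaining entries are illustrative.

```python
# Diet-field homonym mapping table: every known surface form maps to one canonical entity.
HOMONYM_MAP = {
    "mashed garlic": "garlic",
    "minced garlic": "garlic",
    "garlic slices": "garlic",
    "spud": "potato",            # illustrative extra alias
}

def normalize(entity_name: str) -> str:
    """Convert a matched food-material entity to its uniform expression."""
    return HOMONYM_MAP.get(entity_name, entity_name)
```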
the method comprises the steps of for entities facing to dish types, extracting feature vectors from images of each entity of the dish types by using a ViT model based on a Transformer, calculating the visual similarity of the feature vectors of every two entity of the dish types, and converting each entity and relation in a diet knowledge map into vector representation by using a knowledge map embedding technology, wherein the knowledge map embedding technology comprises but is not limited to a TransE model, a TransH model and a RotatE model, then coding neighborhood information of each entity of the dish types by using a graph attention neural network to obtain neighborhood coding representation of each entity of the dish types, further calculating the neighborhood coding representation similarity between every two entity of the dish types, and further obtaining the entity similarity fusing the visual and the neighborhood information, wherein the formula is as follows:
S rec =Sim 1 (ViT(rec 1 ),ViT(rec 2 ))+α 3 Sim 1 (N(rec 1 ),N(rec 2 ));
wherein, rec 1 And rec 2 Representing two dish type entities; viT (rec) 1 ) And ViT (rec) 2 ) Respectively represent to entities rec 1 And rec 2 Extracting a characteristic vector by adopting a ViT model; alpha is alpha 3 As a weighting factor, α 3 A preferred value of (b) is 1.5; n (rec) 1 ) And N (rec) 2 ) Are respectively directed to the entity rec 1 And rec 2 The neighborhood coding obtained by using the graph attention neural network is represented as:
n i =ReLU(U[e ni ;r ni ] T )
Figure BDA0003942348230000141
Figure BDA0003942348230000142
wherein n is i A hidden vector representing the ith neighbor in the neighborhood; u represents a learnable parameter vector; [ e ] a ni ;r ni ] T To represent the entity vector of the ith neighbor as e ni And the relation vector representation r ni Transposition after splicing; a is i RepresentsAttention weight of ith neighbor; n (e) is a neighborhood coding representation of entity e; w 3 And b 2 Representing a learnable parameter matrix and parameter vector; by setting a dish type entity similarity threshold value, wherein the preferred value of the threshold value is 0.9, if the entity similarity of the fusion vision and neighborhood information of two dish type entities is greater than or equal to the dish type entity similarity threshold value, aligning the two entities; if the entity similarity of the fusion vision of the two dish type entities and the text is smaller than the dish type entity similarity threshold value, the two dish type entities are regarded as different entities;
after the diet knowledge map is aligned, redundant entities are eliminated, and a multi-mode knowledge map for healthy diet is obtained.
Further, the ways of storing the multi-modal knowledge graph for healthy diet include, but are not limited to: storing it with the graph database Neo4j and the triple store Jena. When Neo4j is used, the attributes and relations of each dish or food-material entity need to be distinguished, and the attributes are used to add tags to the entities; when Jena is used, attributes and relations are not distinguished, and the (entity, relation, entity) and (entity, attribute, attribute value) triples are stored directly. Meanwhile, because the main-ingredient, auxiliary-ingredient and seasoning attributes already record the food materials of a dish, the corresponding food-material relations are removed when the knowledge graph is stored with Jena.
Further, the visualization style and interaction mode of the multi-modal knowledge graph for healthy diet are designed. Specifically, the complete multi-modal knowledge graph for healthy diet is displayed with a directed-graph data structure, and entities belonging to different concepts are represented by nodes of different colors and sizes. Considering that the graph is dense and that the edges representing relations overlap heavily when the whole graph is visualized, only the edges between nodes are displayed in the static view, without the specific relation names; when a user selects an entity, the relation names associated with the selected entity are displayed. When an entity is selected, the subgraph associated with it is displayed, together with all attribute information of the current entity, including but not limited to the dish or food-material image, main ingredients, seasonings, auxiliary ingredients, taste, cooking time and cooking difficulty. A concept selection switch is designed on the basis of the concepts in the ontology model established in S1; when one or more concepts are selected, all entities corresponding to the selected concepts and their associated subgraphs are displayed. An entity name can be entered in a search box, all entities similar to that name are presented, and the subgraphs associated with them are displayed. The D3 visualization tool is then used to realize the visualization: a color and size attribute are set for each node representing an entity according to its concept, an edge representing the relation is established between the two entities of each triple with its line width and color set, and mouse response events and search response events are implemented according to the interaction mode designed in S5, so that the visualization provides a good human-computer interaction function. The visualization interface of the multi-modal knowledge graph for healthy diet is shown in FIG. 3.
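A D3 force-directed layout typically consumes a {nodes, links} JSON document; the sketch below exports the stored graph in that shape, with node colour and size keyed to the ontology concept of each entity. The file name and the concept-to-style mapping are assumptions.

```python
import json

CONCEPT_STYLE = {"dish": {"color": "#e74c3c", "size": 14},          # illustrative styles per concept
                 "food material": {"color": "#27ae60", "size": 10}}

def export_for_d3(triples, concept_of, path="diet_kg.json"):
    """Write the knowledge graph as the nodes/links JSON used by a D3 force-directed layout."""
    nodes, links, seen = [], [], set()
    for head, relation, tail in triples:
        for ent in (head, tail):
            if ent not in seen:
                seen.add(ent)
                concept = concept_of.get(ent, "other")
                style = CONCEPT_STYLE.get(concept, {"color": "#7f8c8d", "size": 8})
                nodes.append({"id": ent, "concept": concept, **style})
        links.append({"source": head, "target": tail, "relation": relation})
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"nodes": nodes, "links": links}, f, ensure_ascii=False, indent=2)
```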
With the above method for constructing the multi-modal knowledge graph for healthy diet, an ontology model of the healthy diet knowledge graph is designed according to the knowledge system of the diet field, and semi-structured and unstructured data related to recipes, food nutrition and dietary therapy are acquired. An ontology-enhanced diet knowledge extraction model is then provided to automatically extract entities, relations, attributes, attribute values and images from these data, and the diet knowledge graph is established from this information in a knowledge graph representation. Next, a multi-modal entity alignment technique that fuses images and dietary common knowledge is used to eliminate redundant entities and realize knowledge fusion, improving the quality of the diet knowledge graph and obtaining the multi-modal knowledge graph for healthy diet, which is stored in a database. Finally, visualization of the multi-modal knowledge graph for healthy diet is realized and a certain human-computer interaction function is provided, so that users can understand diet knowledge intuitively and efficiently.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for constructing a multi-modal knowledge graph service platform for healthy diet, characterized by comprising the following specific steps:
Step 1: establishing a healthy diet knowledge graph ontology model according to a knowledge system of the diet field;
Step 2: acquiring related data covering recipes, food nutrient elements and dietary therapy by using the healthy diet knowledge graph ontology model;
Step 3: constructing a diet knowledge extraction model based on the healthy diet knowledge graph ontology model, automatically extracting entities, relations, attributes, attribute values and images from the recipe, food nutrient element and dietary therapy related data, and establishing a diet knowledge graph in a knowledge graph representation;
Step 4: eliminating redundant entities of the diet knowledge graph by a multi-modal entity alignment method that fuses images and dietary common knowledge, and obtaining a multi-modal knowledge graph for healthy diet;
Step 5: storing the multi-modal knowledge graph for healthy diet into the constructed service platform and performing visual display.
2. The method for constructing a multi-modal knowledge graph service platform for healthy diet according to claim 1, characterized in that the specific process of step 1 is: establishing a healthy diet knowledge system according to users' needs for a healthy diet by analyzing common knowledge of the diet field and the data characteristics of recipe websites; setting the hierarchical concepts related to recipes, food nutrition and dietary therapy, the relations and attributes among the concepts and their corresponding value ranges according to the characteristics of healthy diet knowledge, and establishing the healthy diet knowledge graph ontology model.
3. The method for constructing a multimodal knowledge graph service platform facing healthy diet according to claim 1, characterized in that the relevant data obtained in step 2 are obtained by: compiling a crawler script by combining a healthy diet knowledge graph ontology model, automatically crawling dishes, food nutrient elements and food therapy related webpage data from a website through the crawler script, wherein the webpage data comprises semi-structured data in the form of an information frame and unstructured data in the form of texts and images, and labeling the names of the dishes or food materials for the images by using webpage subject words for introducing the dishes or food materials.
4. The method for constructing a multi-modal knowledge graph service platform for healthy diet according to claim 3, characterized in that in the step 3, an ontology-enhanced diet knowledge extraction model is constructed according to semi-structured data and unstructured data, and comprises a wrapper and an entity relationship joint extraction model; the wrapper automatically extracts the entity, the attribute value and the relationship among the entities from the semi-structured data; the entity-relationship joint extraction model automatically extracts the entities, the attributes, the attribute values and the relationships among the entities from the unstructured data.
5. The method for constructing a multi-modal knowledge graph service platform for healthy diet according to claim 4, characterized in that the wrapper is constructed with XPath expressions according to the nodes that correspond, in the HTML web page, to data belonging to the concepts contained in the healthy diet knowledge graph ontology model.
6. The method for constructing a multi-modal knowledge graph service platform for healthy diet according to claim 4, characterized in that the constructed entity-relation joint extraction model comprises an entity recognition submodel and a relation extraction submodel, and the entities and the relations between them, the entities and their attributes and the corresponding attribute values are automatically extracted from the unstructured text data; the attributes comprise images of the dishes or food materials; the information associated with each entity is organized as (entity, relation, entity) or (entity, attribute, attribute value) triples; the triples are converted into the diet knowledge graph according to a knowledge graph representation; the representation of the diet knowledge graph includes RDF, RDFS, OWL, N-Triples and XML.
7. The method for constructing the multimodal knowledge graph service platform for healthy diet according to claim 6, wherein the construction process of the entity-relationship joint extraction model comprises the following steps:
step 321: the entity recognition submodel adopts a BERT model to extract the context characteristics of each character in the text, and calculates the probability of all named entity label sequences by using a conditional random field CRF, wherein the expression is as follows:
Figure FDA0003942348220000021
wherein X and Y represent a text input sequence and a named entity tag sequence, respectively; y is i And y i+1 Represents the ith and (i + 1) th tags in the named entity tag sequence Y;
Figure FDA0003942348220000024
representing results obtained by BERT modelA feature representation of the ith character; y' represents any named entity tag sequence; y' i And y' i+1 Represents the ith and (i + 1) th tags in the named entity tag sequence Y';
Figure FDA0003942348220000023
respectively representing the parameters of the conditional random field CRF; p (Y | X) represents the probability of input as a text input sequence X and output as a named entity tag sequence Y;
solving the objective function of the optimal named entity tag sequence as follows:
L ner =argmaxP(Y|X);
step 322: constructing a relationship extraction submodule to extract an entity type part in a named entity label sequence;
taking the relation or attribute of the entity type in the healthy diet knowledge map body model as a newly added label sequence Y o Inputting the named entity tag sequence obtained by the entity identification submodule into the embedding layer to obtain a named entity tag sequence vector representation s Y And a new tag sequence vector representation
Figure FDA0003942348220000022
Representing a named entity tag sequence vector by s Y And a new tag sequence vector representation
Figure FDA0003942348220000031
And (3) jointly inputting the data into a relation classifier to obtain a relation extraction result, wherein the relation classifier is expressed as:
h = ReLU( W_2 · [s_Y; s_Yo]^T + b_1 )

r_i = exp( [W_1 h^T]_i ) / Σ_j exp( [W_1 h^T]_j )

wherein W_1 and W_2 both represent learnable weight matrices; b_1 is a bias vector; [s_Y; s_Yo]^T represents the transpose of the concatenation of the vectors s_Y and s_Yo; h is a hidden vector; r_i represents the probability of the i-th relation; [W_1 h^T]_i and [W_1 h^T]_j respectively represent the i-th and j-th dimension values of W_1 h^T; ReLU represents the rectified linear unit function, specifically:

ReLU(x) = max(0, x)

wherein x represents a variable;
the objective function for optimizing the relation classifier is:

L_re = - Σ_{i=1}^{n} Q(r_i) · log(r_i)

wherein n represents the number of all relations; Q(r_i) is the indicator of the i-th relation r_i: if the i-th relation r_i is the correct relation for the currently input text, Q(r_i) = 1, otherwise Q(r_i) = 0;
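The following PyTorch sketch mirrors the relation classifier of step 322 under stated assumptions: both tag sequences share one embedding layer and are mean-pooled into s_Y and s_Yo (the pooling is an assumption), then concatenated, passed through a ReLU layer and a softmax over the candidate relations, and trained with the cross-entropy objective above; all dimensions and vocabulary sizes are illustrative.

```python
# Hedged sketch of the relation classifier; shapes and sizes are assumptions.
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, num_tags: int, num_relations: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_tags, dim)              # embedding layer for tag sequences
        self.hidden = nn.Linear(2 * dim, dim)                  # W_2 and b_1
        self.out = nn.Linear(dim, num_relations, bias=False)   # W_1

    def forward(self, ner_tags, new_tags):
        s_y = self.embed(ner_tags).mean(dim=1)                 # named entity tag sequence vector s_Y
        s_yo = self.embed(new_tags).mean(dim=1)                # added tag sequence vector s_Yo
        h = torch.relu(self.hidden(torch.cat([s_y, s_yo], dim=-1)))  # hidden vector h
        return torch.log_softmax(self.out(h), dim=-1)          # log r_i over all relations

model = RelationClassifier(num_tags=12, num_relations=8)
logp = model(torch.randint(0, 12, (2, 16)), torch.randint(0, 12, (2, 16)))
loss_re = nn.NLLLoss()(logp, torch.tensor([3, 5]))             # cross-entropy against correct relations
print(float(loss_re))
```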
Step 323: combining the objective functions of the entity recognition submodel and the relation extraction submodel into an overall objective function, and using the overall objective function to jointly learn the ontology-enhanced entity-relationship joint extraction model, wherein the overall objective function is expressed as:

L = L_ner + α_1 · L_re

wherein α_1 is a weighting factor.
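A toy continuation showing the joint objective of step 323; the two loss values and the weighting factor α_1 are stand-ins, since in the full model L_ner and L_re would be computed from the same batch by the two submodels.

```python
# Toy illustration of the overall objective L = L_ner + alpha_1 * L_re.
import torch

loss_ner = torch.tensor(1.2, requires_grad=True)   # stand-in for L_ner from step 321
loss_re = torch.tensor(0.7, requires_grad=True)    # stand-in for L_re from step 322
alpha_1 = 0.5                                      # weighting factor (assumed value)

total = loss_ner + alpha_1 * loss_re               # overall objective
total.backward()                                   # gradients flow to both submodels
print(float(total))
```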
8. The method for constructing the multi-modal knowledge graph service platform for healthy diet according to claim 6, wherein the specific process of step 4 is as follows:
Step 41: for the food-material-type entities in the diet knowledge graph, constructing a food-domain synonym mapping table according to common food knowledge, and converting all entities matched by the mapping table into a uniform expression;
extracting a feature vector from the entity image of each food-material-type entity with a Transformer-based ViT model, calculating the visual similarity between the image feature vectors of every two food-material-type entities, and simultaneously calculating the string similarity of every two entities, to obtain an entity similarity that fuses vision and text, with the formula:

S_mat = Sim_1( ViT(m_1), ViT(m_2) ) + α_2 · Sim_2( C(m_1), C(m_2) );

wherein m_1 and m_2 represent two food-material-type entities; ViT(m_1) and ViT(m_2) respectively represent the feature vectors extracted for entities m_1 and m_2 with the ViT model; C(m_1) and C(m_2) are respectively the name strings of entities m_1 and m_2; Sim_1 is the visual similarity function; Sim_2 is the string similarity function; α_2 is a weight factor;
if the fused visual-textual entity similarity of two food-material-type entities is greater than or equal to the food-material-type entity similarity threshold, the two entities are aligned; if it is smaller than the threshold, the two food-material-type entities are regarded as different entities;
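A hedged sketch of the fused similarity of step 41 on precomputed data: cosine similarity stands in for Sim_1 over ViT image features (assumed to be already extracted), and the character-level ratio from Python's difflib stands in for the string similarity Sim_2; the feature dimension, α_2 and the threshold are assumed values.

```python
# Fused visual-textual similarity for two food-material-type entities.
from difflib import SequenceMatcher
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def fused_similarity(vit_1, vit_2, name_1, name_2, alpha_2=0.5):
    visual = cosine(vit_1, vit_2)                            # Sim_1(ViT(m1), ViT(m2))
    textual = SequenceMatcher(None, name_1, name_2).ratio()  # Sim_2(C(m1), C(m2))
    return visual + alpha_2 * textual                        # S_mat

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=768), rng.normal(size=768)          # stand-in ViT feature vectors
s_mat = fused_similarity(v1, v2, "scallion", "green onion")
print(s_mat, s_mat >= 1.1)                                   # 1.1 is an assumed threshold
```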
Step 42: extracting a feature vector from the entity image of each dish-type entity in the diet knowledge graph with a Transformer-based ViT model, calculating the visual similarity between the image feature vectors of every two dish-type entities, and converting each entity and relation in the diet knowledge graph into a vector representation with a knowledge graph embedding method;

adopting a graph attention neural network to encode the neighborhood information of each dish-type entity into a neighborhood encoding representation, and calculating the similarity of the neighborhood encodings of every two dish-type entities, to obtain an entity similarity that fuses vision and neighborhood information, with the formula:

S_rec = Sim_1( ViT(rec_1), ViT(rec_2) ) + α_3 · Sim_1( N(rec_1), N(rec_2) );

wherein rec_1 and rec_2 represent two dish-type entities; ViT(rec_1) and ViT(rec_2) respectively represent the feature vectors extracted for entities rec_1 and rec_2 with the ViT model; α_3 is a weight factor; N(rec_1) and N(rec_2) are respectively the neighborhood encoding representations of entities rec_1 and rec_2 obtained with the graph attention neural network; the graph attention neural network is expressed as:
n_i = ReLU( U [e_{ni}; r_{ni}]^T )

a_i = exp(n_i) / Σ_j exp(n_j)

N(e) = ReLU( W_3 · Σ_i a_i · e_{ni} + b_2 )

wherein n_i represents the hidden representation of the i-th neighbor in the neighborhood; U represents a learnable parameter vector; [e_{ni}; r_{ni}]^T represents the transpose of the concatenation of the entity vector representation e_{ni} and the relation vector representation r_{ni} of the i-th neighbor; a_i represents the attention weight of the i-th neighbor; N(e) is the neighborhood encoding representation of entity e; W_3 and b_2 represent a learnable parameter matrix and parameter vector;
if the fused visual-neighborhood entity similarity of two dish-type entities is greater than or equal to the dish-type entity similarity threshold, the two entities are aligned; if it is smaller than the threshold, the two dish-type entities are regarded as different entities.
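The following NumPy sketch follows the neighborhood encoding as reconstructed above: each neighbor's concatenated entity/relation embedding is scored with the parameter vector U, the scores are softmax-normalized into attention weights, and the weighted neighbor embeddings are projected with W_3 and b_2; all shapes and values are toy assumptions, and the exact aggregation is itself an assumption since only the variable definitions survive in the text.

```python
# Toy neighborhood encoding N(e) for one dish-type entity.
import numpy as np

rng = np.random.default_rng(2)
dim = 8
neighbours = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(3)]  # (e_ni, r_ni) pairs
U = rng.normal(size=2 * dim)                      # learnable parameter vector U
W3 = rng.normal(size=(dim, dim))                  # learnable matrix W_3
b2 = rng.normal(size=dim)                         # learnable vector b_2

concat = [np.concatenate([e, r]) for e, r in neighbours]
n = np.maximum(0.0, np.array([U @ c for c in concat]))     # n_i = ReLU(U [e;r]^T)
a = np.exp(n) / np.exp(n).sum()                            # attention weights a_i
agg = sum(a_i * e for a_i, (e, _) in zip(a, neighbours))   # attention-weighted neighbor entities
N_e = np.maximum(0.0, W3 @ agg + b2)                       # neighborhood encoding N(e)
print(N_e)
```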
9. The method for constructing the multi-modal knowledge graph service platform for healthy diet according to claim 6, wherein the way of storing the multi-modal knowledge graph for healthy diet comprises: storing the healthy-diet-oriented multi-modal knowledge graph with the graph database Neo4j and the triple store Jena; when Neo4j is used for storage, the attributes and relations of each dish-type or food-material-type entity are distinguished, and the attributes are used to add tags to the entities; when Jena is used for storage, the food relations are removed, the attributes and relations are not distinguished, and entity-relation-entity and entity-attribute-attribute value information is stored in the form of triples.
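To illustrate the two storage paths of claim 9, the sketch below composes a Cypher statement of the kind one might send to Neo4j (attributes kept as node properties and usable as tags) and equivalent N-Triples lines for a Jena triple store; the URIs, labels and property names are assumptions, and no database connection is made here.

```python
# Illustrative storage payloads for the two back ends; printed rather than executed.
cypher = """
MERGE (d:Dish {name: 'MapoTofu', calories: '238 kcal'})
MERGE (f:FoodMaterial {name: 'Tofu'})
MERGE (d)-[:HAS_INGREDIENT]->(f)
"""

n_triples = "\n".join([
    "<http://example.org/diet#MapoTofu> <http://example.org/diet#hasIngredient> <http://example.org/diet#Tofu> .",
    '<http://example.org/diet#MapoTofu> <http://example.org/diet#calories> "238 kcal" .',
])

print(cypher)
print(n_triples)
# With a running Neo4j instance, `cypher` could be executed through the official
# Python driver (neo4j.GraphDatabase); `n_triples` could be loaded into Jena/Fuseki.
```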
10. The method for constructing the multi-modal knowledge graph service platform for healthy diet as claimed in claim 6, wherein visually displaying the multi-modal knowledge graph for healthy diet comprises: displaying the complete healthy-diet-oriented multi-modal knowledge graph with a directed-graph data structure, with entities belonging to different concepts shown in different colors; when one or more concepts are selected, visually displaying the subgraphs associated with the entities of the selected concepts; when an entity is selected, visually displaying the subgraph associated with the selected entity; and when an entity name is entered in the search box, presenting all entities similar to that name and visually displaying the subgraphs associated with all of the similar entities.
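As an illustration of the visual display of claim 10, the following networkx sketch keeps the graph as a directed graph, colors entities by concept, and returns the subgraph associated with a selected entity; the entities, relations and colors are illustrative assumptions.

```python
# Directed graph with per-concept colors and entity-centered subgraph selection.
import networkx as nx

CONCEPT_COLOURS = {"Dish": "#e8743b", "FoodMaterial": "#19a979", "Nutrient": "#5899da"}

g = nx.DiGraph()
g.add_node("MapoTofu", concept="Dish", color=CONCEPT_COLOURS["Dish"])
g.add_node("Tofu", concept="FoodMaterial", color=CONCEPT_COLOURS["FoodMaterial"])
g.add_node("Protein", concept="Nutrient", color=CONCEPT_COLOURS["Nutrient"])
g.add_edge("MapoTofu", "Tofu", relation="hasIngredient")
g.add_edge("Tofu", "Protein", relation="contains")

def subgraph_for(entity: str) -> nx.DiGraph:
    """Subgraph associated with a selected entity: the entity and its direct neighbors."""
    nodes = {entity, *g.successors(entity), *g.predecessors(entity)}
    return g.subgraph(nodes)

print(subgraph_for("Tofu").nodes(data=True))
```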
Priority Applications (1)

Application Number: CN202211426196.1A | Priority Date: 2022-11-14 | Filing Date: 2022-11-14 | Title: Method for constructing multi-modal knowledge graph service platform for healthy diet

Publications (1)

Publication Number: CN115714001A | Publication Date: 2023-02-24 | Legal Status: Pending

Family ID: 85233124

Country Status (1)

Country: CN | Publication: CN115714001A (en)
Cited By (6)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN116417115A (en) * | 2023-06-07 | 2023-07-11 | 北京四海汇智科技有限公司 | Personalized nutrition scheme recommendation method and system for gestational diabetes patients
CN116417115B (en) * | 2023-06-07 | 2023-12-01 | 北京四海汇智科技有限公司 | Personalized nutrition scheme recommendation method and system for gestational diabetes patients
CN116467482A (en) * | 2023-04-04 | 2023-07-21 | 广东省科学院广州地理研究所 | Multi-mode plant knowledge query method, system and computer equipment
CN116467482B (en) * | 2023-04-04 | 2024-04-09 | 广东省科学院广州地理研究所 | Multi-mode plant knowledge query method, system and computer equipment
CN117373619A (en) * | 2023-12-08 | 2024-01-09 | 四川省肿瘤医院 | Recipe generation method and generation system based on intestinal ostomy bag excrement monitoring result
CN117373619B (en) * | 2023-12-08 | 2024-03-05 | 四川省肿瘤医院 | Recipe generation method and generation system based on intestinal ostomy bag excrement monitoring result


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination