CN117725995B - Knowledge graph construction method, device and medium based on large model - Google Patents


Info

Publication number
CN117725995B
CN117725995B (application CN202410179462.8A; other versions CN117725995A)
Authority
CN
China
Prior art keywords: information, score, tuple information, tuple, original text
Prior art date
Legal status: Active
Application number
CN202410179462.8A
Other languages
Chinese (zh)
Other versions
CN117725995A (en)
Inventor
邓邱伟
张旭
司福东
徐静
刘朝振
Current Assignee
Qingdao Haier Technology Co Ltd
Qingdao Haier Intelligent Home Appliance Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Qingdao Haier Intelligent Home Appliance Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Qingdao Haier Intelligent Home Appliance Technology Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority claimed from application CN202410179462.8A
Publication of CN117725995A
Application granted
Publication of CN117725995B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Machine Translation (AREA)

Abstract

The application discloses a knowledge graph construction method, device and medium based on a large model, relating to the technical field of the smart home. The method comprises the following steps: inputting an original text and prompt information into a large model, so that the large model extracts information from the original text according to the prompt information to obtain first tuple information; inputting the original text, the first tuple information and a discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, the first evaluation information at least comprising a first score and a second score; calculating the first score and the second score based on a first formula to determine a target score of the first tuple information; and adjusting the first tuple information according to the target score, and constructing a knowledge graph corresponding to the original text from the adjusted first tuple information, thereby improving the efficiency of knowledge graph generation.

Description

Knowledge graph construction method, device and medium based on large model
Technical Field
The application relates to the technical field of the smart home, and in particular to a knowledge graph construction method, device and medium based on a large model.
Background
A domain knowledge graph is the form that presents a domain's specialized knowledge most directly and completely; it is widely used in knowledge-graph-based question answering, relation exploration and the like, and offers knowledge that is accurate and networked. However, because domain knowledge is large in volume and disorganized, it is difficult to comb through manually. In the home-appliance field, for example, knowledge exists in text form and must first be summarized into effective knowledge before clean, structured knowledge can be obtained. How to extract effective knowledge efficiently is therefore an important part of knowledge graph formation, and remains a great challenge.
For the problem in the related art that manually extracting knowledge from an original text and generating a knowledge graph from the extracted knowledge is inefficient, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the application provide a knowledge graph construction method, device and medium based on a large model, to at least solve the problem in the related art that generating a knowledge graph by manually extracting knowledge from an original text and then generating the knowledge graph from the extracted knowledge is inefficient.
According to an embodiment of the present application, there is provided a knowledge graph construction method based on a large model, including: inputting an original text and prompt information into the large model, so that the large model extracts information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a usage instruction of a target object; the first tuple information includes first information and/or second information; the first information includes the attribute information and the relationship between the attribute information and the target object, and the second information includes the usage instruction and the relationship between the usage instruction and the target object; inputting the original text, the first tuple information and a discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information at least includes a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information; calculating the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula combines n = 2 values into the target score y₁ of the first tuple information, p₁ being a value determined from the first score and p₂ a value determined from the second score; and adjusting the first tuple information according to the target score of the first tuple information, and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information.
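The patent text defines the inputs to the first formula (n = 2 values p₁ and p₂ derived from the accuracy and integrity scores) but the formula image itself is not reproduced here. As a minimal sketch under that assumption, taking a geometric mean of the n values is one common way to aggregate quality scores; the function name and the choice of geometric mean are illustrative, not the patent's confirmed formula:

```python
def target_score(scores):
    """Combine n quality scores (here n = 2: p1 from the accuracy score,
    p2 from the integrity score) into a single target score y1.
    Geometric mean is an assumed aggregation; the patent's exact
    formula is not reproduced in the source text."""
    n = len(scores)
    product = 1.0
    for p in scores:
        product *= p
    return product ** (1.0 / n)

# Example: p1 = 0.9 (accuracy-derived), p2 = 0.8 (integrity-derived)
y1 = target_score([0.9, 0.8])
```

A multiplicative aggregate like this penalizes tuple sets that score well on one axis but poorly on the other, which matches the method's intent of requiring both accurate and complete extraction.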
In an exemplary embodiment, in the case that the discrimination rules are a first discrimination rule, a second discrimination rule and a third discrimination rule, inputting the original text, the first tuple information and the discrimination rules into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rules to obtain the first evaluation information, includes: inputting the original text, the first tuple information, the first discrimination rule, the second discrimination rule and the third discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the first discrimination rule to obtain the first score, the error tuple information in the first tuple information, and the correct tuple information corresponding to the error tuple information; the large model judges the first tuple information according to the original text and the second discrimination rule to obtain the second score, and judges the first tuple information according to the original text and the third discrimination rule to obtain second tuple information. The first discrimination rule is used for instructing the large model to score the accuracy of the first tuple information, determine the error tuple information in the first tuple information, and determine the correct tuple information corresponding to the error tuple information; the second discrimination rule is used for instructing the large model to score the integrity of the first tuple information; the third discrimination rule is used for instructing the large model to extract, again according to the prompt information, information of the original text other than the first tuple information; and the second tuple information is the tuple information in the original text other than the first tuple information.
In one exemplary embodiment, adjusting the first tuple information according to the target score of the first tuple information includes: determining a first magnitude relation between the target score of the first tuple information and a first preset threshold; inputting the original text, the first tuple information and the prompt information into the large model again in the case that the first magnitude relation indicates that the target score of the first tuple information is greater than or equal to the first preset threshold, so that the large model extracts information from the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information; and determining a union of the correct tuple information in the first tuple information and the second tuple information in the case that the first magnitude relation indicates that the target score of the first tuple information is less than the first preset threshold.
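The branch above can be sketched in a few lines. This follows the text as written (at or above the threshold, re-run extraction; below it, union the corrected tuples with the newly found second tuples); `re_extract` is a hypothetical stand-in for the second large-model call, and tuples are modeled as hashable items:

```python
def adjust_first_tuples(score, first_threshold, re_extract,
                        correct_tuples, second_tuples):
    """Adjustment step: re-extract when the target score is at or above
    the first preset threshold; otherwise take the union of the correct
    tuple information and the second tuple information."""
    if score >= first_threshold:
        # Stand-in for re-prompting the large model with the original
        # text, the first tuple information and the prompt information.
        return re_extract()
    return set(correct_tuples) | set(second_tuples)
```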
In an exemplary embodiment, before calculating the first score and the second score based on the first formula, the method further comprises: determining whether the first score is within a first numerical range and/or the second score is within a second numerical range; inputting the original text, the first tuple information and the prompt information into the large model again in the case that the first score is in the first numerical range and/or the second score is in the second numerical range, so that the large model extracts information from the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information; inputting the original text, the adjusted first tuple information and the discrimination rule into the large model, so that the large model judges the adjusted first tuple information according to the original text and the discrimination rule to obtain second evaluation information, wherein the second evaluation information at least comprises a third score indicating the accuracy of the second tuple information and a fourth score indicating the integrity of the second tuple information; calculating the third score and the fourth score based on a second formula to determine a target score of the second tuple information, wherein the second formula combines p₃, a value determined from the third score, and p₄, a value determined from the fourth score, into the target score y₂ of the second tuple information; adjusting the adjusted first tuple information again according to the target score of the second tuple information to obtain third tuple information; and constructing a knowledge graph corresponding to the original text according to the third tuple information.
In an exemplary embodiment, adjusting the adjusted first tuple information again according to the target score of the second tuple information includes: determining the number of times information extraction has been performed on the original text in the case that the target score of the second tuple information is smaller than the first preset threshold; prohibiting the adjusted first tuple information from being adjusted again according to the target score of the second tuple information when the number of times is greater than or equal to a preset number of times; and adjusting the adjusted first tuple information again according to the target score of the second tuple information in the case that the number of times is smaller than the preset number of times.
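The retry-limiting rule above amounts to a small predicate: a below-threshold score triggers another adjustment round only while the extraction count stays under the preset cap. A minimal sketch (parameter names are illustrative):

```python
def should_adjust_again(score, first_threshold,
                        extraction_count, max_extractions):
    """Return True only when the target score of the second tuple
    information is below the first preset threshold AND the number of
    extractions performed on the original text is still under the
    preset limit; otherwise the adjustment loop stops."""
    if score >= first_threshold:
        return False
    return extraction_count < max_extractions
```

The cap keeps the extract-evaluate-adjust loop from running indefinitely on a text the model cannot score above the threshold.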
In an exemplary embodiment, before calculating the first score and the second score based on the first formula, the method further comprises: determining whether the first score is in a third numerical range and the second score is in a fourth numerical range, wherein a minimum value of the third numerical range is greater than a maximum value of the first numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range; updating the error tuple information in the first tuple information to the correct tuple information to obtain fourth tuple information when the first score is in the third numerical range and the second score is in the fourth numerical range; and constructing a knowledge graph corresponding to the original text according to the fourth tuple information.
In an exemplary embodiment, before calculating the first score and the second score based on the first formula, the method further comprises: determining whether the first score is in a fifth numerical range and the second score is in a fourth numerical range, wherein a minimum value of the fifth numerical range is greater than a maximum value of the third numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range; determining a union of the correct tuple information and the second tuple information if the first score is in the fifth range of values and the second score is in the fourth range of values; and constructing a knowledge graph corresponding to the original text according to the union.
In one exemplary embodiment, calculating the first score and the second score based on the first formula to determine the target score of the first tuple information includes: determining whether the first score is in the first numerical range and the second score is in a sixth numerical range, wherein a minimum value of the sixth numerical range is greater than a maximum value of the second numerical range and a maximum value of the sixth numerical range is less than a minimum value of the fourth numerical range; determining a first difference between the first score and a first preset value and a second difference between the second score and a second preset value when the first score is not in the first numerical range and the second score is in the sixth numerical range; and inputting the first difference and the second difference into the first formula to determine the target score of the first tuple information, wherein y₁ is the target score of the first tuple information, n = 2, p₁ is the first difference, and p₂ is the second difference.
In one exemplary embodiment, inputting the original text and the prompt information into the large model includes: determining a domain type corresponding to the original text, and determining training data corresponding to the domain type; determining a loss function of the large model, and training the large model according to the loss function and the training data to obtain a trained large model; and inputting the original text and the prompt information into the trained large model.
In one exemplary embodiment, training the large model according to the loss function and the training data includes: determining the data amount of the training data and determining a second magnitude relation between the data amount and a second preset threshold; training the parameters in every neural network layer of the large model according to the loss function and the training data in the case that the second magnitude relation indicates that the data amount is greater than or equal to the second preset threshold; and, in the case that the second magnitude relation indicates that the data amount is smaller than the second preset threshold, freezing a target neural network layer in the large model and training the parameters in the other neural network layers of the large model according to the loss function and the training data, wherein the other neural network layers are the neural network layers of the large model other than the target neural network layer.
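The data-dependent fine-tuning strategy above (full fine-tuning with enough data, layer freezing with scarce data) can be sketched with a plain-Python stand-in for the model. In a real framework this would toggle `requires_grad` on parameter groups (e.g. in PyTorch); the stand-in keeps only the branching logic, and the choice of which layers count as the "target" frozen layers is an assumption:

```python
class Layer:
    """Stand-in for one neural network layer of the large model."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def configure_training(layers, data_amount, second_threshold, target_layers):
    """With data_amount >= second_threshold, mark every layer trainable;
    otherwise freeze the named target layers and train only the rest.
    Returns the names of the layers that will be trained."""
    for layer in layers:
        if data_amount >= second_threshold:
            layer.trainable = True
        else:
            layer.trainable = layer.name not in target_layers
    return [layer.name for layer in layers if layer.trainable]
```

Freezing the lower layers when data is scarce is the usual motivation: it reduces the number of trainable parameters and the risk of overfitting a small domain corpus.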
In an exemplary embodiment, inputting the original text and the prompt information into the large model, so that the large model performs information extraction on the original text according to the prompt information to obtain the first tuple information, includes: inputting the original text and the prompt information into the large model so that the large model learns the data format of sample data, and extracting information from the original text according to the data format and an output indication to obtain the first tuple information, wherein the prompt information includes the sample data and the output indication.
In an exemplary embodiment, constructing the knowledge graph corresponding to the original text according to the adjusted first tuple information includes at least one of the following: in the case that the adjusted first tuple information includes the first information, establishing a first node and a second node in the knowledge graph according to the identification information of the target object and the attribute information of the target object, and determining the edge feature between the first node and the second node according to the relationship between the attribute information and the target object; in the case that the adjusted first tuple information includes the second information, establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the usage instruction of the target object, and determining the edge feature between the third node and the fourth node according to the relationship between the usage instruction and the target object; and in the case that the adjusted first tuple information includes the first information and the second information, establishing the first node and the second node in the knowledge graph according to the identification information of the target object and the attribute information of the target object, determining the edge feature between the first node and the second node according to the relationship between the attribute information and the target object, establishing the third node and the fourth node in the knowledge graph according to the identification information of the target object and the usage instruction of the target object, and determining the edge feature between the third node and the fourth node according to the relationship between the usage instruction and the target object.
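The node-and-edge construction above reduces to: one node per target object, one node per attribute value or usage instruction, and an edge whose feature is the relationship. A minimal sketch, representing tuples as (value, relation) pairs and edges as (object, relation, value) triples (the data shapes are illustrative assumptions):

```python
def build_graph(object_id, first_info=(), second_info=()):
    """Build nodes and edges from adjusted tuple information.
    first_info:  (attribute_value, relation) pairs, e.g. ("red", "color")
    second_info: (usage_text, relation) pairs, e.g.
                 ("keep away from humidity", "precaution")
    Each value becomes a node; the relation becomes the edge feature
    between the target-object node and that value's node."""
    nodes = {object_id}
    edges = set()
    for value, relation in list(first_info) + list(second_info):
        nodes.add(value)
        edges.add((object_id, relation, value))
    return nodes, edges
```

In practice the same triples could be loaded into a graph library or a graph database; the set-based form here just makes the node/edge bookkeeping explicit.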
According to another embodiment of the present application, there is also provided a knowledge graph construction apparatus based on a large model, including: a first input module, configured to input an original text and prompt information into the large model, so that the large model extracts information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a usage instruction of a target object; the first tuple information includes first information and/or second information; the first information includes the attribute information and the relationship between the attribute information and the target object, and the second information includes the usage instruction and the relationship between the usage instruction and the target object; a second input module, configured to input the original text, the first tuple information and the discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information at least includes a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information; a computing module, configured to calculate the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula combines n = 2 values into the target score y₁ of the first tuple information, p₁ being a value determined from the first score and p₂ a value determined from the second score; and a construction module, configured to adjust the first tuple information according to the target score of the first tuple information and construct a knowledge graph corresponding to the original text according to the adjusted first tuple information.
According to still another aspect of the embodiments of the present application, there is also provided a computer-readable medium having a program stored therein, wherein the program is configured to execute the above-described knowledge graph construction method based on a large model when run.
According to still another aspect of the embodiment of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the knowledge graph construction method based on the large model through the computer program.
In the embodiment of the application, an original text and prompt information are input into the large model, so that the large model extracts information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a usage instruction of a target object; the first tuple information includes first information and/or second information; the first information includes the attribute information and the relationship between the attribute information and the target object, and the second information includes the usage instruction and the relationship between the usage instruction and the target object; the original text, the first tuple information and the discrimination rule are input into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information at least includes a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information; the first score and the second score are calculated based on a first formula to determine a target score of the first tuple information, wherein the first formula combines n = 2 values into the target score y₁ of the first tuple information, p₁ being a value determined from the first score and p₂ a value determined from the second score; and the first tuple information is adjusted according to the target score of the first tuple information, and a knowledge graph corresponding to the original text is constructed according to the adjusted first tuple information.
In other words, in the embodiment of the application, the large model extracts information from the original text according to the input prompt information to obtain tuple information; the original text and the discrimination rule are then input into the large model again, so that the large model evaluates the generated tuple information; the tuple information is adjusted according to the evaluation information; and a knowledge graph is finally generated from the adjusted tuple information. This technical scheme solves the problem that manually extracting knowledge from an original text and generating a knowledge graph from the extracted knowledge is inefficient, improves the rate of knowledge extraction, and improves the efficiency of knowledge graph generation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of a knowledge graph construction method based on a large model according to an embodiment of the present application;
FIG. 2 is a flow chart of a knowledge graph construction method based on a large model, in accordance with an embodiment of the application;
FIG. 3 is an overall architecture diagram of a knowledge graph construction method based on a large model, in accordance with an embodiment of the application;
FIG. 4 is a schematic application diagram of a knowledge graph construction method based on a large model according to an embodiment of the application;
FIG. 5 is a block diagram of a knowledge graph construction apparatus based on a large model, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like herein are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the application, a knowledge graph construction method based on a large model is provided. The method is widely applicable to whole-house intelligent digital control scenarios such as the smart home (Smart Home), the smart home device ecosystem, and the intelligent house (Intelligence House) ecosystem. Optionally, in the present embodiment, the knowledge graph construction method based on the large model may be applied in a hardware environment constituted by the terminal device 102 and the server 104 shown in fig. 1. As shown in fig. 1, the server 104 is connected to the terminal device 102 through a network and may be used to provide services (such as application services) for the terminal or for a client installed on the terminal; a database may be set on the server, or independently of it, to provide data storage services for the server 104; and cloud computing and/or edge computing services may be configured on the server, or independently of it, to provide data computing services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth. The terminal device 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, an intelligent air conditioner, an intelligent smoke machine, an intelligent refrigerator, an intelligent oven, an intelligent cooking range, an intelligent washing machine, an intelligent water heater, an intelligent washing device, an intelligent dish washer, an intelligent projection device, an intelligent television, an intelligent clothes hanger, an intelligent curtain, an intelligent video device, an intelligent socket, an intelligent sound box, an intelligent fresh air device, an intelligent kitchen and toilet device, an intelligent bathroom device, an intelligent sweeping robot, an intelligent window cleaning robot, an intelligent mopping robot, an intelligent air purifying device, an intelligent steam box, an intelligent microwave oven, an intelligent kitchen appliance, an intelligent purifier, an intelligent water dispenser, an intelligent door lock, and the like.
In this embodiment, a knowledge graph construction method based on a large model is provided and applied to the terminal device described above. Fig. 2 is a flowchart of a knowledge graph construction method based on a large model according to an embodiment of the present application, and the flow includes the following steps:
Step S202, inputting an original text and prompt information into the large model, so that the large model extracts information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a usage instruction of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object; the second information includes: the usage instruction, and the relationship between the usage instruction and the target object;
it should be noted that, the large model may be understood as a complex machine learning model or a deep learning model; or the large model can be understood as a large model based on a large-scale corpus (including language training materials such as sentences and paragraphs), designing a language model training task, training a large-scale neural network algorithm structure to learn and realize, and finally obtaining the large-scale neural network algorithm structure and parameters, wherein the finally obtained large-scale neural network algorithm structure and parameters are the large model.
For example, in the case where the attribute information is red and the target object is a refrigerator, the relationship between the attribute information and the target object is: color; when the attribute information is xxx model and the target object is a refrigerator, the relationship between the attribute information and the target object is: model number.
In the case where the usage instruction is that the temperature of the refrigerating chamber is set between 0 and 5 °C and the target object is a refrigerator, the relationship between the usage instruction and the target object is: usage mode. In the case where the usage instruction is that the refrigerator is prohibited from being placed in a humid environment and the target object is a refrigerator, the relationship between the usage instruction and the target object is: precautions.
Optionally, before inputting the original text and the prompt information into the large model, the method further includes: receiving an original text input by a target object; performing word segmentation on the original text to obtain a plurality of words corresponding to the original text; performing semantic understanding on the words to determine the domain corresponding to the original text; determining, among a plurality of prompt information sets, a target prompt information set corresponding to the domain of the original text, wherein the plurality of prompt information sets respectively correspond to domains of different original texts; and determining the prompt information from the target prompt information set.
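The domain-based prompt-selection step above can be sketched as follows. This is a minimal illustration, assuming a simple keyword-overlap classifier in place of real word segmentation and semantic understanding; all names, keyword lists, and prompt texts are illustrative, not from the patent.

```python
# Illustrative prompt sets per domain; the texts are stand-ins.
PROMPT_SETS = {
    "home_appliance": ["You are now a text extraction expert.",
                       "Please extract all triples related to attributes and functions."],
    "medical": ["You are now a medical knowledge extraction expert."],
}

# Stand-in for semantic understanding: keyword sets per domain.
DOMAIN_KEYWORDS = {
    "home_appliance": {"refrigerator", "air", "conditioner", "washing", "machine"},
    "medical": {"patient", "symptom", "diagnosis"},
}

def split_words(text: str) -> list[str]:
    # Stand-in for a real word-segmentation step.
    return text.lower().replace(",", " ").replace(".", " ").split()

def select_prompts(text: str) -> list[str]:
    words = set(split_words(text))
    # Pick the domain whose keyword set overlaps the text the most.
    domain = max(DOMAIN_KEYWORDS, key=lambda d: len(words & DOMAIN_KEYWORDS[d]))
    return PROMPT_SETS[domain]

prompts = select_prompts("The s-type refrigerator is red and weighs 100kg.")
```

In practice the segmentation and domain classification would themselves be model-based; the sketch only shows the lookup structure of mapping a domain to its prompt set.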
Step S204, inputting the original text, the first tuple information and a discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, where the first evaluation information at least includes: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
step S206, calculating the first score and the second score based on a first formula to determine a target score of the first tuple information, where the first formula is: y1 = (ReLU((p1 − 10)/25) + ReLU((p2 − 10)/25)) / n, where y1 is the target score of the first tuple information, n = 2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
step S208, adjusting the first tuple information according to the target score of the first tuple information, and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information.
In the embodiment of the present application, the large model extracts information from the original text according to the input prompt information to obtain tuple information; the original text and the discrimination rule are then input into the large model again, so that the large model evaluates the generated tuple information; the tuple information is adjusted according to the evaluation information; and finally a knowledge graph is generated according to the adjusted tuple information. This technical solution solves the problem of low knowledge-graph generation efficiency caused by manually extracting knowledge from the original text and generating the knowledge graph from the extracted knowledge, thereby improving both the rate of knowledge extraction and the efficiency of knowledge graph generation.
Alternatively, the above step S202 may be implemented by:
inputting the original text and the prompt information into the large model, so that the large model learns the data format of sample data and extracts information from the original text according to the data format and an output indication to obtain the first tuple information, wherein the prompt information includes: the sample data and the output indication.
Optionally, the prompt information further includes the following information:
Role: specifies the persona the model should adopt, for example: the large model is now a text extraction expert;
Instruction: the specific task or operation the model is to execute. For example: extract knowledge based on the input text information.
Context information (corresponding to the sample data in the above embodiment): provides context, such as sample data and the corresponding output format, which helps guide the model to process input according to the sample data and to output results in the specified format. For example:
Text: the s-type air conditioner is white in color, weighs 50 kg, and has a three-year maintenance period; its functions include: refrigerating, heating and dehumidifying;
Returning: [(air conditioner, color, white), (air conditioner, model, s), (air conditioner, maintenance time, three years), (air conditioner, function, refrigeration, heating, dehumidification)].
Data to be processed (equivalent to the original text in the above embodiment): the data that the model needs to process. For example: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood.
Output indication: instructs the model to return results according to a given sample or in a given format. For example: please extract all triples in the text information that relate to attributes, functions, maintenance, after-sales, prices, parameters and the like, and return the triples directly.
In the embodiment of the present application, the role, context information, data to be processed, instruction and output indication are used in the process of generating the triples; since the large model is trained on large-scale data, it can process the original text according to the prompt information and return the result.
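The five prompt components above can be assembled into a single prompt string as in the following sketch. The wording of each component is a paraphrase of the examples in this section, not the patent's exact prompt; the function name and field order are illustrative.

```python
# Assemble role, instruction, context sample, data to be processed,
# and output indication into one prompt for the large model.
def build_prompt(role, instruction, sample_text, sample_return, data, output_indication):
    return "\n".join([
        role,                                   # persona the model adopts
        instruction,                            # the task to execute
        f"Example text: {sample_text}",         # context: sample input
        f"Example return: {sample_return}",     # context: expected output format
        f"Text: {data}",                        # data to be processed
        output_indication,                      # how to return the result
    ])

prompt = build_prompt(
    role="You are now a text extraction expert.",
    instruction="Extract knowledge from the input text as triples.",
    sample_text="The s-type air conditioner is white, weighs 50 kg, ...",
    sample_return="[(air conditioner, color, white), (air conditioner, model, s)]",
    data="The s-type refrigerator is red, weighs 100 kg, ...",
    output_indication="Please extract all triples and return them directly.",
)
```

Keeping the components separate in this way lets the context sample and output indication be swapped per domain while the role and instruction stay fixed.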
Optionally, in the case where the discrimination rules are a first discrimination rule, a second discrimination rule and a third discrimination rule, the above step S204 may be further implemented as follows:
Inputting the original text, the first tuple information, the first discrimination rule, the second discrimination rule and the third discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the first discrimination rule to obtain the first score, the error tuple information in the first tuple information, and the correct tuple information corresponding to the error tuple information; the large model judges the first tuple information according to the original text and the second discrimination rule to obtain the second score; and the large model judges the first tuple information according to the original text and the third discrimination rule to obtain second tuple information. The first discrimination rule is used for instructing the large model to score the accuracy of the first tuple information, determine the error tuple information in the first tuple information, and determine the correct tuple information corresponding to the error tuple information; the second discrimination rule is used for instructing the large model to score the integrity of the first tuple information; the third discrimination rule is used for instructing the large model to extract, according to the prompt information, information in the original text other than the first tuple information again; and the second tuple information is the tuple information in the original text other than the first tuple information.
In the embodiment of the present application, three discrimination rules are input into the large model, so that the large model outputs the evaluation information of the first tuple information according to the three rules, where the evaluation information specifically includes: a first score for evaluating the accuracy of the first tuple information, a second score for evaluating the integrity of the first tuple information, the error tuple information in the first tuple information, the correct tuple information corresponding to the error tuple information, and the second tuple information in the original text other than the first tuple information. The adjustment mode of the first tuple information is then determined according to the evaluation information.
For example, the first discriminant rule may be as follows:
The large model is now a knowledge judgment expert, able to judge, based on the text and the given triples, whether the knowledge in the triples is derived from the original text. If the content of a triple does not match the original text, please give that triple together with its score and the modified content, for example ((refrigerator, color, blue): 0, (refrigerator, color, red)); if the triple knowledge is accurate as a whole, please give the score on a 100-point scale.
Thus, the returned content contains the following two items: 1. the error triples and the modified content; 2. the overall score.
Where a triplet refers to (entity, relationship, entity). Examples are as follows:
Example one:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, red), (refrigerator, model, s), (refrigerator, maintenance time, three years), (refrigerator, function, refrigeration function), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Returning: 1: empty; 2: 100 points.
Example two:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, blue), (refrigerator, model, s), (refrigerator, maintenance time, three years), (refrigerator, function, refrigeration function), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Returning: 1: ((refrigerator, color, blue): 0, (refrigerator, color, red)); 2: 80 points.
Example three:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [ (refrigerator, color, blue) ];
Returning: 1: ((refrigerator, color, blue): 0, (refrigerator, color, red)); 2: 20 points.
The following original text and triples exist:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, red), (refrigerator, model, s), (refrigerator, maintenance time, three years), (refrigerator, function, refrigeration function), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Please give the judgment result directly.
The second discriminant rule may be as follows:
the large model is now a knowledge judgment expert, able to judge, based on the text and the given triples, whether the content of the triples covers all the knowledge in the text, and to give a score on a 100-point scale.
Examples are as follows:
Example 1:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, red), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Returning: 30 points.
Please score the following original text and triples:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, red), (refrigerator, model, s), (refrigerator, maintenance time, three years), (refrigerator, function, refrigeration function), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Please return the score directly.
The third discriminant rule may be as follows:
The large model is now a knowledge regeneration expert: it analyzes and summarizes the existing triples according to the given original text, re-extracts the triples that were not extracted, and returns the newly extracted triples together.
Examples are as follows:
Example 1:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
Triplet: [(refrigerator, color, red), (refrigeration function, description, refrigeration can reach 30 hp, can cover an area of 30 square meters, can keep people in a healthy and pleasant mood)].
Returning: [(refrigerator, model, s), (refrigerator, maintenance time, three years), (refrigerator, function, refrigeration function)].
Please analyze the following text and triples and return the result directly:
Original text: the s-type refrigerator is red in color, weighs 100 kg, and has a three-year maintenance period; its refrigeration function can reach 30 hp, can cover an area of 30 square meters, and can keep people in a healthy and pleasant mood;
triplet: [ (refrigerator, function, refrigeration function) ].
It should be noted that the above-mentioned first discrimination rule, second discrimination rule and third discrimination rule are only examples.
The adjustment mode of the first tuple information is determined according to the evaluation information, and the following specific situations exist:
1) Determining whether the first score is within a first numerical range and/or the second score is within a second numerical range. In the case where the first score is in the first numerical range and/or the second score is in the second numerical range, the original text, the first tuple information and the prompt information are input into the large model again, so that the large model extracts information from the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information. The original text, the adjusted first tuple information and the discrimination rule are then input into the large model, so that the large model judges the adjusted first tuple information according to the original text and the discrimination rule to obtain second evaluation information, where the second evaluation information at least includes: a third score indicating the accuracy of the second tuple information and a fourth score indicating the integrity of the second tuple information. The third score and the fourth score are calculated based on a second formula to determine a target score of the second tuple information, where the second formula is: y2 = (ReLU((p3 − 10)/25) + ReLU((p4 − 10)/25)) / n, where y2 is the target score of the second tuple information, n = 2, p3 is a value determined according to the third score, and p4 is a value determined according to the fourth score. The adjusted first tuple information is adjusted again according to the target score of the second tuple information to obtain third tuple information, and a knowledge graph corresponding to the original text is constructed according to the third tuple information.
In the embodiment of the present application, three numerical ranges are divided in advance for the accuracy score, namely: a first numerical range, a third numerical range and a fifth numerical range, where the maximum value of the first numerical range is less than the minimum value of the third numerical range, and the maximum value of the third numerical range is less than the minimum value of the fifth numerical range. For example, the first numerical range is [0, 70), the third numerical range is [70, 90), and the fifth numerical range is [90, 100]. It should be noted that the above values are given only to aid understanding of the embodiments of the present application; the specific values of the first numerical range, the third numerical range and the fifth numerical range are not limited in the implementation of the present application.
In the embodiment of the present application, three numerical ranges are also divided in advance for the integrity score, namely: a second numerical range, a fourth numerical range and a sixth numerical range, where the maximum value of the second numerical range is less than the minimum value of the sixth numerical range, and the maximum value of the sixth numerical range is less than the minimum value of the fourth numerical range. For example, the second numerical range is [0, 70), the sixth numerical range is [70, 90), and the fourth numerical range is [90, 100]. It should be noted that the above values are given only to aid understanding of the embodiments of the present application; the specific values of the second numerical range, the fourth numerical range and the sixth numerical range are not limited in the implementation of the present application.
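The mapping from the two scores to the four adjustment cases described in this section can be sketched as follows, using the example cut-offs of 70 and 90 given above. The case labels and function names are illustrative, and the range boundaries are the example values, not fixed requirements.

```python
# Map the accuracy score (first score) to its example range.
def accuracy_range(score):
    if score < 70: return "first"    # [0, 70)
    if score < 90: return "third"    # [70, 90)
    return "fifth"                   # [90, 100]

# Map the integrity score (second score) to its example range.
def integrity_range(score):
    if score < 70: return "second"   # [0, 70)
    if score < 90: return "sixth"    # [70, 90)
    return "fourth"                  # [90, 100]

def choose_case(first_score, second_score):
    a, b = accuracy_range(first_score), integrity_range(second_score)
    if a == "first" or b == "second":
        return "re-extract"          # case 1: regenerate the tuple information
    if a == "third" and b == "fourth":
        return "replace-errors"      # case 2: swap error tuples for correct ones
    if a == "fifth" and b == "fourth":
        return "union"               # case 3: union of correct and second tuples
    return "score-and-decide"        # case 4: compute the target score first
```

For instance, an accuracy score of 50 forces re-extraction regardless of integrity, while accuracy 95 with integrity 95 allows the union to be taken directly.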
In the case where the first score is in the first numerical range and/or the second score is in the second numerical range, the accuracy and/or the integrity of the first tuple information output by the large model is low. In this case, the original text, the first tuple information and the prompt information are directly input into the large model again, so that the large model extracts the tuple information in the original text again to obtain the adjusted first tuple information.
It should be noted that after the large model evaluates the adjusted first tuple information to obtain the second evaluation information, and after it is determined that the target score of the second tuple information is smaller than a first preset threshold, the following steps are further performed:
determining the number of times information extraction has been performed on the original text; in the case where the number of times is greater than or equal to a preset number of times, prohibiting the adjusted first tuple information from being adjusted again according to the target score of the second tuple information; and in the case where the number of times is smaller than the preset number of times, adjusting the adjusted first tuple information again according to the target score of the second tuple information.
In the embodiment of the present application, a preset number of adjustments of the first tuple information is set in advance. If the score of the tuple information extracted by the large model is smaller than the first preset threshold multiple times, the large model is considered unable to extract accurate tuple information, and adjustment of the first tuple information is stopped. This avoids an infinite loop of the large model and thus avoids unnecessary waste of resources.
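The bounded re-extraction loop described above can be sketched as follows. The `extract` and `evaluate` callables stand in for the two large-model calls (extraction and judgment); the threshold, attempt limit, and toy stand-ins below are illustrative.

```python
# Re-extract until the evaluation score is usable or the preset number
# of extraction attempts is reached, so the loop can never run forever.
def refine_tuples(extract, evaluate, threshold=0.5, max_attempts=3):
    tuples = extract()
    for attempt in range(1, max_attempts):
        score = evaluate(tuples)
        if score >= threshold:
            break                      # result is usable, stop adjusting
        tuples = extract()             # re-extract from the original text
    return tuples

# Toy stand-ins: each extraction yields one more tuple, and the
# evaluation scores the result by its length.
attempts = []
def fake_extract():
    attempts.append(1)
    return [("refrigerator", "color", "red")] * len(attempts)

result = refine_tuples(fake_extract, lambda t: len(t) / 3, max_attempts=3)
```

With these stand-ins the first extraction scores 1/3, triggering one re-extraction, after which the score 2/3 clears the threshold and the loop stops after two calls.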
2) Determining whether the first score is in a third numerical range and the second score is in a fourth numerical range, wherein a minimum value of the third numerical range is greater than a maximum value of the first numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range; updating the error tuple information in the first tuple information to the correct tuple information to obtain fourth tuple information when the first score is in the third numerical range and the second score is in the fourth numerical range; and constructing a knowledge graph corresponding to the original text according to the fourth tuple information.
When the first score is in the third numerical range and the second score is in the fourth numerical range, the accuracy of the tuple information extracted by the large model is generally higher, and therefore, the error tuple information in the first tuple information is updated to the correct tuple information, and the fourth tuple information with higher integrity and accuracy can be obtained.
3) Determining whether the first score is in a fifth numerical range and the second score is in a fourth numerical range, wherein a minimum value of the fifth numerical range is greater than a maximum value of the third numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range; determining a union of the correct tuple information and the second tuple information if the first score is in the fifth range of values and the second score is in the fourth range of values; and constructing a knowledge graph corresponding to the original text according to the union.
In the case that the first score is in the fifth numerical range and the second score is in the fourth numerical range, the accuracy and the completeness of the tuple information extracted by the large model are higher, and therefore, the union of the correct tuple information and the second tuple information in the first tuple information is determined.
4) Determining whether the first score is outside the first numerical range and the second score is in a sixth numerical range, where the minimum value of the sixth numerical range is greater than the maximum value of the second numerical range and the maximum value of the sixth numerical range is less than the minimum value of the fourth numerical range. In the case where the first score is not in the first numerical range and the second score is in the sixth numerical range, a first difference between the first score and a first preset value and a second difference between the second score and a second preset value are determined, and the first difference and the second difference are substituted into the first formula to determine the target score of the first tuple information, where the first formula is: y1 = (ReLU((p1 − 10)/25) + ReLU((p2 − 10)/25)) / n, where y1 is the target score of the first tuple information, n = 2, p1 is the first difference, and p2 is the second difference.
Specifically, a first magnitude relation between the target score of the first tuple information and the first preset threshold is determined. In the case where the first magnitude relation indicates that the target score of the first tuple information is less than the first preset threshold, the original text, the first tuple information and the prompt information are input into the large model again, so that the large model extracts the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information; and in the case where the first magnitude relation indicates that the target score of the first tuple information is greater than or equal to the first preset threshold, a union of the correct tuple information in the first tuple information and the second tuple information is determined.
In the embodiment of the present application, in the case where the first score is not located in the first numerical range and the second score is located in the sixth numerical range, the target score of the first tuple information is determined based on the following formula: y = (1/n) · Σ_{i=1}^{n} ReLU((p_i − 10)/25). The ReLU function is the rectified linear unit function: ReLU(z) = z when the input z is greater than 0, and ReLU(z) = 0 when the input z is less than or equal to 0. Here p_i is the i-th rule score (corresponding to the first score and the second score) minus the lowest decision value x (corresponding to the first preset value and the second preset value); for example, the lowest decision value of the rule is x = 70.
If p1 is 20 and p2 is 10 and the first preset threshold is 0.5, then y is 0.2, which is less than the first preset threshold, so generation is performed again. If p1 is 25 and p2 is 20 and the first preset threshold is 0.5, then y is 0.5, which indicates that the result has a certain usability, and the union of the correct tuple information in the first tuple information and the second tuple information is determined.
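The target-score computation can be sketched as below, written so as to reproduce the two worked examples above (p1 = 20, p2 = 10 giving 0.2, and p1 = 25, p2 = 20 giving 0.5). Note that the shift of 10 and scale of 25 are constants reconstructed from those examples rather than stated explicitly in the surviving text, so treat them as an assumption.

```python
# Rectified linear unit: passes positive inputs through, clips the rest to 0.
def relu(z):
    return z if z > 0 else 0.0

def target_score(scores, preset_values, shift=10, scale=25):
    # p_i = i-th rule score minus its lowest decision value x.
    diffs = [s - x for s, x in zip(scores, preset_values)]
    # Average of ReLU((p_i - shift) / scale) over the n rules.
    return sum(relu((p - shift) / scale) for p in diffs) / len(diffs)

# First worked example: scores 90 and 80 against a lowest decision
# value of 70 give p1 = 20, p2 = 10.
y = target_score([90, 80], [70, 70])
```

The first call yields 0.2 (below the 0.5 threshold, so the tuples would be regenerated), while scores of 95 and 90 yield 0.5, matching the second example.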
Further, in order to obtain more accurate tuple information, before the original text and the prompt information are input into the large model, a fine-tuning operation is performed on the large model in the embodiment of the present application: determining the domain type corresponding to the original text, and determining training data corresponding to the domain type; determining a loss function of the large model, and training the large model according to the loss function and the training data to obtain a trained large model; and inputting the original text and the prompt information into the trained large model.
When fine-tuning the large model, operations such as random rotation, cropping, scaling and flipping may be performed on the training data to increase its diversity and improve the generalization ability of the model; the learning rate may also be adjusted dynamically according to the performance of the large model on a validation set, so as to speed up convergence or avoid overfitting.
In the embodiment of the application, the proper loss function is selected according to the characteristics of the task, or the weight of the loss function is adjusted so as to improve the performance of the model.
Specifically, training the large model according to the loss function and the training data includes: determining the data amount of the training data and determining a second magnitude relation between the data amount and a second preset threshold; in the case where the second magnitude relation indicates that the data amount is greater than or equal to the second preset threshold, training the parameters in every neural network layer of the large model according to the loss function and the training data; and in the case where the second magnitude relation indicates that the data amount is smaller than the second preset threshold, freezing a target neural network layer in the large model and training the parameters in the other neural network layers of the large model according to the loss function and the training data, where the other neural network layers are the neural network layers of the large model other than the target neural network layer.
The neural network layers of the large model are frozen during model training in order to keep the parameters of these layers unchanged, thereby preventing them from being updated during training. In general, freezing a large model means freezing all layers of the convolutional base, i.e., all pre-trained convolutional layers and part of the top fully connected layers, since the convolutional layers usually learn common features while the top fully connected layers learn task-specific features. Freezing the layers of a large model can be accomplished in several ways:
1. When creating the model, whether to freeze each layer can be decided by setting that layer's 'trainable' attribute; the 'trainable' attribute of a layer to be frozen is set to False.
2. During training, certain layers in the large model can be frozen using the 'tf.stop_gradient' function, which prevents gradients from being propagated to the parameters of the given layer.
Freezing portions of the neural network layer of the large model can help to speed training and better utilize the knowledge of the pre-trained model.
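The data-amount-based freezing decision from the training step above can be sketched as follows. A toy layer class with a Keras-style `trainable` attribute stands in for a real framework, so the sketch is self-contained; in TensorFlow/Keras the same flag would be set on the pre-trained layers themselves.

```python
# Toy stand-in for a framework layer with a 'trainable' attribute.
class Layer:
    def __init__(self, name, trainable=True):
        self.name = name
        self.trainable = trainable

def configure_training(layers, data_amount, threshold, frozen_names):
    if data_amount >= threshold:
        # Enough data: train the parameters in every layer.
        for layer in layers:
            layer.trainable = True
    else:
        # Small dataset: freeze the target layers, train only the rest.
        for layer in layers:
            layer.trainable = layer.name not in frozen_names

# Illustrative model: two pre-trained convolutional layers plus a
# task-specific fully connected layer; the threshold is made up.
model = [Layer("conv1"), Layer("conv2"), Layer("fc")]
configure_training(model, data_amount=1_000, threshold=100_000,
                   frozen_names={"conv1", "conv2"})
```

With only 1,000 samples the convolutional base is frozen and only the top layer trains, which mirrors the intent of keeping common pre-trained features intact when data is scarce.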
It should be noted that freezing part of the neural network layers of the large model may also be triggered as follows:
in the case where the available training time and computing resources are less than the preset training time and the preset computing resources, part of the layers can be selectively frozen, thereby reducing training time and resource consumption;
in the case where the performance of the large model meets the preset performance, all or part of the layers can be frozen, thereby avoiding overfitting and damage to the performance of the model;
according to experimental results, the layers to be frozen are selected so as to obtain the best performance and generalization ability.
The above step S208 may also be implemented by:
Constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information includes at least one of the following cases:
In the case where the adjusted first tuple information includes the first information, a first node and a second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and the edge feature between the first node and the second node is determined according to the relationship between the attribute information and the target object.
In the case where the adjusted first tuple information includes the second information, a third node and a fourth node in the knowledge graph are established according to the identification information of the target object and the usage instruction of the target object, and the edge feature between the third node and the fourth node is determined according to the relationship between the usage instruction and the target object.
In the case where the adjusted first tuple information includes the first information and the second information, the first node and the second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and the edge feature between the first node and the second node is determined according to the relationship between the attribute information and the target object; and the third node and the fourth node in the knowledge graph are established according to the identification information of the target object and the usage instruction of the target object, and the edge feature between the third node and the fourth node is determined according to the relationship between the usage instruction and the target object.
The first tuple information in the embodiment of the present application includes: 1) the attribute information and the relationship between the attribute information and the target object; 2) the usage instruction and the relationship between the usage instruction and the target object. Thus, the nodes in the knowledge graph, and the edge features of the edges between the nodes, can be established based on the first tuple information.
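The node-and-edge construction described above can be sketched as follows. A plain set of nodes and a list of labelled edges stand in for a real graph store; the triples are illustrative examples in the style of this section.

```python
# Build graph nodes and edges from the adjusted tuple information; each
# (head, relation, tail) triple becomes two nodes and one labelled edge.
def build_graph(triples):
    nodes, edges = set(), []
    for head, relation, tail in triples:
        nodes.add(head)          # e.g. the target object's identification
        nodes.add(tail)          # e.g. an attribute value or usage instruction
        edges.append((head, relation, tail))   # the relation is the edge feature
    return nodes, edges

nodes, edges = build_graph([
    ("refrigerator", "color", "red"),
    ("refrigerator", "model", "s"),
    ("refrigerator", "usage mode", "set the refrigerating chamber to 0-5 degrees"),
])
```

Because the target object appears as the head of every triple, the resulting graph is naturally star-shaped around the object node, with attribute values and usage instructions as leaf nodes.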
In order to better understand the large-model-based knowledge graph construction method, the implementation flow of knowledge graph construction based on a large model is described below in conjunction with an optional embodiment, which is not intended to limit the technical solution of the embodiment of the present application.
In this embodiment, a knowledge graph construction method based on a large model is provided, and fig. 3 is an overall architecture diagram of the knowledge graph construction method based on a large model according to an embodiment of the present application, as shown in fig. 3, specifically as follows:
A knowledge extraction service, for inputting the original text together with the extraction rules (corresponding to the prompt information in the above embodiment) into the large model service;
A large model service, which outputs knowledge in the required structure according to the extraction rules, the output knowledge then being subjected to judgment.
A knowledge supervision service, to which the output knowledge is sent. The knowledge supervision service sends the discrimination rules and the original text to the large model for analysis, so that the large model judges whether the discrimination rules are satisfied, for example: whether the extracted knowledge appears in the original material, whether it matches the original material, and whether it structurally conforms to the rules. Matching knowledge is sent to the map service; non-matching knowledge can be regenerated according to the judgment score (for example, a low score triggers complete regeneration, while a medium score triggers modification by the large model against the original material).
And the storage service is used for storing the output knowledge.
A map service, used for tasks such as graph generation and management, so that knowledge can be organized more reasonably into the required knowledge network.
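The service flow above can be wired into one loop, sketched roughly as follows. The function names, the stubbed model call, and the score thresholds (0.8 / 0.5) are illustrative assumptions and not part of this application; a real system would replace `call_large_model` with an actual LLM API call.

```python
def call_large_model(prompt: str) -> str:
    """Stand-in for the large model service (a real system calls an LLM here)."""
    return "[(refrigerator, color, red)]"

def extract_knowledge(original_text: str, extraction_rules: str) -> str:
    # Knowledge extraction service: combine the source text with the
    # extraction rules (the prompt information) and call the large model.
    return call_large_model(f"{extraction_rules}\n{original_text}")

def supervise(original_text: str, knowledge: str, discrimination_rules: str) -> float:
    # Knowledge supervision service: ask the model to judge the extracted
    # knowledge against the original material; the score is stubbed here.
    _ = call_large_model(f"{discrimination_rules}\n{original_text}\n{knowledge}")
    return 0.9

def pipeline(original_text, extraction_rules, discrimination_rules,
             accept=0.8, repair=0.5):
    knowledge = extract_knowledge(original_text, extraction_rules)
    score = supervise(original_text, knowledge, discrimination_rules)
    if score >= accept:
        return ("send_to_map_service", knowledge)  # matching knowledge
    if score >= repair:
        return ("repair_with_model", knowledge)    # medium score: modify
    return ("regenerate", None)                    # low score: regenerate fully
```

The loop then repeats for regenerated or repaired knowledge until it is accepted by the supervision service.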
Besides the general rule-based technology, the large model in the large model service can be strengthened for in-field knowledge extraction by fine-tuning it with knowledge in the field. Training data is prepared by putting all samples into a list and storing it in a json file. Each sample corresponds to a dictionary containing "id" and "conversations" keys, the latter being a list. An example is as follows:
[
    {
        "id": "0",
        "conversations": [
            {
                "from": "user",
                "value": "existing source text: the s-type refrigerator has a red color and a weight of 100kg, can achieve 30 pieces of refrigeration function in a three-year maintenance period, can cover 30 pieces of area, and can keep a healthy and pleasant mood;"
            },
            {
                "from": "assistant",
                "value": "the generated triples are: [(refrigerator, color, red), (refrigeration function, description, refrigeration can reach 30 pieces, can cover an area of 30 pieces, can keep a healthy and pleasant mood)]"
            }
        ]
    }
]
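The sample layout above can be loaded and checked with a few lines of standard-library code. The field names follow the example in the text; the validation rules themselves are minimal assumptions for illustration, not requirements stated by this application.

```python
import json

# A single training sample in the format described above: a JSON list of
# dicts, each with an "id" and a "conversations" list of {"from", "value"} turns.
sample_json = '''
[
  {
    "id": "0",
    "conversations": [
      {"from": "user", "value": "existing source text: the s-type refrigerator has a red color"},
      {"from": "assistant", "value": "the generated triples are: [(refrigerator, color, red)]"}
    ]
  }
]
'''

def load_samples(text: str):
    samples = json.loads(text)
    for sample in samples:
        # Each sample must carry an id and a list of conversation turns.
        assert "id" in sample and isinstance(sample["conversations"], list)
        for turn in sample["conversations"]:
            # Each turn has exactly the two keys shown in the example.
            assert set(turn) == {"from", "value"}
    return samples

samples = load_samples(sample_json)
```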
It should be noted that the manner of fine-tuning the large model may be selected from: full-parameter fine-tuning, LoRA, Q-LoRA, P-Tuning, and other approaches. Full-parameter fine-tuning can be selected when resources are sufficient; otherwise, one of the other approaches can be selected. A model fine-tuned on a large amount of data can produce triples from the source text alone; although the fine-tuning technique is employed, the rule techniques and the supervision service are still incorporated to prevent the generated triples from being too divergent or inaccurate.
In summary, large model knowledge generation is performed according to the original in-field text and the rules, the generated knowledge is judged, the available knowledge is formed into a graph by the map service, and a supervisory program is used for knowledge management. This forms a closed-loop framework for knowledge generation, management, and regeneration, which greatly improves the generation efficiency of the knowledge graph.
In this embodiment, a knowledge graph construction method based on a large model is also provided. Fig. 4 is an application schematic diagram of the knowledge graph construction method based on the large model according to an embodiment of the present application. As shown in fig. 4, a given in-field text is analyzed according to the rules and the fine-tuned large model, the knowledge structure required for generating a graph is produced, and the supervision service makes the large model service judge whether the generated knowledge accords with expectations according to the rules, scores the knowledge, and performs different actions according to the evaluation scores.
According to the embodiments of the present application, knowledge acquisition is more convenient and faster, and the efficiency of in-field knowledge processing is improved. Overall, the conversion process from complex knowledge to a structured form is realized, the transformation from text to knowledge is truly achieved, and the requirement of knowledge reuse is met.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or by means of hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the various embodiments of the present application.
FIG. 5 is a block diagram of a knowledge graph construction apparatus based on a large model, in accordance with an embodiment of the application; as shown in fig. 5, includes:
The first input module 52 is configured to input an original text and a prompt message into the large model, so that the large model performs information extraction on the original text according to the prompt message to obtain first tuple information, where the original text is at least used for describing attribute information and a usage instruction of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object, the second information including: the instructions, and the relationship of the instructions and the target object;
The second input module 54 is configured to input the original text, the first tuple information, and the discriminant rule into the large model, so that the large model performs judgment on the first tuple information according to the original text and the discriminant rule, to obtain first evaluation information, where the first evaluation information at least includes: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
A calculating module 56, configured to calculate the first score and the second score based on a first formula to determine a target score of the first tuple information, where the first formula is: y1 = (p1 + p2)/n, wherein y1 is the target score of the first tuple information, n=2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
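The first formula as reconstructed above is simply the mean of the two judgment scores over n = 2. Treating p1 and p2 as plain floats is an assumption for illustration; the text only says they are values determined from the accuracy score and the integrity score.

```python
def target_score(p1: float, p2: float, n: int = 2) -> float:
    """Target score of the tuple information: y = (p1 + p2) / n with n = 2."""
    return (p1 + p2) / n
```

The same formula is reused later (as the "second formula") with the values determined from the third and fourth scores.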
And the construction module 58 is configured to adjust the first tuple information according to the target score of the first tuple information, and construct a knowledge graph corresponding to the original text according to the adjusted first tuple information.
According to the above apparatus, information is extracted from an original text by a large model according to input prompt information to obtain tuple information; the original text and a discrimination rule are then input into the large model again so that the large model evaluates the generated tuple information, the tuple information is adjusted according to the evaluation information, and finally a knowledge graph is generated according to the adjusted tuple information. This technical scheme solves the problem of low knowledge graph generation efficiency caused by manually extracting knowledge from the original text and generating the knowledge graph from the extracted knowledge, improves the extraction rate of knowledge, and improves the efficiency of generating the knowledge graph.
In an exemplary embodiment, in the case that the discrimination rules are a first discrimination rule, a second discrimination rule, and a third discrimination rule, the second input module 54 is configured to input the original text, the first tuple information, the first discrimination rule, the second discrimination rule, and the third discrimination rule into the large model, so that the large model determines the first tuple information according to the original text and the first discrimination rule, and obtains the first score, the error tuple information in the first tuple information, and the correct tuple information corresponding to the error tuple information; the large model judges the first tuple information according to the original text and the second judging rule to obtain the second score, and judges the first tuple information according to the original text and the third judging rule to obtain the second tuple information, wherein the first judging rule is used for indicating the large model to score the accuracy of the first tuple information, determining the error tuple information in the first tuple information and determining the correct tuple information corresponding to the error tuple information; the second discriminant rule is used for indicating the large model to score the integrity of the first tuple information; the third discrimination rule is used for indicating the large model to extract information of the original text except the first tuple information according to the prompt information again; the second tuple information is tuple information in the original text other than the first tuple information.
In one exemplary embodiment, the constructing module 58 is configured to determine a first magnitude relation between the target score of the first tuple information and a first preset threshold value; inputting the original text, the first tuple information and the prompt information into the large model again under the condition that the first size relation indicates that the target score of the first tuple information is larger than or equal to the first preset threshold value, so that the large model extracts the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information; and determining a union of the correct tuple information and the second tuple information in the first tuple information if the first size relationship indicates that the target score of the first tuple information is less than the first preset threshold.
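The routing performed by the constructing module 58 can be sketched as follows. Note that, exactly as in the passage above, a target score at or above the first preset threshold triggers re-extraction by the large model, while a lower score falls back to the union of the correct tuple information and the second tuple information. The threshold value 0.6 and the tuple representation are illustrative assumptions.

```python
def adjust(target_score, correct_tuples, second_tuples, threshold=0.6):
    """Decide how to adjust the first tuple information from its target score."""
    if target_score >= threshold:
        # Re-input original text, first tuple information, and prompt
        # into the large model for another extraction pass.
        return "re_extract_with_large_model"
    # Otherwise: union of correct tuple information and second tuple
    # information (returned sorted so the result is deterministic).
    return sorted(set(correct_tuples) | set(second_tuples))
```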
In an exemplary embodiment, the above apparatus further includes:
A determining module, configured to determine whether the first score is located in a first numerical range, and/or whether the second score is located in a second numerical range;
The first input module is configured to input the original text, the first tuple information and the prompt information into the large model again when the first score is in the first numerical range and/or the second score is in the second numerical range, so that the large model extracts information from the original text again according to the original text, the first tuple information and the prompt information, and obtains the adjusted first tuple information;
The second input module is configured to input the original text, the adjusted first tuple information and a discrimination rule into the large model, so that the large model determines the adjusted first tuple information according to the original text and the discrimination rule to obtain second evaluation information, where the second evaluation information at least includes: a third score indicating the accuracy of the second tuple information and a fourth score indicating the integrity of the second tuple information;
a calculation module, configured to calculate the third score and the fourth score based on a second formula to determine a target score of the second tuple information, where the second formula is: y2 = (p3 + p4)/n, wherein y2 is the target score of the second tuple information, n=2, p3 is the value determined from the third score, and p4 is the value determined from the fourth score;
The construction module is used for adjusting the adjusted first tuple information again according to the target score of the second tuple information to obtain third tuple information; and constructing a knowledge graph corresponding to the original text according to the third tuple information.
In an exemplary embodiment, the building module is configured to determine a number of times of information extraction on the original text when the target score of the second tuple of information is less than a first preset threshold; when the number of times is greater than or equal to a preset number of times, prohibiting the second-tuple information from being adjusted again according to the target score of the second-tuple information; and under the condition that the times are smaller than the preset times, the adjusted first tuple information is adjusted again according to the target scores of the second tuple information.
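The retry cap described above can be sketched in a few lines: re-adjustment is only permitted while the extraction count stays below the preset number of times. The concrete values `threshold=0.6` and `max_attempts=3` are illustrative assumptions.

```python
def should_readjust(target_score, attempts, threshold=0.6, max_attempts=3):
    """Return True only if the score is below threshold AND retries remain."""
    if target_score >= threshold:
        return False            # score acceptable: no re-adjustment needed
    # Forbid further adjustment once the preset number of extractions is reached.
    return attempts < max_attempts
```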
In one exemplary embodiment, the determining module is configured to determine whether the first score is in a third range of values and the second score is in a fourth range of values, where a minimum value of the third range of values is greater than a maximum value of the first range of values and a minimum value of the fourth range of values is greater than a maximum value of the second range of values;
The construction module is used for updating the error tuple information in the first tuple information into the correct tuple information to obtain fourth tuple information under the condition that the first score is in the third numerical range and the second score is in the fourth numerical range; and constructing a knowledge graph corresponding to the original text according to the fourth tuple information.
In one exemplary embodiment, the determining module is configured to determine whether the first score is in a fifth range of values and the second score is in a fourth range of values, where a minimum value of the fifth range of values is greater than a maximum value of the third range of values and a minimum value of the fourth range of values is greater than a maximum value of the second range of values;
a building module configured to determine a union of the correct tuple information and the second tuple information if the first score is in the fifth range of values and the second score is in the fourth range of values; and constructing a knowledge graph corresponding to the original text according to the union.
In one exemplary embodiment, the determining module is configured to determine whether the first score is in a first numerical range and the second score is in a sixth numerical range, where a minimum value of the sixth numerical range is greater than a maximum value of the second numerical range and a maximum value of the sixth numerical range is less than a minimum value of the fourth numerical range;
A calculating module, configured to determine a first difference between the first score and a first preset value and a second difference between the second score and a second preset value when the first score is not in the first numerical range and the second score is in the sixth numerical range; and to input the first difference and the second difference into the first formula to determine the target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, wherein y1 is the target score of the first tuple information, n=2, p1 is the first difference, and p2 is the second difference.
In an exemplary embodiment, the above apparatus further includes:
the training module is used for determining the field type corresponding to the original text and determining training data corresponding to the field type; determining a loss function of the large model, and training the large model according to the loss function and the training data to obtain a trained large model;
And the first input module is used for inputting the original text and the prompt information into the trained large model.
In an exemplary embodiment, a training module is configured to determine a data amount of the training data, and determine a second magnitude relation of the data amount to a second preset threshold; training parameters in each neural network layer of the large model according to the loss function and the training data under the condition that the second size relation indicates that the data amount is greater than or equal to the second preset threshold value; and under the condition that the second size relation indicates that the data amount is smaller than the second preset threshold value, freezing a target neural network layer in the large model, and training parameters in other neural network layers of the large model according to the loss function and the training data, wherein the other neural network layers are the neural network layers other than the target neural network layer in the large model.
In an exemplary embodiment, a first input module is configured to input the original text information and the prompt information into the large model, so that the large model learns a data format of sample data, and extract information from the original text information according to the data format and an output instruction, so as to obtain the first tuple information, where the prompt information includes: the sample data and the output indication.
In one exemplary embodiment, the building block is configured to at least one of: the adjusted first tuple information includes: under the condition of the first information, a first node and a second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and edge characteristics between the first node and the second node are determined according to the relationship between the attribute information and the target object; the adjusted first tuple information includes: under the condition of the second information, establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object; the adjusted first tuple information includes: under the condition of the first information and the second information, a first node and a second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and edge characteristics between the first node and the second node are determined according to the relationship between the attribute information and the target object; and establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object.
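The graph-building cases above can be sketched as a single function: attribute tuples produce one pair of nodes joined by an edge, and usage-description tuples another. The `(value, relation)` pair layout and the node/edge representation are our own illustrative choices; the patent only fixes which pieces of tuple information become nodes and which become edge features.

```python
def build_graph(object_id, first_info=None, second_info=None):
    """Build nodes and labeled edges from adjusted first tuple information.

    first_info:  iterable of (attribute, relation) pairs  -> nodes 1 and 2
    second_info: iterable of (usage_description, relation) -> nodes 3 and 4
    """
    nodes, edges = set(), []
    for attribute, relation in (first_info or []):
        # First node: target object id; second node: attribute value.
        nodes.update({object_id, attribute})
        edges.append((object_id, relation, attribute))  # edge feature = relation
    for usage, relation in (second_info or []):
        # Third node: target object id; fourth node: usage description.
        nodes.update({object_id, usage})
        edges.append((object_id, relation, usage))
    return nodes, edges
```

When both kinds of information are present, both loops run, matching the third case in the text.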
The embodiment of the application also provides a medium, which comprises a stored program, wherein the program executes the method of any one of the above.
The medium is a storage medium.
Alternatively, in the present embodiment, the above-described medium may be configured to store program code for performing the steps of:
S1, inputting an original text and prompt information into the large model, so that the large model can extract information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a use instruction of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object, the second information including: the instructions, and the relationship of the instructions and the target object;
S2, inputting the original text, the first tuple information and the discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information at least comprises: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
S3, calculating the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, wherein y1 is the target score of the first tuple information, n=2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
And S4, adjusting the first tuple information according to the target score of the first tuple information, and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, inputting an original text and prompt information into the large model, so that the large model can extract information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a use instruction of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object, the second information including: the instructions, and the relationship of the instructions and the target object;
S2, inputting the original text, the first tuple information and the discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information at least comprises: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
S3, calculating the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, wherein y1 is the target score of the first tuple information, n=2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
And S4, adjusting the first tuple information according to the target score of the first tuple information, and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information.
Alternatively, in the present embodiment, the above medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium that can store program code.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented by a general purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices. They may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by the computing devices, and in some cases the steps shown or described may be performed in a different order than that shown or described. They may also be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (12)

1. The knowledge graph construction method based on the large model is characterized by comprising the following steps of:
Inputting an original text and prompt information into the large model, so that the large model can extract information from the original text according to the prompt information to obtain first tuple information, wherein the original text is used for describing attribute information and a use instruction of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object, the second information including: the instructions, and the relationship of the instructions and the target object;
Inputting the original text, the first tuple information and the discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rule to obtain first evaluation information, wherein the first evaluation information comprises: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
Calculating the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, wherein y1 is the target score of the first tuple information, n=2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
Adjusting the first tuple information according to the target score of the first tuple information, and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information;
When the discrimination rules are a first discrimination rule, a second discrimination rule and a third discrimination rule, the original text, the first tuple information and the discrimination rules are input into the large model, so that the large model judges the first tuple information according to the original text and the discrimination rules to obtain first evaluation information, and the method comprises the following steps:
inputting the original text, the first tuple information, the first discriminant rule, the second discriminant rule and the third discriminant rule into the large model, so that the large model judges the first tuple information according to the original text and the first discriminant rule to obtain the first score, the error tuple information in the first tuple information and the correct tuple information corresponding to the error tuple information; and
So that the large model judges the first tuple information according to the original text and the second judging rule to obtain the second score, and
So that the large model judges the first tuple information according to the original text and the third judging rule to obtain second tuple information,
The first discrimination rule is used for indicating the large model to score the accuracy of the first tuple information, determining the error tuple information in the first tuple information and determining the correct tuple information corresponding to the error tuple information; the second discriminant rule is used for indicating the large model to score the integrity of the first tuple information; the third discrimination rule is used for indicating the large model to extract information of the original text except the first tuple information according to the prompt information again; the second tuple information is tuple information except the first tuple information in the original text;
The method comprises the steps of constructing a knowledge graph corresponding to an original text according to adjusted first tuple information, wherein the knowledge graph comprises one of the following steps:
The adjusted first tuple information includes: under the condition of the first information, a first node and a second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and edge characteristics between the first node and the second node are determined according to the relationship between the attribute information and the target object;
the adjusted first tuple information includes: under the condition of the second information, establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object;
The adjusted first tuple information includes: under the condition of the first information and the second information, a first node and a second node in the knowledge graph are established according to the identification information of the target object and the attribute information of the target object, and edge characteristics between the first node and the second node are determined according to the relationship between the attribute information and the target object; and establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object.
2. The knowledge-graph construction method based on a large model according to claim 1, wherein adjusting the first tuple information according to the target score of the first tuple information comprises:
Determining a first size relation between the target score of the first tuple information and a first preset threshold value;
Inputting the original text, the first tuple information and the prompt information into the large model again under the condition that the first size relation indicates that the target score of the first tuple information is larger than or equal to the first preset threshold value, so that the large model extracts the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information;
and determining a union of the correct tuple information in the first tuple information and the second tuple information if the first size relationship indicates that the target score of the first tuple information is less than the first preset threshold.
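The branching of claim 2 can be sketched as follows (an illustrative, non-claim sketch; the callback names are hypothetical stand-ins for the re-extraction and union steps):

```python
def adjust_first_tuple(target_score, threshold, re_extract, take_union):
    """re_extract: re-run the large model on text + tuples + prompt;
    take_union: union of the corrected tuples with the second tuple information."""
    if target_score >= threshold:  # at or above the preset threshold: extract again
        return re_extract()
    return take_union()            # below the threshold: fall back to the union
```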
3. The knowledge-graph construction method based on a large model according to claim 1, wherein before calculating the first score and the second score based on a first formula, the method further comprises:
determining whether the first score is within a first range of values and/or the second score is within a second range of values;
Inputting the original text, the first tuple information and the prompt information into the large model again under the condition that the first score is in the first numerical range and/or the second score is in the second numerical range, so that the large model extracts information of the original text again according to the original text, the first tuple information and the prompt information to obtain the adjusted first tuple information;
Inputting the original text, the adjusted first tuple information and the discrimination rule into the large model, so that the large model judges the adjusted first tuple information according to the original text and the discrimination rule to obtain second evaluation information, wherein the second evaluation information at least comprises: a third score indicating the accuracy of the second tuple information and a fourth score indicating the integrity of the second tuple information;
calculating the third score and the fourth score based on a second formula to determine a target score of the second tuple information, wherein the second formula is: y2 = (p3 + p4)/n, where y2 is the target score of the second tuple information, n = 2, p3 is the value determined from the third score, and p4 is the value determined from the fourth score;
adjusting the adjusted first tuple information again according to the target score of the second tuple information to obtain third tuple information;
And constructing a knowledge graph corresponding to the original text according to the third tuple information.
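The second formula above is reconstructed from the surrounding definitions (the original formula image is missing); assuming, by analogy with the first formula, the mean of the two derived values with n = 2, it can be sketched as:

```python
def second_formula(p3, p4, n=2):
    # p3 derives from the third (accuracy) score, p4 from the fourth
    # (integrity) score; the target score is their mean.
    return (p3 + p4) / n
```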
4. The knowledge graph construction method based on the large model according to claim 3, wherein the adjusting the adjusted first tuple information again according to the target score of the second tuple information includes:
determining the number of times of information extraction on the original text under the condition that the target score of the second tuple information is smaller than a first preset threshold value;
when the number of times is greater than or equal to a preset number of times, prohibiting the adjusted first tuple information from being adjusted again according to the target score of the second tuple information;
and under the condition that the times are smaller than the preset times, the adjusted first tuple information is adjusted again according to the target scores of the second tuple information.
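The retry cap of claim 4 can be sketched as follows (illustrative only; `readjust` is a hypothetical callback that re-runs the adjustment step):

```python
def maybe_readjust(target_score, threshold, extraction_count, max_count, readjust, tuples):
    if target_score >= threshold:
        return tuples        # score acceptable: keep the current tuples
    if extraction_count >= max_count:
        return tuples        # cap reached: further adjustment is prohibited
    return readjust()        # below threshold and under the cap: adjust again
```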
5. The knowledge-graph construction method based on a large model according to claim 1, wherein before calculating the first score and the second score based on a first formula, the method further comprises:
determining whether the first score is in a third numerical range and the second score is in a fourth numerical range, wherein a minimum value of the third numerical range is greater than a maximum value of the first numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range;
Updating the error tuple information in the first tuple information to the correct tuple information to obtain fourth tuple information when the first score is in the third numerical range and the second score is in the fourth numerical range; and constructing a knowledge graph corresponding to the original text according to the fourth tuple information.
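The correction step of claim 5 — replacing each erroneous tuple with its correct counterpart to obtain the fourth tuple information — can be sketched as (names are illustrative, not from the patent):

```python
def apply_corrections(first_tuples, corrections):
    """corrections: mapping from an erroneous tuple to its correct counterpart."""
    return [corrections.get(t, t) for t in first_tuples]
```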
6. The knowledge-graph construction method based on a large model according to claim 1, wherein before calculating the first score and the second score based on a first formula, the method further comprises:
determining whether the first score is in a fifth numerical range and the second score is in a fourth numerical range, wherein a minimum value of the fifth numerical range is greater than a maximum value of the third numerical range and a minimum value of the fourth numerical range is greater than a maximum value of the second numerical range;
determining a union of the correct tuple information and the second tuple information if the first score is in the fifth range of values and the second score is in the fourth range of values; and constructing a knowledge graph corresponding to the original text according to the union.
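The union of claim 6 is a plain set union of the corrected tuples with the newly extracted second tuple information; a one-line sketch (illustrative naming):

```python
def merged_tuple_info(correct_tuples, second_tuples):
    # Union of the corrected tuples with the second tuple information.
    return set(correct_tuples) | set(second_tuples)
```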
7. The large model-based knowledge-graph construction method of claim 1, wherein calculating the first score and the second score based on a first formula to determine a target score of the first tuple information comprises:
Determining whether the first score is in a first numerical range and the second score is in a sixth numerical range, wherein a minimum value of the sixth numerical range is greater than a maximum value of the second numerical range and a maximum value of the sixth numerical range is less than a minimum value of the fourth numerical range;
determining a first difference between the first score and a first preset value and a second difference between the second score and a second preset value when the first score is not in the first numerical range and the second score is in the sixth numerical range;
inputting the first difference and the second difference into the first formula to determine the target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, where y1 is the target score of the first tuple information, n = 2, p1 is the first difference, and p2 is the second difference.
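The first formula here is reconstructed from the stated definitions (the original formula image is missing); assuming the mean over the n = 2 difference values, it can be sketched as:

```python
def first_formula(p1, p2, n=2):
    # Mean of the two inputs; with differences as inputs, this yields
    # the target score of the first tuple information.
    return (p1 + p2) / n

def target_score_from_differences(first_score, preset1, second_score, preset2):
    # p1, p2 are the differences of the scores from their preset values.
    return first_formula(first_score - preset1, second_score - preset2)
```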
8. The knowledge graph construction method based on the large model according to claim 1, wherein inputting the original text and the hint information into the large model comprises:
Determining a domain type corresponding to the original text, and determining training data corresponding to the domain type;
Determining a loss function of the large model, and training the large model according to the loss function and the training data to obtain a trained large model;
and inputting the original text and the prompt information into the trained large model.
9. The knowledge-graph construction method based on a large model according to claim 8, wherein training the large model according to a loss function and the training data comprises:
Determining the data amount of the training data and determining a second magnitude relation between the data amount and a second preset threshold;
Training parameters in each neural network layer of the large model according to the loss function and the training data under the condition that the second size relation indicates that the data amount is greater than or equal to the second preset threshold value;
and under the condition that the second size relation indicates that the data amount is smaller than the second preset threshold, freezing a target neural network layer in the large model, and training parameters in other neural network layers of the large model according to the loss function and the training data, wherein the other neural network layers are the neural network layers of the large model other than the target neural network layer.
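The data-amount-gated freezing strategy of claim 9 can be sketched in plain Python (illustrative only; in a real framework such as PyTorch this would toggle `requires_grad` on the frozen layer's parameters):

```python
def select_trainable_layers(layers, target_layers, data_amount, threshold):
    """layers: {name: layer}; target_layers: names frozen when data is scarce."""
    if data_amount >= threshold:
        return list(layers)  # enough data: train parameters in every layer
    # scarce data: freeze the target layers, train only the remaining ones
    return [name for name in layers if name not in target_layers]
```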
10. The knowledge graph construction method based on the large model according to claim 1, wherein inputting the original text and the prompt information into the large model so that the large model extracts information from the original text according to the prompt information to obtain the first tuple information comprises:
inputting the original text and the prompt information into the large model so that the large model learns a data format of sample data, and extracts information from the original text according to the data format and an output indication to obtain the first tuple information, wherein the prompt information comprises: the sample data and the output indication.
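A prompt of the kind described in claim 10 — few-shot sample data that teaches the output format, plus an explicit output indication — might be assembled as follows (a hypothetical sketch; the layout and example strings are not from the patent):

```python
def build_prompt(original_text, sample_data, output_indication):
    # Sample data teaches the tuple format; the output indication tells
    # the model what to emit; the original text is appended last.
    return (
        "Sample output format:\n" + sample_data + "\n\n"
        + output_indication + "\n\n"
        + "Text to extract from:\n" + original_text
    )
```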
11. A knowledge graph construction device based on a large model, characterized by comprising:
The first input module is used for inputting an original text and prompt information into the large model so that the large model extracts information from the original text according to the prompt information to obtain first tuple information, wherein the original text is at least used for describing attribute information and a use description of a target object; the first tuple information includes: first information and/or second information; the first information includes: the attribute information, and the relationship between the attribute information and the target object; the second information includes: the use description, and the relationship between the use description and the target object;
The second input module is configured to input the original text, the first tuple information and the discriminant rule into the large model, so that the large model judges the first tuple information according to the original text and the discriminant rule to obtain first evaluation information, where the first evaluation information at least includes: a first score indicating the accuracy of the first tuple information and a second score indicating the integrity of the first tuple information;
The calculation module is configured to calculate the first score and the second score based on a first formula to determine a target score of the first tuple information, wherein the first formula is: y1 = (p1 + p2)/n, where y1 is the target score of the first tuple information, n = 2, p1 is a value determined from the first score, and p2 is a value determined from the second score;
The construction module is used for adjusting the first tuple information according to the target score of the first tuple information and constructing a knowledge graph corresponding to the original text according to the adjusted first tuple information;
The second input module is further configured to input the original text, the first tuple information, the first discrimination rule, the second discrimination rule, and the third discrimination rule into the large model, so that the large model judges the first tuple information according to the original text and the first discrimination rule to obtain the first score, the error tuple information in the first tuple information, and the correct tuple information corresponding to the error tuple information; and
so that the large model judges the first tuple information according to the original text and the second discrimination rule to obtain the second score, and
so that the large model judges the first tuple information according to the original text and the third discrimination rule to obtain second tuple information,
The first discrimination rule is used for indicating the large model to score the accuracy of the first tuple information, determining the error tuple information in the first tuple information and determining the correct tuple information corresponding to the error tuple information; the second discriminant rule is used for indicating the large model to score the integrity of the first tuple information; the third discrimination rule is used for indicating the large model to extract information of the original text except the first tuple information according to the prompt information again; the second tuple information is tuple information except the first tuple information in the original text;
wherein the building block is further configured to perform one of:
in the case where the adjusted first tuple information includes the first information, establishing a first node and a second node in the knowledge graph according to the identification information of the target object and the attribute information of the target object, and determining edge characteristics between the first node and the second node according to the relationship between the attribute information and the target object;
in the case where the adjusted first tuple information includes the second information, establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object;
in the case where the adjusted first tuple information includes the first information and the second information, establishing a first node and a second node in the knowledge graph according to the identification information of the target object and the attribute information of the target object, and determining edge characteristics between the first node and the second node according to the relationship between the attribute information and the target object; and establishing a third node and a fourth node in the knowledge graph according to the identification information of the target object and the use description of the target object, and determining edge characteristics between the third node and the fourth node according to the relationship between the use description and the target object.
12. A computer readable medium, characterized in that the computer readable medium comprises a stored program, wherein the program, when run, performs the method of any one of claims 1 to 10.
CN202410179462.8A 2024-02-18 2024-02-18 Knowledge graph construction method, device and medium based on large model Active CN117725995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410179462.8A CN117725995B (en) 2024-02-18 2024-02-18 Knowledge graph construction method, device and medium based on large model


Publications (2)

Publication Number Publication Date
CN117725995A (en) 2024-03-19
CN117725995B (en) 2024-05-24

Family

ID=90207424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410179462.8A Active CN117725995B (en) 2024-02-18 2024-02-18 Knowledge graph construction method, device and medium based on large model

Country Status (1)

Country Link
CN (1) CN117725995B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11416754B1 (en) * 2021-10-20 2022-08-16 Mckinsey & Company, Inc. Automated cloud data and technology solution delivery using machine learning and artificial intelligence modeling
CN116110570A (en) * 2023-02-07 2023-05-12 深圳清华大学研究院 Diabetes auxiliary diagnosis system, text processing method and map construction method
CN116186258A (en) * 2022-12-31 2023-05-30 青岛海尔电冰箱有限公司 Text classification method, equipment and storage medium based on multi-mode knowledge graph
CN116680407A (en) * 2023-05-18 2023-09-01 南京理工大学 Knowledge graph construction method and device
CN117033667A (en) * 2023-10-07 2023-11-10 之江实验室 Knowledge graph construction method and device, storage medium and electronic equipment
CN117290489A (en) * 2023-11-24 2023-12-26 烟台云朵软件有限公司 Method and system for quickly constructing industry question-answer knowledge base

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037529B2 (en) * 2011-06-15 2015-05-19 Ceresis, Llc Method for generating visual mapping of knowledge information from parsing of text inputs for subjects and predicates
US10445376B2 (en) * 2015-09-11 2019-10-15 Microsoft Technology Licensing, Llc Rewriting keyword information using search engine results
US20230139720A1 (en) * 2021-10-28 2023-05-04 Rakuten Asia Pte. Ltd. Method and system for performing product matching on an e-commerce platform


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Automatic document processing system with learning capability; Xuhong Li; Proceedings of the SPIE - The International Society for Optical Engineering; 2000-04-30; Vol. 4050; full text *
Analysis of the application effect of surgical instrument atlases in a central sterile supply department; Yao Xiufen; World Latest Medicine Information; 2019-11-26 (No. 95); full text *
Research on entity alignment methods for multimodal knowledge graphs; Zhang Yiwei; Journal of Chinese Computer Systems; 2023-02-22; full text *

Also Published As

Publication number Publication date
CN117725995A (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN113485144B (en) Intelligent home control method and system based on Internet of things
CN115826428B (en) Control method and device of household equipment, storage medium and electronic device
US20220262372A1 (en) Method for Controlling Electrical Appliance, and Non-Transitory Computer-Readable Storage Medium
WO2023168838A1 (en) Sentence text recognition method and apparatus, and storage medium and electronic apparatus
CN113217340A (en) Air compressor control method, device, equipment and storage medium
Sarkar et al. Rich Dynamics of a Predator‐Prey System with Different Kinds of Functional Responses
CN111310905B (en) Neural network model training method and device and heating and ventilation system energy efficiency optimization method
CN110263373A (en) Strategy game and war game deduction system based on non-structural data knowledge and self-adaptive Bayesian network
CN116245019A (en) Load prediction method, system, device and storage medium based on Bagging sampling and improved random forest algorithm
CN117725995B (en) Knowledge graph construction method, device and medium based on large model
Chivilikhin et al. Learning finite-state machines with ant colony optimization
CN113827978A (en) Loss user prediction method and device and computer readable storage medium
CN114915514B (en) Method and device for processing intention, storage medium and electronic device
Xiaolong et al. A knowledge-based fruit fly optimization algorithm for multi-skill resource-constrained project scheduling problem
CN114925158A (en) Sentence text intention recognition method and device, storage medium and electronic device
CN114861678A (en) Method and apparatus for determining time information, storage medium, and electronic apparatus
CN113987261A (en) Video recommendation method and system based on dynamic trust perception
CN106844515B (en) Computer user behavior analysis method based on gene expression programming
CN116682422A (en) Method and device for determining semantic understanding template, storage medium and electronic device
Wang et al. Prediction model of glutamic acid production of data mining based on R language
CN117725423B (en) Method and device for generating feedback information based on large model
CN116910245A (en) Category determining method and device, storage medium and electronic device
CN117095677A (en) Semantic understanding template generation method and device, storage medium and electronic device
CN116451140A (en) Method and device for determining type information, storage medium and electronic device
CN116386597A (en) Dialect recognition model construction method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant