CN117076607A - Method, device and query system for establishing logic expression by large language model - Google Patents
Method, device and query system for establishing logic expression by large language model
- Publication number
- CN117076607A (Application No. CN202311078223.5A)
- Authority
- CN
- China
- Prior art keywords
- model
- expression
- language model
- natural language
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/3344—Query execution using natural language analysis (information retrieval of unstructured textual data)
- G06F16/3322—Query formulation using system suggestions (information retrieval of unstructured textual data)
- G06F16/355—Class or cluster creation or modification (clustering; classification of unstructured textual data)
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting (pattern recognition)
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (pattern recognition)
Abstract
The present disclosure provides a method, an apparatus, and a query system for building a logic expression with a large language model, relating to the technical field of natural language processing. The method includes: detecting the training type of the large language model; and, if the training type indicates that continued training has been completed, inputting a natural language text describing a logic expression into the large language model and obtaining the logic expression corresponding to that text from the model's output. The continued training includes reinforcement learning over a fine-tuning model derived from the large language model, which generates prediction expressions, and a reward model, which scores those prediction expressions so that the parameters of the fine-tuning model can be updated. With this method, the logic expression corresponding to a natural language text is obtained automatically, which is efficient and convenient, greatly improves processing efficiency, removes the need for manual participation, and greatly reduces labor cost.
Description
Technical Field
The present disclosure relates generally to the field of natural language processing technologies, and in particular, to a method, an apparatus, and a query system for building a logic expression using a large language model.
Background
With the continuous development of electronic informatization, massive record data are generated in fields such as banking and insurance. These records contain structured information such as attributes, business dimensions, and data time, as well as unstructured information such as call recordings and speech-recognition transcripts.
Enterprises analyze this unstructured speech and text content in numerous business scenarios, including complaint cause analysis, contact analysis, incoming-call cause analysis, and agent violation analysis. In the related art, the logic expression for a semantic tag is built manually. For example, a greeting can be expressed by a logic expression such as "hello" OR "hi"; the expression is then converted into input for a full-text search engine, matching entries are found through the engine's index, and a structured semantic tag is output.
However, different logic expression systems have complex grammar and varied forms, which imposes a substantial learning cost on business personnel. Building logic expressions by hand also requires considerable manpower to read texts and summarize rules before an expression can be written, and in complex business scenarios in particular, manually built logic expressions struggle to cover every case, resulting in a low recall rate.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings of the related art, it is desirable to provide a method, apparatus, and query system for building a logic expression using a large language model, which can automatically obtain the logic expression, improve processing efficiency, and reduce labor costs.
In a first aspect, the present disclosure provides a method for building a logic expression with a large language model, the method comprising:
detecting the training type of the large language model; and
if the training type indicates that continued training has been completed, inputting a natural language text describing a logic expression into the large language model, and obtaining the logic expression corresponding to the natural language text from the output of the large language model; the continued training comprises performing reinforcement learning on a fine-tuning model obtained from the large language model and a reward model, wherein the fine-tuning model is used to generate prediction expressions and the reward model is used to score the prediction expressions so as to update parameters of the fine-tuning model.
Optionally, in some embodiments of the present disclosure, the fine-tuning model is obtained by performing fine-tuning training on the large language model according to manual labeling data, where the manual labeling data includes a mapping relationship between a natural language and a logical expression.
Optionally, in some embodiments of the disclosure, the reward model is obtained by training on manually scored ranking results of the prediction expressions.
Optionally, in some embodiments of the disclosure, the method further comprises:
and if the training type indicates that continued training has not been completed, providing the large language model with an example of converting natural language into a logic expression, and then inputting the natural language text to obtain the logic expression corresponding to the natural language text.
Optionally, in some embodiments of the present disclosure, the output format of the large language model includes a JSON format, an XML format, a YAML format, or field names.
In a second aspect, the present disclosure provides an apparatus for building a logical expression from a large language model, the apparatus comprising:
the detection module is used for detecting the training type of the large language model;
the building module is configured to, if the training type indicates that continued training has been completed, input a natural language text describing a logic expression into the large language model and obtain the logic expression corresponding to the natural language text from the output of the large language model; the continued training comprises performing reinforcement learning on a fine-tuning model obtained from the large language model and a reward model, wherein the fine-tuning model is used to generate prediction expressions and the reward model is used to score the prediction expressions so as to update parameters of the fine-tuning model.
Optionally, in some embodiments of the present disclosure, the fine-tuning model is obtained by performing fine-tuning training on the large language model according to manual labeling data, where the manual labeling data includes a mapping relationship between a natural language and a logical expression.
Optionally, in some embodiments of the disclosure, the reward model is obtained by training on manually scored ranking results of the prediction expressions.
Optionally, in some embodiments of the present disclosure, the building module is further configured to, if the training type indicates that continued training has not been completed, provide the large language model with an example of converting natural language into a logic expression, and then input the natural language text to obtain the logic expression corresponding to the natural language text.
In a third aspect, the present disclosure provides a query system whose logic expressions are obtained by the method for building a logic expression according to any implementation of the first aspect.
From the above technical solutions, the embodiments of the present disclosure have the following advantages:
the embodiments of the present disclosure provide a method, an apparatus, and a query system for building a logic expression with a large language model. By continuing to train the large language model, its prediction results become more accurate; a natural language text describing a logic expression can then be input into the large language model and the corresponding logic expression obtained automatically. This is efficient and convenient, greatly improves processing efficiency, removes the need for manual participation, and greatly reduces labor cost.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is a flow chart of a method for building a logical expression for a large language model provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a process for continuous training of a large language model according to an embodiment of the present disclosure;
fig. 3 is a block diagram of an apparatus for building a logic expression using a large language model according to an embodiment of the present disclosure.
Detailed Description
In order that those skilled in the art will better understand the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of this disclosure, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of protection of this disclosure.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the described embodiments of the disclosure may be capable of operation in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. To facilitate a better understanding of the embodiments of the present disclosure, a method, apparatus, and query system for building a logical expression for a large language model provided by the embodiments of the present disclosure are described in detail below with reference to fig. 1 through 3.
Please refer to fig. 1, which is a flowchart illustrating a method for creating a logic expression by using a large language model according to an embodiment of the present disclosure. The method specifically comprises the following steps:
s101, detecting the training type of the large language model.
It should be noted that, in the embodiments of the present disclosure, a large language model (Large Language Model, LLM) refers to a language model trained on a large-scale text corpus and containing billions (or more) of parameters; examples include, but are not limited to, GPT-3, PaLM, and LLaMA. Although existing large language models have strong text-generation capabilities in the general domain, their performance is still lacking for specific domains and specific tasks, especially when the general corpus contains little relevant data. Therefore, the embodiments of the present disclosure use collected domain data and manually annotated data to continue training the large language model, which strengthens its ability to convert natural language in the relevant business domain into a logic expression. Here, a logic expression is an expression with a fixed grammar, generally composed of keywords, reserved words, and logical relations (such as AND, OR, and NOT), that can describe the structured and unstructured characteristics of a document.
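The disclosure does not fix a concrete data structure for such expressions, so the following is only a minimal sketch under that assumption: a toy Python representation of a logic expression built from keywords and the AND/OR/NOT relations just described, with a simple evaluator over a transcript. All class and function names here are illustrative inventions, not the patent's own.

```python
from dataclasses import dataclass
from typing import List, Union

# A logic expression node: either a keyword to search for in the text,
# or a logical relation (AND / OR / NOT) over child expressions.
@dataclass
class Keyword:
    term: str

@dataclass
class Logic:
    op: str                      # "AND", "OR" or "NOT"
    children: List["Expr"]

Expr = Union[Keyword, Logic]

def evaluate(expr: Expr, text: str) -> bool:
    """Check whether a transcript satisfies the logic expression."""
    if isinstance(expr, Keyword):
        return expr.term in text
    if expr.op == "AND":
        return all(evaluate(c, text) for c in expr.children)
    if expr.op == "OR":
        return any(evaluate(c, text) for c in expr.children)
    if expr.op == "NOT":
        return not evaluate(expr.children[0], text)
    raise ValueError(f"unknown operator: {expr.op}")

# Example: the "greeting" tag from the background section, expressed as "hello" OR "hi".
greeting = Logic("OR", [Keyword("hello"), Keyword("hi")])
print(evaluate(greeting, "hello, how can I help you today?"))  # True
```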
S102, if the training type indicates that continued training has been completed, a natural language text describing a logic expression is input into the large language model, and the logic expression corresponding to the natural language text is obtained from the model's output.
It should be noted that continued training includes, but is not limited to, performing reinforcement learning (Reinforcement Learning, RL) on a fine-tuning model obtained from the large language model and a reward model. Reinforcement learning, also known as re-excitation learning or evaluative learning, is one of the paradigms and methodologies of machine learning; it describes and solves the problem of an agent learning a strategy to maximize return or achieve a specific objective while interacting with an environment. The fine-tuning model is used to generate prediction expressions, and the reward model is used to score the prediction expressions so as to update the parameters of the fine-tuning model.
Illustratively, as shown in fig. 2, the embodiment of the present disclosure uses manually annotated data to perform fine-tuning training on the large language model, obtaining a fine-tuning model. The manually annotated data may include mapping relationships between natural language and logic expressions, i.e., (natural language, logic expression) pairs, so that the fine-tuning model gains a preliminary capability of converting natural language into a logic expression.
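As a rough illustration only, supervised fine-tuning on such (natural language, logic expression) pairs could look like the sketch below. The patent does not name a training framework, so the Hugging Face-style API, the "gpt2" stand-in checkpoint, the example annotation, and all hyperparameters are assumptions.

```python
# Sketch of supervised fine-tuning on (natural language, logic expression) pairs.
# "gpt2" is only a small stand-in for the patent's large language model; the pair
# below is a hypothetical annotation, not taken from the disclosure.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import Dataset

pairs = [
    {"prompt": "Convert to a logic expression: query texts where agents did not greet the customer",
     "target": '{"keyword": "hello OR hi", "negate": true}'},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def to_features(example):
    text = example["prompt"] + "\n" + example["target"] + tokenizer.eos_token
    enc = tokenizer(text, truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].copy()        # standard causal-LM objective
    return enc

dataset = Dataset.from_list(pairs).map(to_features, remove_columns=["prompt", "target"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-model", num_train_epochs=3, per_device_train_batch_size=4),
    train_dataset=dataset,
)
trainer.train()   # the resulting checkpoint plays the role of the fine-tuning model
```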
For example, the natural language [the prompt corresponding to the prediction phase] might be: query the texts from March 1, 2022 to May 1, 2023 in which agents at the Wuhan site did not greet the customer.
The logic expression [the expected output corresponding to the prediction phase] would then be:
it is understood that the output format of the large language model is json format herein for example only. In fact, the output format of the large language model may also be xml format, yaml format, field name, etc. according to the requirements of different logic expression systems.
Further, the embodiments of the present disclosure may also collect a further batch of natural language texts and use the fine-tuning model to generate the corresponding prediction expressions. These prediction expressions are then scored manually, a higher score indicating a higher-quality prediction expression, and the score-ranking results are used to train a reward model (Reward Model). The reward model is a separate classification model, i.e., it judges which of two prediction expressions is better, and it can be built on the fine-tuning model or on an entirely new large language model.
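A common way to train a reward model from such manual score rankings is a pairwise comparison loss. The sketch below is a generic illustration of that idea rather than the patent's implementation; the toy encoder, the scoring head, and the loss form are all assumptions.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a tokenized prediction expression; the encoder here is a toy stand-in."""
    def __init__(self, vocab_size: int = 30000, hidden: int = 256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)   # stand-in for a fine-tuned LLM encoder
        self.score = nn.Linear(hidden, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.score(self.embed(token_ids)).squeeze(-1)

def pairwise_ranking_loss(score_better: torch.Tensor, score_worse: torch.Tensor) -> torch.Tensor:
    # The human-preferred (higher-ranked) expression should receive the higher score.
    return -torch.nn.functional.logsigmoid(score_better - score_worse).mean()

reward_model = RewardModel()
better = torch.randint(0, 30000, (4, 32))   # token ids of the higher-ranked expressions
worse = torch.randint(0, 30000, (4, 32))    # token ids of the lower-ranked expressions
loss = pairwise_ranking_loss(reward_model(better), reward_model(worse))
loss.backward()
```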
Further, the embodiments of the present disclosure strengthen the fine-tuning model through reinforcement learning. In this stage no manually labeled data are required; instead, the reward model learned in the previous stage is used, and the parameters of the fine-tuning model are updated by reinforcement learning. Finally, the scoring process and the reinforcement-learning process are repeated until the generation quality of the model reaches the expected level, yielding a large language model that has completed continued training, referred to as the Text2Query model.
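The disclosure does not name a specific reinforcement-learning algorithm, so the following toy loop only illustrates the general idea under a REINFORCE-style assumption: the policy (standing in for the fine-tuning model) samples a candidate expression, a reward function (standing in for the reward model) scores it, and the score weights the parameter update. Everything here is a didactic stand-in, not the patent's actual models.

```python
import torch
import torch.nn as nn

candidate_expressions = ['"hello" OR "hi"', '"hello" AND "bye"', '"refund" OR "complaint"']

policy_logits = nn.Parameter(torch.zeros(len(candidate_expressions)))   # toy "policy"
optimizer = torch.optim.Adam([policy_logits], lr=0.1)

def toy_reward(expression: str) -> float:
    # Stand-in for the reward model: prefer expressions that mention a greeting.
    return 1.0 if "hello" in expression else -1.0

for step in range(100):
    dist = torch.distributions.Categorical(logits=policy_logits)
    action = dist.sample()                                   # sample a candidate expression
    reward = toy_reward(candidate_expressions[action])
    loss = -reward * dist.log_prob(action)                   # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(candidate_expressions[policy_logits.argmax()])         # converges toward a greeting expression
```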
In actual use, if the large language model has completed continued training, a natural language text describing the logic expression is input directly into the large language model according to the business requirement. For example, the input might be: "You are a robot that converts natural language into logic expressions. Please help me convert the following text into a logic expression: query the texts from March 1, 2022 to May 1, 2023 in which the agents did not greet the customer", and the model outputs the corresponding logic expression.
S103, if the large language model has not completed continued training, that is, if the training type indicates that continued training is not completed, an example of converting natural language into a logic expression is provided to the large language model. The advantage of this arrangement is that, in scenarios where the accuracy requirement is not high and annotation data are hard to obtain, the in-context learning (In-Context Learning, ICL) capability of the large language model can be used for quick learning, saving computing power while meeting diverse usage requirements. An example might be: You are a robot that converts natural language into logic expressions, and you can convert a natural language text such as "query the texts from March 1, 2022 to May 1, 2023 in which agents at the Wuhan site did not greet the customer" into the following logic expression: "{
Then, the natural language text is input into the large language model to obtain the logic expression corresponding to that text. For example, the input natural language text might be: "query the texts from March 1, 2022 to May 1, 2023 in which agents at the Beijing site did not greet the customer".
At this time, the embodiment of the present disclosure expects the output of the large language model to be:
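Putting S101 through S103 together, a minimal sketch of how a caller might branch on the training type and assemble the prompt is given below; the generate callback, the in-context example, and the prompt wording are assumptions made for illustration and are not the patent's actual interface.

```python
# Minimal sketch of the S101-S103 flow: branch on the training type and build the prompt.
# `generate` stands in for whatever inference call the deployed large language model exposes.
from typing import Callable

ICL_EXAMPLE = (
    'You are a robot that converts natural language into logic expressions. '
    'For example, "query the texts in which agents did not greet the customer" '
    'becomes: {"expression": {"op": "NOT", "children": ["hello"]}}\n'   # hypothetical expression
)

def build_logic_expression(text: str, continued_training_done: bool,
                           generate: Callable[[str], str]) -> str:
    if continued_training_done:
        # S102: the continued-trained (Text2Query) model is prompted directly.
        prompt = f"Convert the following text into a logic expression: {text}"
    else:
        # S103: fall back to in-context learning by prepending a conversion example.
        prompt = ICL_EXAMPLE + f"Now convert the following text into a logic expression: {text}"
    return generate(prompt)

# Usage with a dummy generator standing in for the real model call:
print(build_logic_expression(
    "query the texts in which agents at the Beijing site did not greet the customer",
    continued_training_done=False,
    generate=lambda p: "<model output>"))
```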
according to the method for establishing the logic expression by using the large language model, the large language model is continuously trained, so that the prediction result is more accurate, the natural language text describing the logic expression is further input into the large language model, the logic expression corresponding to the natural language text can be automatically obtained, the method is efficient and convenient, the processing efficiency is greatly improved, meanwhile, manual participation is not needed, and the labor cost is greatly reduced.
Based on the foregoing embodiments, the disclosed embodiments provide an apparatus for building a logical expression with a large language model. The apparatus 100 for building a logical expression of a large language model may be applied to the method for building a logical expression of a large language model of the corresponding embodiment of fig. 1 to 2. Referring to fig. 3, the apparatus 100 for building a logic expression of the large language model includes:
a detection module 101, configured to detect a training type of the large language model;
the building module 102 is configured to, if the training type indicates that continued training has been completed, input a natural language text describing the logic expression into the large language model and obtain the logic expression corresponding to the natural language text from the output of the large language model; the continued training includes performing reinforcement learning on a fine-tuning model obtained from the large language model and a reward model, the fine-tuning model being used to generate prediction expressions and the reward model being used to score the prediction expressions so as to update parameters of the fine-tuning model.
Optionally, in some embodiments of the present disclosure, the fine-tuning model is obtained by performing fine-tuning training on the large language model according to manual labeling data, where the manual labeling data includes a mapping relationship between a natural language and a logical expression.
Optionally, in some embodiments of the present disclosure, the reward model is obtained by training on manually scored ranking results of the prediction expressions.
Optionally, in some embodiments of the present disclosure, the building module 102 is further configured to, if the training type indicates that continued training has not been completed, provide the large language model with an example of converting natural language into a logic expression, and then input the natural language text to obtain the logic expression corresponding to the natural language text.
Optionally, in some embodiments of the present disclosure, the output format of the large language model includes a JSON format, an XML format, a YAML format, or field names.
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
According to the apparatus for building a logic expression with a large language model provided by the embodiments of the present disclosure, continuing to train the large language model makes its prediction results more accurate; a natural language text describing a logic expression can then be input into the large language model and the corresponding logic expression obtained automatically. This is efficient and convenient, greatly improves processing efficiency, removes the need for manual participation, and greatly reduces labor cost.
Based on the foregoing embodiments, the embodiments of the present disclosure provide a query system whose logic expressions are obtained by the method for building a logic expression of the embodiments corresponding to figs. 1 to 2. For example, the query system may be used for full-text retrieval: a computer program scans every word in a document and builds an index recording the number and positions of the occurrences of each word. When a user queries for text containing certain words, the lookup is performed against the established index, much like looking up a word through the word list of a dictionary.
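As an illustration of the full-text retrieval process described above (the index layout and functions are generic assumptions, not the patent's implementation), a minimal inverted index can be built and queried as follows:

```python
from collections import defaultdict

# Minimal inverted index: word -> list of (document id, position) occurrences.
def build_index(documents: dict[int, str]) -> dict[str, list[tuple[int, int]]]:
    index = defaultdict(list)
    for doc_id, text in documents.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((doc_id, position))
    return index

def search(index: dict[str, list[tuple[int, int]]], word: str) -> list[tuple[int, int]]:
    # Look the word up in the index instead of scanning every document.
    return index.get(word.lower(), [])

docs = {1: "hello how can I help you", 2: "I want to file a complaint"}
index = build_index(docs)
print(search(index, "hello"))      # [(1, 0)] -> document 1, position 0
print(search(index, "complaint"))  # [(2, 5)]
```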
As another aspect, an embodiment of the present disclosure provides an electronic device including a processor and a memory. The memory stores at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement the steps of the method of creating a logical expression of the corresponding embodiment of fig. 1-2.
As yet another aspect, the disclosed embodiments provide a computer readable storage medium storing program code for executing any one of the foregoing methods of establishing a logical expression of the corresponding embodiments of fig. 1-2.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms. The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present disclosure may be integrated in one processing unit, or each module may exist alone physically, or two or more units may be integrated in one module. The integrated units may be implemented in hardware or in software functional units. And the integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium.
Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method for establishing a logic expression of the various embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that the above embodiments are merely for illustrating the technical solution of the disclosure, and are not limiting; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.
Claims (10)
1. A method for building a logical expression for a large language model, the method comprising:
detecting the training type of the large language model;
if the training type indicates that continued training has been completed, inputting a natural language text describing a logic expression into the large language model, and obtaining the logic expression corresponding to the natural language text from the output of the large language model; the continued training comprises performing reinforcement learning on a fine-tuning model obtained from the large language model and a reward model, wherein the fine-tuning model is used for generating a prediction expression, and the reward model is used for scoring the prediction expression so as to update parameters of the fine-tuning model.
2. The method of claim 1, wherein the fine-tuning model is obtained by fine-tuning the large language model based on artificial annotation data comprising a mapping of natural language to logical expressions.
3. The method of claim 1, wherein the reward model is obtained by training on manually scored ranking results of the prediction expression.
4. A method according to any one of claims 1 to 3, characterized in that the method further comprises:
and if the training type indicates that continued training has not been completed, providing the large language model with an example of converting natural language into a logic expression, and then inputting the natural language text to obtain the logic expression corresponding to the natural language text.
5. The method of claim 4, wherein the output format of the large language model comprises a JSON format, an XML format, a YAML format, or field names.
6. An apparatus for building a logical expression from a large language model, the apparatus comprising:
the detection module is used for detecting the training type of the large language model;
the building module is configured to, if the training type indicates that continued training has been completed, input a natural language text describing a logic expression into the large language model, and obtain the logic expression corresponding to the natural language text from the output of the large language model; the continued training comprises performing reinforcement learning on a fine-tuning model obtained from the large language model and a reward model, wherein the fine-tuning model is used for generating a prediction expression, and the reward model is used for scoring the prediction expression so as to update parameters of the fine-tuning model.
7. The apparatus of claim 6, wherein the fine-tuning model is obtained by fine-tuning the large language model based on artificial annotation data comprising a mapping of natural language to logical expressions.
8. The apparatus of claim 6, wherein the reward model is obtained by training on manually scored ranking results of the prediction expression.
9. The apparatus according to any one of claims 6 to 8, wherein the building module is further configured to, if the training type indicates that continued training has not been completed, provide the large language model with an example of converting natural language into a logic expression, and then input the natural language text to obtain the logic expression corresponding to the natural language text.
10. A query system, characterized in that a logical expression of the query system is obtained by the method for building a logical expression of a large language model according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311078223.5A CN117076607A (en) | 2023-08-25 | 2023-08-25 | Method, device and query system for establishing logic expression by large language model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311078223.5A CN117076607A (en) | 2023-08-25 | 2023-08-25 | Method, device and query system for establishing logic expression by large language model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117076607A true CN117076607A (en) | 2023-11-17 |
Family
ID=88703875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311078223.5A Pending CN117076607A (en) | 2023-08-25 | 2023-08-25 | Method, device and query system for establishing logic expression by large language model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117076607A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118331152A (en) * | 2024-05-22 | 2024-07-12 | 山东和信智能科技有限公司 | Industrial control system logic optimization method and system based on natural language big model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |