CN117952211A - Method, device and system for determining operation instruction by using large language model - Google Patents

Method, device and system for determining operation instruction by using large language model

Info

Publication number
CN117952211A
CN117952211A (application number CN202311862953.4A)
Authority
CN
China
Prior art keywords
language model
request information
large language
parameters
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311862953.4A
Other languages
Chinese (zh)
Inventor
张迪
李明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zetyun Tech Co ltd
Original Assignee
Beijing Zetyun Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zetyun Tech Co ltd
Priority to CN202311862953.4A
Publication of CN117952211A
Legal status: Pending

Links

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Stored Programmes (AREA)

Abstract

An embodiment of the invention provides a method, an apparatus, and a system for determining an operation instruction by using a large language model. The method includes: acquiring request information input by a user; sending an instruction for processing the request information to the large language model, and receiving an analysis result returned by the large language model; and determining, based on the analysis result, an operation instruction corresponding to the request information, wherein the operation instruction is used for instructing a target application to respond to the request information. In the embodiments of the invention, the large language model is used to determine the operation instruction, and the target application can automatically complete the task requested by the user, which improves the efficiency with which the user's tasks are completed.

Description

Method, device and system for determining operation instruction by using large language model
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, and a system for determining an operation instruction by using a large language model.
Background
With the rapid development of artificial intelligence and big data, training models on a machine learning platform and using the trained service models for intelligent processing of big-data services has gradually become common practice in the big-data industry. A machine learning platform is a collection of software tools and services that supports and facilitates machine learning workflows, providing data processing, model training, model deployment, and monitoring functions. Existing machine learning platforms usually support only manual operation for project management, dataset management, operator management, and the other functions available on the platform: for example, creating a project requires entering a project name and a project description, and creating a dataset requires uploading a file and entering a dataset name and a dataset description before the function can be used. Because all platform functions of a machine learning platform require manual operation by the user, the operation efficiency of the machine learning platform is low.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, and a system for determining an operation instruction by using a large language model, which solve the problem in the prior art that all platform functions of a machine learning platform must be operated manually by the user, resulting in low operation efficiency of the machine learning platform.
To solve the above technical problem, the present invention provides a method for determining an operation instruction by using a large language model, the method comprising:
Acquiring request information input by a user;
Sending an instruction for processing the request information to the large language model, and receiving an analysis result returned by the large language model;
and determining an operation instruction corresponding to the request information based on the analysis result, wherein the operation instruction is used for indicating a target application to respond to the request information.
Optionally, in the above method, the obtaining the request information input by the user includes:
Responding to natural language information input by a user, outputting dialogue response information until the dialogue is ended, and obtaining at least one round of dialogue information;
and assembling task parameters contained in the at least one round of dialogue information to generate the request information.
Optionally, in the above method, the task parameter includes an action parameter corresponding to a task, and the assembling the task parameter included in the at least one round of dialogue information includes:
matching task parameters in the at least one round of dialogue information with function parameters corresponding to the target application;
And when the matching of the task parameters and the function parameters corresponding to the target application fails, assembling the task parameters to generate request information.
Optionally, the foregoing method further includes:
and when the task parameters are successfully matched with the function parameters corresponding to the target application, transmitting the action parameters corresponding to the task to the target application for execution.
Optionally, in the foregoing method, the request information includes a task parameter, where the task parameter includes an object corresponding to dialogue information, content of dialogue information, and an action parameter corresponding to a task, and sending an instruction for processing the request information to the large language model, and receiving an analysis result returned by the large language model includes:
determining the large language model corresponding to the request information according to the request information;
sending an instruction for analyzing the task parameters to the large language model;
receiving an analysis result corresponding to the task parameters returned by the large language model, wherein the analysis result comprises operation parameters, and the operation parameters comprise: an operation action, a control instruction corresponding to the operation action, and a service class of the target application.
Optionally, in the above method, the determining, based on the analysis result, an operation instruction corresponding to the request information includes:
determining an application programming interface of the target application based on the operating parameter;
assembling the operation parameters according to the application programming interface to generate an operation instruction corresponding to the request information;
The method further comprises the steps of:
Sending the operation instruction to the application programming interface;
And receiving an execution result of the operation instruction by the application programming interface.
Optionally, in the above method, after the generating the operation instruction corresponding to the request information, the method further includes:
displaying target information of the operation instruction to be executed;
When receiving a modification instruction of a user on the target information, updating an operation instruction corresponding to the request information according to the modified operation instruction.
The embodiment of the invention also provides a device for determining the operation instruction by using the large language model, which comprises:
The acquisition module is used for acquiring request information input by a user;
The processing module is used for sending an instruction for processing the request information to the large language model and receiving an analysis result returned by the large language model;
And the determining module is used for determining an operation instruction corresponding to the request information based on the analysis result, wherein the operation instruction is used for indicating a target application to respond to the request information.
Optionally, in the above apparatus, the acquiring module includes:
The sub-module is used for responding to the natural language information input by the user, outputting dialogue response information until the dialogue is ended, and obtaining at least one round of dialogue information;
And the first generation sub-module is used for assembling task parameters contained in the at least one round of dialogue information to generate the request information.
Optionally, in the foregoing apparatus, the task parameter includes an action parameter corresponding to a task, and the generating submodule includes:
The matching unit is used for matching the task parameters in the at least one round of dialogue information with the function parameters corresponding to the target application;
and the generating unit is used for assembling the task parameters to generate request information when the matching of the task parameters and the function parameters corresponding to the target application fails.
Optionally, in the above apparatus, the generating sub-module further includes:
And the sending unit is used for transmitting the action parameters corresponding to the task to the target application for execution when the task parameters are successfully matched with the function parameters corresponding to the target application.
Optionally, in the above device, the request information includes a task parameter, where the task parameter includes an object corresponding to the dialogue information, a content of the dialogue information, and an action parameter corresponding to the task, and the processing module includes:
the first determining submodule is used for determining the large language model corresponding to the request information according to the request information;
the sending submodule is used for sending an instruction for analyzing the task parameters to the large language model;
the receiving sub-module is used for receiving an analysis result corresponding to the task parameters returned by the large language model, wherein the analysis result comprises operation parameters, and the operation parameters comprise: an operation action, a control instruction corresponding to the operation action, and a service class of the target application.
Optionally, in the above apparatus, the determining module includes:
a second determination sub-module for determining an application programming interface of the target application based on the operating parameters;
The second generation sub-module is used for assembling the operation parameters according to the application programming interface and generating an operation instruction corresponding to the request information;
The apparatus further comprises:
the sending module is used for sending the operation instruction to the application programming interface;
and the receiving module is used for receiving the execution result of the operation instruction by the application programming interface.
Optionally, in the foregoing apparatus, the determining module further includes:
the display sub-module is used for displaying target information of the operation instruction to be executed;
And the updating sub-module is used for updating the operation instruction corresponding to the request information according to the modified operation instruction when receiving the modification instruction of the target information from the user.
The embodiment of the invention also provides a data processing system, which comprises a processor, a memory, and a computer program stored on the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the steps of the above method for determining an operation instruction by using a large language model.
Embodiments of the present invention also provide a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of a method of determining operational instructions using a large language model as described above.
According to the embodiments of the invention, the large language model is used to determine the operation instruction, and the target application can automatically complete the task requested by the user, which improves task completion efficiency and lowers the barrier to using a target application such as a machine learning platform.
Drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for determining operational instructions using a large language model provided by an embodiment of the present invention;
FIG. 2 is a further flowchart of a method for determining an operation instruction using a large language model according to an embodiment of the present invention;
Fig. 3 is a block diagram of an apparatus for determining an operation instruction using a large language model according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
A large language model (LLM) in the embodiments of the present application, also called a large-scale language model, is a language model with a large number of parameters that aims to understand and generate human language. By training on large amounts of text data, it can perform a wide range of tasks, including text summarization, translation, sentiment analysis, and the like.
The intelligent assistant in the embodiments of the present application is an artificial intelligence assistant integrated into a machine learning platform. It can interact with the user in natural language, understand questions input by the user, answer the user's questions, summarize user information, adjust settings of the machine learning platform, edit information, provide suggestions, and so on. It can also send the interaction information exchanged with the user to the machine learning platform for processing. In some embodiments, the intelligent assistant may be Copilot (the AI assistant in Windows 11) or an independently developed AI assistant, which is not limited in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a method for determining an operation instruction using a large language model according to an embodiment of the present invention. As shown in fig. 1, the method for determining an operation instruction by using a large language model according to an embodiment of the present invention includes the following steps:
step 101, obtaining request information input by a user.
The request information includes the task or function the user wants to complete. The user can input the request information on a front-end page provided by the target application by voice, typed text, or another input mode. As an optional implementation, the step 101 of obtaining the request information input by the user includes:
Responding to natural language information input by a user, outputting dialogue response information until the dialogue is ended, and obtaining at least one round of dialogue information;
and assembling task parameters contained in the at least one round of dialogue information to generate the request information.
Specifically, the user converses with the intelligent assistant by inputting natural language information, and the intelligent assistant outputs dialogue response information in response. By the time the dialogue ends, the user has completed at least one round of dialogue with the intelligent assistant. The task parameters contained in the at least one round of dialogue information are then assembled to generate the request information.
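By way of a non-limiting illustration only, the following Python sketch shows one possible way the rounds of dialogue could be collected and their task parameters assembled into request information. The {"type", "content"} message fields mirror the format described in step 3 of the detailed flow below; the function names and the rule for picking the task text are assumptions, not part of the disclosure.

```python
# Minimal sketch (assumed, not from the disclosure) of assembling request
# information from multi-turn dialogue.  The {"type", "content"} message format
# mirrors step 3 of the detailed flow below.
from typing import Dict, List


def collect_dialogue(user_inputs: List[str], assistant_replies: List[str]) -> List[Dict[str, str]]:
    """Interleave user inputs and assistant responses into dialogue rounds."""
    dialogue: List[Dict[str, str]] = []
    for user_text, reply in zip(user_inputs, assistant_replies):
        dialogue.append({"type": "human", "content": user_text})
        dialogue.append({"type": "ai", "content": reply})
    return dialogue


def assemble_request(dialogue: List[Dict[str, str]]) -> Dict[str, object]:
    """Assemble the task parameters contained in the dialogue into request information."""
    human_turns = [m["content"] for m in dialogue if m["type"] == "human"]
    return {
        "dialogue": dialogue,          # object and content of every round
        "task_text": human_turns[0],   # e.g. "I want to create a project"
        "details": human_turns[1:],    # e.g. the project name supplied later
    }


if __name__ == "__main__":
    rounds = collect_dialogue(
        ["I want to create a project", "test"],
        ["What name is the project called?", "Creating the project ..."],
    )
    print(assemble_request(rounds))
```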
As an optional implementation manner, the task parameters include action parameters corresponding to a task, and the assembling task parameters included in the at least one round of dialogue information, and generating the request information includes:
matching task parameters in the at least one round of dialogue information with function parameters corresponding to the target application;
And when the matching of the task parameters and the function parameters corresponding to the target application fails, assembling the task parameters to generate request information.
In some embodiments, the target application may be, for example, a machine learning platform, a deep learning platform, a mobile phone app, or desktop application software, which is not limited in the embodiments of the present application. The at least one round of dialogue information includes task parameters. The target application provides a number of conventional basic functions; taking a machine learning platform as an example of the target application, these include creating a project, creating a dataset, and the like. The task parameters in the at least one round of dialogue information are matched against the conventional function parameters corresponding to the target application. If the matching fails, the task parameters in the at least one round of dialogue information do not correspond to an operation the target application can return directly, so the large language model is invoked for further recognition and processing.
As an alternative embodiment, the method further comprises:
and when the task parameters are successfully matched with the function parameters corresponding to the target application, transmitting the action parameters corresponding to the task to the target application for execution.
Specifically, if the task parameters are successfully matched with the conventional function parameters corresponding to the target application, the task parameters in the at least one round of dialogue information correspond to an operation the target application can return directly. The action parameters corresponding to the task are therefore passed directly to the target application, which executes the corresponding task in the user's request information according to those action parameters.
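As a sketch only (the function catalogue and the returned strings below are illustrative assumptions, not the platform's actual interface), the match-or-escalate branch described in this and the preceding paragraphs could be expressed as follows:

```python
# Sketch of matching task parameters against the target application's
# conventional functions; the catalogue and return strings are illustrative only.
from typing import Dict, Optional

# Conventional basic functions of the target application (machine learning platform example).
CONVENTIONAL_FUNCTIONS = {
    "create_project": ["project_name"],
    "create_dataset": ["dataset_name", "file"],
}


def match_function(task_params: Dict[str, str]) -> Optional[str]:
    """Return the conventional function whose required parameters are all present."""
    for name, required in CONVENTIONAL_FUNCTIONS.items():
        if task_params.get("action") == name and all(key in task_params for key in required):
            return name
    return None


def handle_task(task_params: Dict[str, str]) -> str:
    matched = match_function(task_params)
    if matched is not None:
        # Matching succeeded: pass the action parameters to the target application directly.
        return f"executed {matched} on the target application"
    # Matching failed: assemble the task parameters into request information
    # and invoke the large language model for further recognition.
    return "request information sent to the large language model"


if __name__ == "__main__":
    print(handle_task({"action": "create_project", "project_name": "test"}))
    print(handle_task({"action": "summarize last week's experiments"}))
```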
Step 102, sending an instruction for processing the request information to the large language model, and receiving an analysis result returned by the large language model.
The background service module of the target application sends an instruction for processing the request information to the large language model according to the request information input by the user. After the large language model receives the instruction to analyze the request information, it analyzes the request information and returns an analysis result, which the background service receives. In some embodiments, the large language model may be a model trained together with the target application, or an existing large language model such as ChatGPT.
Optionally, the request information includes task parameters, where the task parameters include an object corresponding to dialogue information, content of dialogue information, and an action parameter corresponding to a task, and the step 102 of sending an instruction for processing the request information to the large language model and receiving an analysis result returned by the large language model includes:
determining the large language model corresponding to the request information according to the request information;
sending an instruction for analyzing the task parameters to the large language model;
receiving an analysis result corresponding to the task parameters returned by the large language model, wherein the analysis result comprises operation parameters, and the operation parameters comprise: an operation action, a control instruction corresponding to the operation action, and a service class of the target application.
In some embodiments, the object corresponding to the dialogue information refers to the subject of the dialogue information, such as the intelligent assistant or the user; the content of the dialogue information is the text content entered by the intelligent assistant or the user; and the action parameters corresponding to the task refer to the function or operation of the task to be executed that is contained in the content of the dialogue information, such as creating a project or creating a dataset.
Specifically, there may be a plurality of large language models of different types or versions, and a suitable large language model is selected from them as the processing model according to the request information. An instruction to analyze the task parameters is then sent to the selected large language model, which analyzes the task parameters according to the instruction, obtains an analysis result, and returns it to the background service. The background service receives the analysis result corresponding to the task parameters returned by the large language model. The analysis result includes operation parameters, and the operation parameters include: an operation action, a control instruction corresponding to the operation action, and a service class of the target application. The operation action is the action to be executed, for example creating a project or creating a dataset; the control instruction corresponding to the operation action may be, for example, jumping to the project list; and the service class of the target application may be, for example, a project. The operation parameters may also include other supplementary information such as interface parameters.
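Only as an assumed illustration of step 102 (the endpoint URLs, prompt wording, and model-selection rule below are hypothetical; the "action", "name", "object", and "message" fields follow step 7 of the flow described later), the background service's call to the large language model might look like this:

```python
# Sketch of step 102: select a large language model, send it an instruction to
# analyse the task parameters, and read back the operation parameters.
# Endpoint URLs, prompt wording, and the selection rule are hypothetical.
import json

import requests

LLM_ENDPOINTS = {
    "general": "https://llm.example.internal/v1/chat",           # hypothetical
    "platform-tuned": "https://llm.example.internal/v1/ml-chat",  # hypothetical
}


def select_model(request_info: dict) -> str:
    """Pick a model type/version according to the request information (simplified rule)."""
    return "platform-tuned" if "dataset" in request_info.get("task_text", "") else "general"


def analyse_with_llm(request_info: dict) -> dict:
    """Send the analysis instruction and return the operation parameters."""
    prompt = (
        "Parse the following task parameters and reply with JSON containing "
        '"action", "name", "object" and "message": '
        + json.dumps(request_info, ensure_ascii=False)
    )
    response = requests.post(
        LLM_ENDPOINTS[select_model(request_info)], json={"prompt": prompt}, timeout=30
    )
    response.raise_for_status()
    # Expected shape, e.g. {"action": "create", "name": "jump to the project list",
    #                       "object": "project", "message": {"interface_params": {}}}
    return json.loads(response.json()["text"])
```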
Optionally, the step 103 of determining the operation instruction corresponding to the request information based on the analysis result includes:
determining an application programming interface of the target application based on the operating parameter;
assembling the operation parameters according to the application programming interface to generate an operation instruction corresponding to the request information;
The method further comprises the steps of:
Sending the operation instruction to the application programming interface;
And receiving an execution result of the operation instruction by the application programming interface.
Specifically, before the application programming interface of the target application is determined based on the operation parameters, the operation parameters may be converted into JSON-format information. The background service matches the JSON-format information against the functions and methods of the target application to determine the code implementation class corresponding to the request information, and then determines the corresponding application programming interface of the target application according to that code implementation class. The operation parameters are assembled according to the application programming interface to generate the operation instruction corresponding to the request information. In some embodiments, different application programming interfaces have different parameter formats or requirements, so the embodiments of the present application assemble the operation parameters based on the specific application programming interface to which they correspond.
The operation instruction is sent to the application programming interface, the programming interface of the target application processes the operation instruction and returns an execution result, and the background service receives the execution result from the application programming interface. At this point, the operation task the user wants to complete has been carried out.
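A minimal sketch of this step, under the assumption that the target application exposes REST-style openAPI endpoints (the route table, base URL, and payload layout are illustrative, not taken from the disclosure):

```python
# Sketch: map the operation parameters to an application programming interface,
# assemble the operation instruction, send it, and receive the execution result.
# Route table, base URL, and payload layout are assumptions for illustration.
import requests

BASE_URL = "https://ml-platform.example.internal"   # hypothetical openAPI base URL

# (action, object) from the analysis result -> implementation (HTTP method, path).
API_ROUTES = {
    ("create", "project"): ("POST", "/openapi/v1/projects"),
    ("create", "dataset"): ("POST", "/openapi/v1/datasets"),
}


def build_operation_instruction(operation_params: dict) -> dict:
    """Assemble the operation instruction according to the matched interface's requirements."""
    method, path = API_ROUTES[(operation_params["action"], operation_params["object"])]
    return {
        "method": method,
        "url": BASE_URL + path,
        "payload": operation_params.get("message", {}),   # supplementary interface parameters
    }


def execute_operation_instruction(instruction: dict) -> dict:
    """Send the operation instruction to the application programming interface."""
    response = requests.request(
        instruction["method"], instruction["url"], json=instruction["payload"], timeout=30
    )
    response.raise_for_status()
    return response.json()   # execution result returned by the interface
```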
As an optional implementation manner, after the generating the operation instruction corresponding to the request information, the method further includes:
displaying target information of the operation instruction to be executed;
When receiving a modification instruction of a user on the target information, updating an operation instruction corresponding to the request information according to the modified operation instruction.
Specifically, after the operation instruction corresponding to the request information is generated, the user may also modify the target information of the operation instruction, such as the action name, execution parameters, and other names. The target information of the operation instruction to be executed is displayed to the user, and when a modification instruction for the target information is received from the user, the operation instruction corresponding to the request information is updated accordingly, so that the final operation instruction is determined.
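To make the confirm-and-modify step concrete, here is a small illustrative sketch (the field names and the console interaction are assumptions) of displaying the target information of a pending operation instruction and applying the user's modification before execution:

```python
# Sketch of displaying a pending operation instruction and applying a user edit.
# Field names and the console interaction are illustrative assumptions.
import copy
import json


def review_instruction(instruction: dict) -> dict:
    """Show target information (action name, execution parameters, etc.) and let the user edit it."""
    print("Operation instruction to be executed:")
    print(json.dumps(instruction, indent=2, ensure_ascii=False))

    edited = copy.deepcopy(instruction)
    field = input("Field to modify (leave blank to accept): ").strip()
    if field:
        # Update the operation instruction corresponding to the request information.
        edited.setdefault("payload", {})[field] = input(f"New value for {field}: ").strip()
    return edited
```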
According to the embodiments of the invention, the large language model is used to determine the operation instruction, and the target application can automatically complete the task requested by the user, which improves task completion efficiency, raises the level of intelligence of the target application, and reduces labor cost.
Taking a machine learning platform as the target application as an example, FIG. 2 is a further flowchart of a method for determining an operation instruction by using a large language model according to an embodiment of the present invention. As shown in FIG. 2, the method for determining an operation instruction by using a large language model includes the following steps:
step 1: the user logs into the machine learning platform page.
Step 2: the user has a dialogue with an intelligent assistant used by the machine learning platform system. In this step, information of the operation function that the user wants to realize may be assembled through a plurality of rounds of dialogue.
Step 3: the user sends the wanted call, where the wanted call sent by the user is the request information. For example, "I want to create an item" or "I want to create a dataset" and so on. The request information includes task parameters such as a type (type: human, ai, action) and a dialog content (content). When type is human representing the main body of the dialogue information as a user, when type is ai representing the main body of the dialogue information as an intelligent assistant of a target application, and when type is action representing the function or operation action information of a task to be executed. Content is dialogue information Content is text Content entered by the intelligent assistant or the user.
Step 4: the front end (i.e., machine learning platform page) communicates the user's dialog content with the intelligent assistant to the background service.
Step 5: the background service determines whether the dialogue content is a directly returnable operation. If yes, returning to the step 2. If not, the process goes to step 6. The background service judges whether the operation included in the dialogue content is a normal operation or not, and if the action transferred by the dialogue content belongs to the normal operation, the background service directly returns an AI message to the user. If the actions of the dialog content delivery do not belong to normal operations, a large language model needs to be invoked.
Step 6: the background service calls the large language model for analysis. The background service determines a model version from a plurality of large language models according to the dialogue content, and analyzes the dialogue content by using the determined model version.
Step 7: the large language model returns corresponding information.
The information returned by the large language model includes the following parameters: "action", "name", "object", and "message". Here, action is the operation action to be executed; name is the description of the control instruction corresponding to the specific operation action, for example jumping to the project list; object is the corresponding business category of the machine learning platform, for example project; and message is other supplementary information such as interface parameters.
Step 8: and converting the corresponding information returned by the large language model into extracted information in the JSON format. The extracted information includes "action", "name", "subject" and "other information".
Step 9: the background service finds a specific code implementation class according to the extracted information in the JSON format, namely a specific openAPI interface of a machine learning platform to be called, and the background service matches the extracted information in the JSON format with a machine learning platform function to determine the code implementation class corresponding to the extracted information in the JSON format; and determining an application programming interface corresponding to the machine learning platform according to the code implementation class.
Step 10: and the background service assembles the parameters returned by the large language model according to the application programming interface, generates corresponding operation instructions, and sends the corresponding operation instructions to the application programming interface, namely a specific API of the machine learning platform, and the operation instructions are executed through the application programming interface so as to complete operation tasks contained in the dialogue content.
Example one: logging in through the machine learning platform entrance to find and converse with the assistant. For it: i want to create an item, the small assistant responds: what name the item is called? Continuing the dialogue: test, then, the project is created directly and entered into the project detail page.
Example two: logging in through the machine learning platform entrance to find and converse with the assistant. For it: i want to create a dataset, the small assistant responds: please upload the data set file, click the assistant to upload, the assistant will inquire after the uploading: what name is the dataset called? And (3) responding: the bank data can also directly create a data set and enter a page of the data set details to display the data under analysis and the data preview result.
According to the embodiments of the invention, the AI assistant calls the large model to automatically complete the task requested by the user, which improves the user's efficiency in operating the machine learning platform and lowers the barrier to using it.
Based on the method for determining the operation instruction by using the large language model provided by the embodiment, the embodiment of the invention also provides a device for determining the operation instruction by using the large language model, which is used for implementing the method. Referring to fig. 3, an apparatus 300 for determining an operation instruction by using a large language model according to an embodiment of the present invention is provided, where the apparatus 300 for determining an operation instruction by using a large language model includes:
An obtaining module 301, configured to obtain request information input by a user;
the processing module 302 is configured to send an instruction for processing the request information to the large language model, and receive an analysis result returned by the large language model;
and the determining module 303 is configured to determine an operation instruction corresponding to the request information based on the analysis result, where the operation instruction is used to instruct the target application to respond to the request information.
Optionally, in the above apparatus 300, the acquiring module 301 includes:
The sub-module is used for responding to the natural language information input by the user, outputting dialogue response information until the dialogue is ended, and obtaining at least one round of dialogue information;
And the first generation sub-module is used for assembling task parameters contained in the at least one round of dialogue information to generate the request information.
Optionally, in the foregoing apparatus 300, the task parameter includes an action parameter corresponding to a task, and the generating submodule includes:
The matching unit is used for matching the task parameters in the at least one round of dialogue information with the function parameters corresponding to the target application;
and the generating unit is used for assembling the task parameters to generate request information when the matching of the task parameters and the function parameters corresponding to the target application fails.
Optionally, in the foregoing apparatus 300, the generating sub-module further includes:
And the sending unit is used for transmitting the action parameters corresponding to the task to the target application for execution when the task parameters are successfully matched with the function parameters corresponding to the target application.
Optionally, in the foregoing apparatus 300, the request information includes task parameters, where the task parameters include an object corresponding to the session information, content of the session information, and an action parameter corresponding to the task, and the processing module 302 includes:
the first determining submodule is used for determining the large language model corresponding to the request information according to the request information;
the sending submodule is used for sending an instruction for analyzing the task parameters to the large language model;
the receiving sub-module is used for receiving an analysis result corresponding to the task parameters returned by the large language model, wherein the analysis result comprises operation parameters, and the operation parameters comprise: an operation action, a control instruction corresponding to the operation action, and a service class of the target application.
Optionally, in the foregoing apparatus 300, the determining module 303 includes:
a second determination sub-module for determining an application programming interface of the target application based on the operating parameters;
The second generation sub-module is used for assembling the operation parameters according to the application programming interface and generating an operation instruction corresponding to the request information;
The apparatus further comprises:
the sending module is used for sending the operation instruction to the application programming interface;
and the receiving module is used for receiving the execution result of the operation instruction by the application programming interface.
Optionally, in the foregoing apparatus 300, the determining module 303 further includes:
the display sub-module is used for displaying target information of the operation instruction to be executed;
And the updating sub-module is used for updating the operation instruction corresponding to the request information according to the modified operation instruction when receiving the modification instruction of the target information from the user.
An embodiment of the present invention provides a data processing system, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program when executed by the processor implements the steps of the method for determining operation instructions using a large language model as described in the above embodiment.
An embodiment of the present invention provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for determining operating instructions using a large language model as described in the above embodiment.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements each process of the above embodiments of the method for determining an operation instruction by using a large language model and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A method for determining operational instructions using a large language model, the method comprising:
Acquiring request information input by a user;
Sending an instruction for processing the request information to the large language model, and receiving an analysis result returned by the large language model;
and determining an operation instruction corresponding to the request information based on the analysis result, wherein the operation instruction is used for indicating a target application to respond to the request information.
2. The method for determining operation instructions using a large language model according to claim 1, wherein the acquiring the request information input by the user comprises:
Responding to natural language information input by a user, outputting dialogue response information until the dialogue is ended, and obtaining at least one round of dialogue information;
and assembling task parameters contained in the at least one round of dialogue information to generate the request information.
3. The method for determining operation instructions using a large language model according to claim 2, wherein the task parameters include action parameters corresponding to a task, and the assembling task parameters included in the at least one round of dialogue information, and generating the request information includes:
matching task parameters in the at least one round of dialogue information with function parameters corresponding to the target application;
And when the matching of the task parameters and the function parameters corresponding to the target application fails, assembling the task parameters to generate request information.
4. The method for determining operation instructions using a large language model according to claim 3, further comprising:
and when the task parameters are successfully matched with the function parameters corresponding to the target application, transmitting the action parameters corresponding to the task to the target application for execution.
5. The method according to any one of claims 2-4, wherein the request information includes task parameters, the task parameters include an object corresponding to dialogue information, content of dialogue information, and an action parameter corresponding to a task, the sending an instruction to the large language model to process the request information, and receiving a parsing result returned by the large language model includes:
determining the large language model corresponding to the request information according to the request information;
sending an instruction for analyzing the task parameters to the large language model;
receiving an analysis result corresponding to the task parameters returned by the large language model, wherein the analysis result comprises operation parameters, and the operation parameters comprise: an operation action, a control instruction corresponding to the operation action, and a service class of the target application.
6. The method for determining operation instructions using a large language model according to claim 5, wherein determining operation instructions corresponding to the request information based on the parsing result comprises:
determining an application programming interface of the target application based on the operating parameter;
assembling the operation parameters according to the application programming interface to generate an operation instruction corresponding to the request information;
The method further comprises the steps of:
Sending the operation instruction to the application programming interface;
And receiving an execution result of the operation instruction by the application programming interface.
7. The method for determining operation instructions using a large language model according to claim 6, wherein after generating the operation instruction corresponding to the request information, the method further comprises:
displaying target information of the operation instruction to be executed;
When receiving a modification instruction of a user on the target information, updating an operation instruction corresponding to the request information according to the modified operation instruction.
8. An apparatus for determining operational instructions using a large language model, the apparatus comprising:
The acquisition module is used for acquiring request information input by a user;
The processing module is used for sending an instruction for processing the request information to the large language model and receiving an analysis result returned by the large language model;
And the determining module is used for determining an operation instruction corresponding to the request information based on the analysis result, wherein the operation instruction is used for indicating a target application to respond to the request information.
9. A data processing system comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the method of determining operational instructions using a large language model as claimed in any one of claims 1 to 7.
10. A readable storage medium, wherein a computer program is stored on the readable storage medium, which when executed by a processor, implements the steps of the method of determining operational instructions using a large language model as claimed in any one of claims 1 to 7.
CN202311862953.4A 2023-12-29 2023-12-29 Method, device and system for determining operation instruction by using large language model Pending CN117952211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311862953.4A CN117952211A (en) 2023-12-29 2023-12-29 Method, device and system for determining operation instruction by using large language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311862953.4A CN117952211A (en) 2023-12-29 2023-12-29 Method, device and system for determining operation instruction by using large language model

Publications (1)

Publication Number Publication Date
CN117952211A true CN117952211A (en) 2024-04-30

Family

ID=90795381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311862953.4A Pending CN117952211A (en) 2023-12-29 2023-12-29 Method, device and system for determining operation instruction by using large language model

Country Status (1)

Country Link
CN (1) CN117952211A (en)

Similar Documents

Publication Publication Date Title
EP1277201B1 (en) Web-based speech recognition with scripting and semantic objects
US8024422B2 (en) Web-based speech recognition with scripting and semantic objects
EP3617896A1 (en) Method and apparatus for intelligent response
CN116644145B (en) Session data processing method, device, equipment and storage medium
CN116719911B (en) Automatic flow generation method, device, equipment and storage medium
CN112232198A (en) Table content extraction method, device, equipment and medium based on RPA and AI
CN112083926A (en) Web user interface generation method and device
CN114416049B (en) Configuration method and device of service interface combining RPA and AI
EP1382032B1 (en) Web-based speech recognition with scripting and semantic objects
CN111723559B (en) Real-time information extraction method and device
CN117112082A (en) Task execution method, device, system, equipment and storage medium
CN117952211A (en) Method, device and system for determining operation instruction by using large language model
CN110931010A (en) Voice control system
WO2021006168A1 (en) Webform input assistance system
CN114462376A (en) RPA and AI-based court trial record generation method, device, equipment and medium
CN112685031A (en) Analysis method and system for DBC file of fuel cell test system
WO2020150009A1 (en) Profile data store automation via bots
CN110796265B (en) Interactive operation method, device, terminal equipment and medium of decision tree model
CN115221305B (en) Robot cooperation system and control method
CN117874211B (en) Intelligent question-answering method, system, medium and electronic equipment based on SAAS software
CN118034670A (en) Software code generation method and device, electronic equipment and storage medium
CN118151905A (en) Method and device for generating tool use instance, storage medium and electronic equipment
CN117093691A (en) System help method, device, equipment and storage medium based on large language model
CN117634441A (en) Method, device and storage medium for generating low-code form
CN117891922A (en) Man-machine interaction method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination