CN117472552A - Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium

Info

Publication number
CN117472552A
Authority
CN
China
Prior art keywords
service
target
scheme
generating
execution result
Legal status
Pending
Application number
CN202311827229.8A
Other languages
Chinese (zh)
Inventor
陆志鹏
韩光
李嘉宁
郑曦
郭祎萍
国丽
刘彬彬
马博原
连森
顾杰
Current Assignee
Zhongdian Data Industry Co ltd
Original Assignee
Zhongdian Data Industry Co ltd
Application filed by Zhongdian Data Industry Co ltd
Priority to CN202311827229.8A
Publication of CN117472552A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/26 Government or public services
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium. In response to receiving service description content of a target user, at least one target scheme is generated or determined according to the service description content; target code is then generated and executed based on the at least one target scheme to obtain an execution result, which is fed back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly. This improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.

Description

Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium
Technical Field
The invention relates to the technical field of government affair management, in particular to a service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium.
Background
With the development of artificial intelligence (AI) technology, large language base models can provide powerful dialogue, context learning, and code generation capabilities on open-domain tasks, and can also generate high-level solution outlines for domain-specific tasks. However, for complex task solving, because of differing implementation mechanisms, existing large-model application techniques are insufficient to support task planning and scheduling in a government system, and a large model can hardly complete the multi-step tasks of an actual business system. As a result, the intelligence level of the government processing system is low, and the efficiency of business process arrangement suffers.
Therefore, there is a need for a solution that improves the efficiency of business process orchestration by increasing the level of intelligence of the business process orchestration.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium, aiming at improving the service flow arrangement efficiency by improving the intelligent level of service flow arrangement.
In order to achieve the above object, the present invention provides a service scene intelligent arrangement and dynamic scheduling method, which includes:
generating or determining at least one target scheme according to service description contents of a target user in response to receiving the service description contents;
and generating and executing target codes based on the at least one target scheme to obtain an execution result, and feeding back the execution result to the target user.
Optionally, the step of generating or determining at least one target scheme according to the service description content includes:
generating an embedded vector according to the service description content;
performing similarity search on the embedded vectors based on a preset vector database, and determining at least one associated vector and a corresponding task number;
and determining the at least one target scheme according to the task number in a preset relational database.
Optionally, the step of generating and executing target code based on the at least one target scheme to obtain an execution result includes:
inputting the at least one target scheme into a preset large model, and setting the format of the preset large model output scheme;
obtaining scheme contents output by the preset large model according to the at least one target scheme;
and generating an execution code according to the scheme content, and executing the execution code to obtain the execution result.
Optionally, the step of generating an execution code according to the scheme content and executing the execution code to obtain the execution result includes:
selecting at least one API from a service resource pool according to the scheme content through an API selector;
forming an API queue according to the at least one API, and generating the execution code according to the API queue;
collecting API parameters corresponding to the API queue, and filling the execution codes according to the API parameters;
and executing the filled execution code to obtain the execution result.
Optionally, the step of generating an execution code according to the scheme content and executing the execution code to obtain the execution result further includes:
and storing the execution result to be used for strengthening learning of the preset large model.
Optionally, the step of obtaining the scheme content output by the preset large model according to the at least one target scheme further includes:
converting the scheme content into structured data;
and storing the structured data into the relational database for calling in the next business process arrangement process.
Optionally, before the step of inputting the at least one target scheme into a preset large model and setting the format of the preset large model output scheme, the method further includes:
acquiring historical question-answering text data;
formulating a corresponding task flow according to the historical question-answering text data to form a government affair service data set;
and training based on the government service data set to obtain the preset large model.
In addition, in order to achieve the above purpose, the present invention also provides a service scene intelligent arrangement and dynamic scheduling device, the service scene intelligent arrangement and dynamic scheduling device includes:
the response module is used for responding to the received service description content of the target user and generating or determining at least one target scheme according to the service description content;
and the execution module is used for generating and executing the target code based on the at least one target scheme, obtaining an execution result and feeding back the execution result to the target user.
In addition, in order to achieve the above object, the present invention also provides a terminal device, which includes a memory, a processor, and a service scene intelligent arrangement and dynamic scheduling program stored in the memory and capable of running on the processor, wherein the service scene intelligent arrangement and dynamic scheduling program implements the steps of the service scene intelligent arrangement and dynamic scheduling method as described above when executed by the processor.
In addition, in order to achieve the above object, the present invention also provides a computer readable storage medium, on which a service scene intelligent arrangement and dynamic scheduling program is stored, which implements the steps of the service scene intelligent arrangement and dynamic scheduling method described above when being executed by a processor.
The embodiment of the invention provides a service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium. In response to receiving service description content of a target user, at least one target scheme is generated or determined according to the service description content; target code is then generated and executed based on the at least one target scheme to obtain an execution result, which is fed back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly. This improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.
Drawings
FIG. 1 is a schematic diagram of functional modules of terminal equipment to which a service scenario intelligent arrangement and dynamic scheduling device of the present invention belongs;
FIG. 2 is a flow chart of an exemplary embodiment of a service scenario intelligent orchestration and dynamic scheduling method according to the present invention;
FIG. 3 is a schematic flowchart of step S10 in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of a functional architecture according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a specific flow of step S20 in the embodiment of FIG. 2;
fig. 6 is an overall flow chart of an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solutions of the embodiments of the present invention are: in response to receiving service description content of a target user, generating or determining at least one target scheme according to the service description content; generating and executing target code based on the at least one target scheme to obtain an execution result, and feeding the execution result back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly, which improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.
Technical terms related to the embodiment of the invention:
API (Application Programming Interface): a set of rules and tools for communication between different software applications. The API defines the methods and data formats an application can use to request and exchange information. APIs are often used to integrate different software systems, enabling them to work together and share data.
With the development of artificial intelligence technology, a large language base model can provide powerful dialogue, context learning and code generation capabilities on open-domain tasks, and can also generate high-level solution outlines for domain-specific tasks. However, for complex task solving, because of differing implementation mechanisms, a large model cannot on its own cooperate with other systems or models to accomplish multi-step tasks in the real world, so an approach is needed to link the large language base model with models or systems of specific functions to accomplish tasks in a real business system.
In the operation and use of government systems, the flow of a single task may need to be completed by calling a large number of interfaces, which means that considerable manpower must be spent on coding work to plan the flow scenario for each different flow, and this is very difficult for non-professional staff. The task-planning reasoning capability exhibited by large language models can decompose user requirements into steps and output a natural-language description of each step. However, large language models still face difficulties with the correctness and specificity of the step descriptions. Because of the deviation between training data and the application scenario, an AI model may hallucinate, producing answers that do not conform to the objective situation, so that the task steps cannot be carried out smoothly. Furthermore, how to execute tasks step by step from natural-language descriptions is also a problem to be solved: the large language model needs to be associated with a series of tool APIs and complete the task through their invocation.
Existing large-model application techniques are inadequate to support task planning and scheduling in government systems. The problems include irregular text formats, a high error rate for knowledge in the government field, poor logic in scheme steps, serious hallucination, inaccurate generated code, and the like. At present, the systems at all levels in the government field have data barriers, service resources are mutually independent, cross-department collaboration is poor, the intelligence level of the systems is low, and overall office efficiency needs to be improved.
In particular, the prior art faces many challenges, such as:
1. because the number of tokens a large model can accept is limited, a large number of APIs cannot be associated;
2. model performance is limited, and accuracy and effectiveness are insufficient to handle a large number of API scheduling functions;
3. a large amount of prompt engineering must be constructed to distinguish and call the APIs, which consumes considerable manpower and expertise;
4. the large model has poor content controllability and may generate content that does not accord with objective facts;
5. in terms of code generation, large models have difficulty grasping core programming logic.
The invention provides a method for flexibly configuring task flows of a government system based on an AI large model, which decouples the task processing flow from core logic of the system, allows business personnel to flexibly configure and modify the task flows under the condition of not having programming capability, ensures that the system is more intelligent, adapts to different business scenes, provides personalized demand support, and improves operation efficiency.
The method aims to solve these difficulties and optimize the application performance of the large model in a government affair system. First, accurate retrieval of prompt texts is realized by mounting a professional knowledge base; the token-number limitation is mitigated by reducing the number of input characters, so that the large model can be associated with and retrieve a large number of APIs. The ChatGLM large model is then fine-tuned with a government affair data set to enhance the model's text generation performance, and an API service pool is constructed to supply the text and knowledge required by prompt engineering. Prompt engineering is built with the LangChain framework to keep the content generated by the model controllable to the maximum extent and to meet the format requirements. For code generation, a data set is constructed based on the API resource library and the model's code generation logic is optimized by fine-tuning.
Specifically, referring to fig. 1, fig. 1 is a schematic diagram of functional modules of a terminal device to which the service scenario intelligent arrangement and dynamic scheduling apparatus of the present invention belongs. The service scene intelligent arrangement and dynamic scheduling device can be a device which is independent of the terminal equipment and can carry out business process arrangement, and the device can be carried on the terminal equipment in a form of hardware or software. The terminal equipment can be an intelligent mobile terminal with a data processing function such as a mobile phone and a tablet personal computer, and can also be a fixed terminal equipment or a server with a data processing function.
In this embodiment, the terminal device to which the service scene intelligent arrangement and dynamic scheduling apparatus belongs at least includes an output module 110, a processor 120, a memory 130 and a communication module 140.
The memory 130 stores an operating system and a service scenario intelligent arrangement and dynamic scheduling program, and the service scenario intelligent arrangement and dynamic scheduling device can store information such as service description content of a target user, at least one target scheme generated or determined according to the service description content, target codes generated based on the at least one target scheme, execution results obtained by executing the target codes, and the like in the memory 130; the output module 110 may be a display screen or the like. The communication module 140 may include a WIFI module, a mobile communication module, a bluetooth module, and the like, and communicates with an external device or a server through the communication module 140.
Wherein, the intelligent arrangement of the service scene and the dynamic scheduling program in the memory 130 realize the following steps when being executed by the processor:
generating or determining at least one target scheme according to service description contents of a target user in response to receiving the service description contents;
and generating and executing target codes based on the at least one target scheme to obtain an execution result, and feeding back the execution result to the target user.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
generating an embedded vector according to the service description content;
performing similarity search on the embedded vectors based on a preset vector database, and determining at least one associated vector and a corresponding task number;
and determining the at least one target scheme according to the task number in a preset relational database.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
inputting the at least one target scheme into a preset large model, and setting the format of the preset large model output scheme;
obtaining scheme contents output by the preset large model according to the at least one target scheme;
and generating an execution code according to the scheme content, and executing the execution code to obtain the execution result.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
selecting at least one API from a service resource pool according to the scheme content through an API selector;
forming an API queue according to the at least one API, and generating the execution code according to the API queue;
collecting API parameters corresponding to the API queue, and filling the execution codes according to the API parameters;
and executing the filled execution code to obtain the execution result.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
and storing the execution result to be used for strengthening learning of the preset large model.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
converting the scheme content into structured data;
and storing the structured data into the relational database for calling in the next business process arrangement process.
Further, the service scenario intelligent orchestration and dynamic scheduling program in the memory 130, when executed by the processor, further implements the following steps:
acquiring historical question-answering text data;
formulating a corresponding task flow according to the historical question-answering text data to form a government affair service data set;
and training based on the government service data set to obtain the preset large model.
According to the above solution, specifically, in response to receiving service description content of a target user, at least one target scheme is generated or determined according to the service description content; target code is then generated and executed based on the at least one target scheme to obtain an execution result, which is fed back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly, which improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.
The method embodiment of the invention is proposed based on the above-mentioned terminal equipment architecture but not limited to the above-mentioned architecture.
The execution subject of the method in this embodiment may be a service scene intelligent arrangement and dynamic scheduling device or a terminal device; this embodiment takes the service scene intelligent arrangement and dynamic scheduling device as an example.
Referring to fig. 2, fig. 2 is a flowchart illustrating an exemplary embodiment of a service scenario intelligent scheduling and dynamic scheduling method according to the present invention. The service scene intelligent arrangement and dynamic scheduling method comprises the following steps:
step S10, generating or determining at least one target scheme according to service description content of a target user in response to receiving the service description content;
specifically, in order to overcome the problems that the large language model is poor in generation capacity in a government affair system, the success rate of using related complex tasks by tools in a business scene is low, and the like, the embodiment of the invention provides an API scheduling method of the government affair system based on the large language model. In preparation, an API service resource pool is firstly constructed, and natural language function description, parameters and return value description are added to each callable API. And collecting a business scene corpus and a task planning step, constructing a government system knowledge data set, and generating a large model by using Fine-tuning (Fine-tuning) of the data set to enhance the generation and generalization capabilities of the model in the government field. And the task step-by-step planning of the government affair system is realized by prompting the mounting mode of the engineering and government affair knowledge base, the API scheduling code of each step is output by utilizing the large model code generating capability, and finally, the task scheduling is realized by executing the code.
On the staff-facing page of the system, task flow arrangement can be assisted through interaction with the AI. After the large model identifies the user intent, an approximate flow stored in the system is queried through the knowledge graph, an API call node sequence for task execution is generated, and parameters are filled automatically according to the user intent. Secondary node arrangement and order modification can then be performed through multi-round dialogue or manual debugging on the user-facing page, and after confirmation the flow is stored in the database for release. The published flow is stored in the system database, and the call order and nesting relation of each node interface are defined in XML format.
Optionally, in the embodiment of the present invention, a dialogue platform is provided for the user, and when the target user has a task requirement, the dialogue platform may be used to describe the service description content, for example, if the target user needs to reserve a meeting room, the service description content may include information such as meeting participants, time, location, and the like.
Optionally, after the administrative system receives the service description content of the target user, an embedded vector may be generated according to the service description content, and then similarity search is performed on the generated embedded vector in the vector database, so as to determine at least one associated vector and a corresponding task number; further, in the relational database, a corresponding target scheme can be determined according to the task number, and then the target scheme is input into a preset large model as prompt information to generate a target code for calling each API resource and completing related task requirements.
Optionally, the service description content may include a target step scheme description, based on the target step scheme description, an API service resource pool is searched through the NLP model, an API queue of each target step may be obtained, and the API queue is input to the preset large model, that is, executable codes may be output according to the API queue through the preset large model.
And step S20, generating and executing target codes based on the at least one target scheme, obtaining an execution result, and feeding back the execution result to the target user.
Further, after generating or determining at least one target scheme according to the service description content, generating and executing the target code based on the at least one target scheme to obtain an execution result, and feeding back the execution result to the target user.
Optionally, the preset large model provided in the embodiment of the present invention is a large language base model, which can provide strong dialogue, context learning and code generating capability on an open domain task, and can generate a high-level solution outline for a specific domain task.
Optionally, after generating or determining at least one target scheme according to the service description content, at least one target scheme can be input as a prompt (prompt) to a preset large model, and the format of the output scheme is specified, and the scheme content meeting the requirements of the target user can be generated through the preset large model.
Optionally, through an API selector, according to the scheme content output by the preset large model for each step and in combination with the expertise in the vector database, the most suitable API is selected from the service resource pool to form an API queue; the code generation model then generates executable code and fills in API parameters according to the API queue and the context, and the executable code is executed to obtain an execution result.
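A minimal sketch of this selector-to-execution pipeline follows, under stated assumptions: the `rank` and `code_model` callables stand in for the vector-database similarity search and the code generation model, neither of which the patent specifies in code.

```python
from typing import Callable, Dict, List

def select_apis(step_descriptions: List[str],
                rank: Callable[[str], List[str]]) -> List[str]:
    """API selector: for each scheme step, pick the best-matching API name.
    `rank` stands in for the similarity search against the vector database and
    returns candidate API names ordered by relevance (an assumption)."""
    queue: List[str] = []
    for step in step_descriptions:
        candidates = rank(step)
        if candidates:
            queue.append(candidates[0])   # most suitable API for this step
    return queue

def generate_execution_code(api_queue: List[str],
                            params: Dict[str, dict],
                            code_model: Callable[[str], str]) -> str:
    """Ask the code-generation model for executable code over the API queue."""
    prompt = "Generate Python code that calls these APIs in order:\n"
    for name in api_queue:
        prompt += f"- {name} with parameters {params.get(name, {})}\n"
    return code_model(prompt)

def run(code: str) -> dict:
    """Service executor: run the generated code and collect its result."""
    scope: dict = {}
    exec(code, scope)                     # a real system would sandbox this
    return scope.get("result", {})
```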
Optionally, after the execution result is fed back to the target user, the user's satisfaction evaluation can be collected for optimizing the preset large model; the execution result of each run can also be used as reinforcement learning data to continuously improve the scheme planning capability of the model.
In this embodiment, in response to receiving service description content of a target user, at least one target scheme is generated or determined according to the service description content; target code is then generated and executed based on the at least one target scheme to obtain an execution result, which is fed back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly, which improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.
Referring to fig. 3, fig. 3 is a specific flowchart of step S10 in the embodiment of fig. 2. The present embodiment is based on the embodiment shown in fig. 2, and in the present embodiment, the step S10 includes:
step S101, generating an embedded vector according to the service description content;
step S102, carrying out similarity search on the embedded vectors based on a preset vector database, and determining at least one associated vector and a corresponding task number;
step S103, determining the at least one target scheme according to the task number in a preset relational database.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram of a functional architecture in an embodiment of the present invention. As shown in fig. 4, in the embodiment of the invention, computing-layer resources centered on large language model generation need to be deployed to provide natural language reasoning and scheme generation support, and related machine learning models need to be deployed to perform tasks such as code generation and API queue generation. The data layer needs to deploy a relational database, a vector database and a KV database to support government data storage and task/problem similarity calculation, and to improve system performance through caching. In addition, the application layer provides a large language model dialogue portal and an open interface service as the intelligent command tower interaction platform.
Optionally, in the embodiment of the invention, an API service resource pool is constructed, containing for each API its specific functions, parameters, and a natural-language description of the returned content.
Optionally, in the embodiment of the invention, question and answer texts in the government affair system are collected, the question and answer texts and the official answers are stored in a relational database, and embedded vectors corresponding to the IDs and the questions are stored in a vector database.
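A simplified sketch of that ingestion step is shown below, with SQLite standing in for the relational database, an in-memory list standing in for the vector database, and a generic `embed()` callable standing in for the embedding model; all of these substitutions are assumptions for illustration only.

```python
import sqlite3
from typing import Callable, List, Tuple

def ingest_qa(pairs: List[Tuple[str, str]],
              embed: Callable[[str], List[float]],
              db_path: str = "government_qa.db") -> List[Tuple[int, List[float]]]:
    """Store question/answer text in a relational table and keep the question
    embeddings alongside their row IDs for later similarity search."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS qa "
                 "(id INTEGER PRIMARY KEY, question TEXT, answer TEXT)")
    vector_index: List[Tuple[int, List[float]]] = []   # stand-in for the vector DB
    for question, answer in pairs:
        cur = conn.execute("INSERT INTO qa (question, answer) VALUES (?, ?)",
                           (question, answer))
        vector_index.append((cur.lastrowid, embed(question)))
    conn.commit()
    conn.close()
    return vector_index
```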
Optionally, after the administrative system receives the service description content of the target user, an embedded vector may be generated according to the service description content, and then similarity search is performed on the generated embedded vector in the vector database, so as to determine at least one associated vector and a corresponding task number; further, in the relational database, the corresponding target scheme may be determined according to the task number.
Optionally, the government system in the embodiment of the invention can generate an embedded vector from the business description content input by the user, perform a similarity search against the existing corpus vectors in the vector database to obtain the topK semantically similar vectors and their corresponding task IDs, and look up the text content and calling schemes of those tasks in the relational database by ID. The K schemes from the database are then input as prompts (prompt) to the ChatGLM large model, with the format of the output scheme prescribed, to generate target code that calls each API resource and completes the relevant task requirements. In the embodiment of the invention, with the vector database as support, the normativity of the large model's answers can be improved by mounting the professional knowledge base. Vector similarity search is used to realize prompt construction, which mitigates the limit on the number of large-model tokens, provides a solution for obtaining the best generated content within a limited input length, and improves the controllability of the large model.
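The retrieval side of that pipeline could look roughly like the following sketch, where a plain cosine-similarity scan stands in for the vector database's topK search and `build_prompt` illustrates how the K retrieved schemes might be packed into a format-constrained prompt; the prompt wording is an assumption, not taken from the patent.

```python
import math
from typing import List, Sequence, Tuple

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm + 1e-9)

def top_k_tasks(query_vec: Sequence[float],
                vector_index: List[Tuple[int, List[float]]],
                k: int = 3) -> List[int]:
    """Return the task IDs of the k stored question vectors most similar to the query."""
    scored = sorted(vector_index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [task_id for task_id, _ in scored[:k]]

def build_prompt(schemes: List[str]) -> str:
    """Pack the K schemes fetched from the relational database into a prompt
    that also prescribes the output format (wording is illustrative)."""
    header = ("You are a task planner for a government-service system. Based on the "
              "reference schemes below, output a numbered step list, one API call per step.\n")
    return header + "\n".join(f"Reference scheme {i + 1}:\n{s}"
                              for i, s in enumerate(schemes))
```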
According to the above solution, this embodiment specifically generates an embedded vector according to the service description content, performs a similarity search on the embedded vector based on a preset vector database to determine at least one associated vector and the corresponding task number, and determines the at least one target scheme in a preset relational database according to the task number; the government affair text generation capability can thus be optimized through the vector database and the relational database.
Referring to fig. 5, fig. 5 is a specific flowchart of step S20 in the embodiment of fig. 2. The present embodiment is based on the embodiment shown in fig. 2, and in the present embodiment, the step S20 includes:
step S201, inputting the at least one target scheme to a preset large model, and setting the format of the preset large model output scheme;
step S202, obtaining scheme contents output by the preset large model according to the at least one target scheme;
Step S203, generating an execution code according to the scheme content, and executing the execution code to obtain the execution result.
Optionally, before the step of inputting the at least one target scheme into a preset large model and setting the format of the preset large model output scheme, the method further includes:
acquiring historical question-answering text data;
formulating a corresponding task flow according to the historical question-answering text data to form a government affair service data set;
and training based on the government service data set to obtain the preset large model.
Optionally, in the embodiment of the invention, historical question-answer text data in the government affair system are collected, the related task flows are formulated, a government affair service data set is constructed, and the ChatGLM large language model is fine-tuned with this service data set to enhance its ability to generate service-related content. Although large models exhibit excellent capability in open-domain conversation and question-answering tasks, it is difficult for them to fulfill the function of a professional model or system in a specific business scenario, and the same holds in the government field. By collecting a government affair corpus data set, namely the historical question-answer text data, the embodiment of the invention enhances the expression and reasoning/planning capability of the large model in the government affair scenario through fine-tuning, so that the large model has higher accuracy and reliability when handling dialogues and task flows of the government affair scenario, providing support for the government affair system.
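A hypothetical sketch of how the historical question-answer text data might be turned into a fine-tuning data set follows; the instruction/input/output JSONL layout is a common convention assumed here, not a format the patent specifies.

```python
import json
from typing import Iterable, Tuple

def build_finetune_dataset(qa_records: Iterable[Tuple[str, str]],
                           out_path: str = "gov_service_dataset.jsonl") -> int:
    """Turn historical question-answer pairs (request text, task-flow answer)
    into instruction-style training samples; returns the number written."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as f:
        for question, task_flow in qa_records:
            sample = {
                "instruction": "Plan the government-service task flow for the request.",
                "input": question,
                "output": task_flow,
            }
            f.write(json.dumps(sample, ensure_ascii=False) + "\n")
            count += 1
    return count
```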
Optionally, the step of generating an execution code according to the scheme content and executing the execution code to obtain the execution result includes:
selecting at least one API from a service resource pool according to the scheme content through an API selector;
forming an API queue according to the at least one API, and generating the execution code according to the API queue;
collecting API parameters corresponding to the API queue, and filling the execution codes according to the API parameters;
and executing the filled execution code to obtain the execution result.
Optionally, the K target schemes from the database are input into the ChatGLM large model as prompts and the format of the output scheme is prescribed. The large model generates the scheme content, which is then confirmed: the scheme content can be returned to the user for confirmation, and/or confirmed automatically by the model, and after the scheme is confirmed to be correct the APIs are selected according to the scheme.
Optionally, the API selector selects the most suitable API from the service resource pool for each step of the large model's output, in combination with the expertise in the vector database, to form an API queue. The code generation model generates executable code and fills in API parameters according to the API queue and the context. If necessary parameters are missing, the large model feeds back to the user through the dialogue, describes the parameter clarification requirements, and performs multiple rounds of dialogue to collect them. After parameter collection is completed, the service execution module executes the code step by step. After execution is completed, the large model outputs the code execution result and feeds it back to the user.
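The parameter-clarification loop and step-by-step execution described above might be sketched as follows, with `ask_user` standing in for the multi-round dialogue and `call_api` for the service execution module; both are illustrative stand-ins rather than interfaces defined by the patent.

```python
from typing import Callable, Dict, List

def collect_parameters(required: Dict[str, List[str]],
                       known: Dict[str, dict],
                       ask_user: Callable[[str], str]) -> Dict[str, dict]:
    """Multi-round clarification: for every API in the queue, ask the user for
    any required parameter that is still missing."""
    for api_name, param_names in required.items():
        filled = known.setdefault(api_name, {})
        for p in param_names:
            while not filled.get(p):
                filled[p] = ask_user(f"Please provide '{p}' for step '{api_name}': ")
    return known

def execute_steps(api_queue: List[str],
                  params: Dict[str, dict],
                  call_api: Callable[[str, dict], dict]) -> List[dict]:
    """Service execution module: run the queue step by step and gather results."""
    results: List[dict] = []
    for api_name in api_queue:
        results.append(call_api(api_name, params.get(api_name, {})))
    return results
```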
Optionally, the step of generating an execution code according to the scheme content and executing the execution code to obtain the execution result further includes:
and storing the execution result to be used for strengthening learning of the preset large model.
Optionally, in the embodiment of the present invention, by storing the execution result of each time for use as reinforcement learning data, the planning capability of the model scheme can be continuously improved.
Optionally, the step of obtaining the scheme content output by the preset large model according to the at least one target scheme further includes:
converting the scheme content into structured data;
and storing the structured data into the relational database for calling in the next business process arrangement process.
Optionally, after the process is successfully executed, the process may be published and stored in a database. The published flow is stored in the system's structured database, and the mapping relation between the flow and the XML file is established through fields, so that it can be called quickly in subsequent business process arrangement.
This embodiment inputs the at least one target scheme into a preset large model and sets the format of the scheme output by the preset large model, obtains the scheme content output by the preset large model according to the at least one target scheme, and generates an execution code according to the scheme content and executes it to obtain the execution result; the code generation logic of the large model is strengthened by generating an API queue and performing parameter filling, and the accuracy of the generated execution code is improved by a method combining fine-tuning and prompt engineering.
In addition, the embodiment of the invention also provides a service scene intelligent arrangement and dynamic scheduling device, which comprises:
the response module is used for responding to the received service description content of the target user and generating or determining at least one target scheme according to the service description content;
and the execution module is used for generating and executing the target code based on the at least one target scheme, obtaining an execution result and feeding back the execution result to the target user.
Referring to fig. 6, fig. 6 is an overall flow chart in the embodiment of the present invention. As shown in fig. 6, the embodiment of the invention is illustrated with the example of a user reserving a meeting room; the detailed steps are as follows:
1. the user describes the requirements of the reservation task (e.g., meeting room reservation: participants, time and place) through the dialogue platform;
2. the government affair system generates an embedded vector from the text input by the user;
3. in the vector database, a similarity search is performed against the existing corpus vectors to obtain the topK semantically similar vectors and the corresponding task IDs;
4. in the relational database, the text content and calling scheme corresponding to each task are looked up according to the ID;
5. the K schemes from the database are input as prompts (prompt) into the ChatGLM large model, and the format of the output scheme is prescribed;
6. the scheme generated by the large model is returned to the user for confirmation; after confirmation, the APIs are selected according to the scheme;
7. the API selector, based on the content output by the large model for each step and in combination with the expertise in the vector database, selects the most suitable API from the service resource pool to form an API queue;
8. the code generation model generates executable code and fills in API parameters according to the API queue and the context;
9. if necessary parameters are not filled, the large model feeds back to the user through the dialogue, describes the parameter clarification requirement, and performs multiple rounds of dialogue modification;
10. after parameter collection is completed, the service execution module executes the code step by step;
11. after execution is completed, the large model outputs the code execution result and feeds it back to the user;
12. the execution result is stored as reinforcement learning data to continuously improve the planning capability of the model's schemes;
13. after the process is successfully executed, it is published and stored in the database. The published flow is stored in the system's structured database, and the mapping relation between the flow and the XML file is established through fields.
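Tying the pieces together, an end-to-end sketch of steps 1 through 13 could look like the following; every argument is a hypothetical callable standing in for one of the components sketched earlier, and the user confirmation in step 6 is omitted for brevity.

```python
def handle_request(user_text, embed, top_k_tasks, fetch_schemes, build_prompt,
                   llm, select_apis, collect_params, execute_steps, store_result):
    """End-to-end sketch of steps 1-13; all arguments are illustrative stand-ins."""
    query_vec = embed(user_text)                          # steps 1-2: embed the request
    task_ids = top_k_tasks(query_vec)                     # step 3: topK search (vector DB)
    schemes = fetch_schemes(task_ids)                     # step 4: scheme lookup (relational DB)
    plan_steps = llm(build_prompt(schemes)).splitlines()  # steps 5-6: ChatGLM scheme generation
    api_queue = select_apis(plan_steps)                   # step 7: API selector builds the queue
    params = collect_params(api_queue)                    # steps 8-9: fill / clarify parameters
    results = execute_steps(api_queue, params)            # step 10: step-by-step execution
    store_result(user_text, plan_steps, results)          # steps 12-13: store for RL / publish
    return results                                        # step 11: result fed back to the user
```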
Optionally, in the pre-stage, at least one of the following steps is performed:
constructing an API service resource pool, in which each API includes its specific functions, parameters and a natural-language description of the returned content;
collecting a question and answer text in a government affair system, formulating a related task flow, and constructing a government affair service data set;
fine-tuning a ChatGLM large language model by using a service data set, and enhancing the generation capacity of service related contents;
the question-answer text and the official answer are stored in a relational database, and the embedded vector corresponding to the ID and question is stored in a vector database.
Optionally, the key modules and techniques involved in the embodiments of the present invention include at least one of:
large Language Model (LLM): the artificial intelligence model based on deep learning can be used for understanding and generating tasks aiming at various natural languages, and supports fine adjustment by using industry data so as to adapt to different service scenes. The open source large model comprises GPT-3, chatGLM, baichuan and the like, and has a certain question-answering and thinking capability after being trained by a general corpus.
Machine learning expert model: the invention uses other machine learning models or professional models to make up for the limitations of the large model in professional scenario domains, completing tasks such as code generation, task queue generation, API selection, and image-text conversion.
Fine-tuning (Fine-tuning): on the basis of the model which has been pre-trained, the model parameters are adapted to a specific task or a specific data set by further training and adjusting them. The ChatGLM pre-training model has already completed training on a large scale dataset, with the understanding ability of the language. During fine tuning, the pre-training model is loaded firstly, most of parameters and weights of the model are reserved, and then the model is further trained by using a government data set, so that the parameters of the model can be fine-tuned to better adapt to task characteristics and requirements.
Prompt engineering (Prompt Engineering): when a language model is used for generation tasks, the output of the model is guided by designing and adjusting the prompt text (Prompt). Based on the task reasoning and planning capability of the large language model in a government system, the invention provides a step-by-step scheduling method for user task APIs. With the prompt engineering module and the service executor as assistance, the large model can call APIs step by step to complete complex reasoning tasks. The system uses LangChain as the development framework and first guides the large language model to generate high-quality output in a specific format by designing and optimizing prompt words (prompt). A large language model can understand the user's natural-language intent but does not necessarily output the expected response; well-designed prompts can better guide the model's behavior toward the result the user intends. After the scheme is constructed, the information is input to the service executor, which retrieves and executes the scheme's description content in the scheme execution stage.
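A minimal illustration of such prompt construction with LangChain's PromptTemplate is given below; the template wording, variable names, and example request are assumptions for illustration rather than the patent's actual prompts.

```python
from langchain.prompts import PromptTemplate

scheme_prompt = PromptTemplate(
    input_variables=["user_request", "reference_schemes"],
    template=(
        "You are a task planner for a government-service system.\n"
        "User request: {user_request}\n"
        "Reference schemes retrieved from the knowledge base:\n"
        "{reference_schemes}\n"
        "Output a numbered list of steps. Each step must name exactly one API "
        "from the service resource pool and describe its parameters."
    ),
)

prompt_text = scheme_prompt.format(
    user_request="Reserve a meeting room for the project team tomorrow at 10:00",
    reference_schemes="1) Check room availability; 2) Create reservation; 3) Notify attendees",
)
# prompt_text would then be sent to the fine-tuned ChatGLM model.
```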
Vector database: the application of a professional knowledge base can reduce the hallucination problem of the large model and improve answer accuracy. The system uses a vector database to store the text vectors of similar questions and answers; when the system receives a user's natural-language question or command, a similarity algorithm is used to search the library semantically, and the retrieved standard questions and answers are input as prompts to the large model to improve the accuracy of the generated answers.
XML file: an extensible markup language for storing and transmitting structured data. Tags are typically used in files to tag data elements and represent relationships and hierarchies between data through nesting and attributes. The method stores the flow content in an XML file, and defines an interface ID, an interaction type (manual, automatic synchronous and automatic asynchronous), an executor, whether to start or not and a node type (office and meeting) for each node. The XML file records flow nodes in sequence, layering is achieved through nesting, the outermost layer represents the whole flow by using a < process > tag, each flow needs to comprise a start node and an end node, the start node and the end node are respectively represented by using a < start > tag and a < end > tag, and the tags comprise elements such as a display name (displayname), a shape (shape), an ID (identity), a type (type) and the like. The task node is denoted by < task >, is also nested in < process >, is in parallel relation with the starting node and the ending node, and is displayed according to the flow sequence, and the elements comprise a display name (displayne), a shape (shape), an ID and an interaction type (interactive type). Meanwhile, except for the end node, each node is provided with a nested < transition > tag as an indication of the next node, and the nested < transition > tag comprises an element ID and the next node ID.
In this embodiment, the task planning capability of the large model in the government scenario is optimized by fine-tuning. Although the large model exhibits excellent capability in open-domain conversation and question-answering tasks, it is difficult for it to fulfill the role of a professional model or system in a specific business scenario, and the same holds in the government field. By collecting a government affair corpus data set, the embodiment of the invention enhances the expression and reasoning/planning capability of the large model in the government scenario through fine-tuning, so that the large model has higher accuracy and reliability when handling dialogues and task flows of the government scenario, providing support for the government system. The generation capability for government texts is optimized through the professional knowledge base and the knowledge graph: in the embodiment of the invention, with the vector database as support, the normativity of the large model's answers is improved by mounting the professional knowledge base, and vector similarity search is used to realize prompt construction, which mitigates the limit on the number of large-model tokens, provides a solution for obtaining the best generated content within a limited input length, and keeps the large model controllable to the greatest extent. The model's code generation capability is also optimized: API descriptions are deeply optimized, attributes and rule bases are introduced to strengthen the code generation logic of the large model, and the accuracy of the generated code is improved by combining fine-tuning and prompt engineering. Finally, the flexibility of the system is increased: the task processing flow is decoupled from the core logic of the system, and business personnel are allowed to flexibly configure and modify task flows without programming capability, so that the system adapts to different business scenarios, provides personalized demand support, and improves operational efficiency.
For the principles and implementation process of the service scene intelligent arrangement and dynamic scheduling realized by this embodiment, please refer to the above embodiments; details are not repeated here.
In addition, the embodiment of the invention also provides a terminal device, which comprises a memory, a processor and a service scene intelligent arrangement and dynamic scheduling program which is stored in the memory and can run on the processor, wherein the service scene intelligent arrangement and dynamic scheduling program realizes the steps of the service scene intelligent arrangement and dynamic scheduling method when being executed by the processor.
Because all the technical schemes of all the embodiments are adopted when the service scene intelligent arrangement and dynamic scheduling program is executed by the processor, the service scene intelligent arrangement and dynamic scheduling program at least has all the beneficial effects brought by all the technical schemes of all the embodiments, and the description is omitted.
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with a service scene intelligent arrangement and dynamic scheduling program, and the service scene intelligent arrangement and dynamic scheduling program realizes the steps of the service scene intelligent arrangement and dynamic scheduling method when being executed by a processor.
Because all the technical schemes of all the embodiments are adopted when the service scene intelligent arrangement and dynamic scheduling program is executed by the processor, the service scene intelligent arrangement and dynamic scheduling program at least has all the beneficial effects brought by all the technical schemes of all the embodiments, and the description is omitted.
Compared with the prior art, the service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium provided by the embodiment of the invention, in response to receiving service description content of a target user, generate or determine at least one target scheme according to the service description content, then generate and execute target code based on the at least one target scheme to obtain an execution result, and feed the execution result back to the target user. Because at least one target scheme is generated or determined from the service description content provided by the target user, target code is generated and executed based on that scheme, and the multi-step tasks in the actual service system are completed by executing the code, the execution result is obtained directly, which improves the intelligence level of service process arrangement, meets the personalized requirements of the target user, reduces the dependence on manual experience during service process arrangement, and thereby improves service process arrangement efficiency.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the content of this description, or any direct or indirect application in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (10)

1. A service scene intelligent arrangement and dynamic scheduling method, characterized by comprising the following steps:
in response to receiving service description content of a target user, generating or determining at least one target scheme according to the service description content;
and generating and executing target code based on the at least one target scheme to obtain an execution result, and feeding the execution result back to the target user.
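For illustration only, the following is a minimal Python sketch of the flow recited in claim 1, with the scheme determination, code generation, execution, and user-feedback steps passed in as placeholder callables; all names are assumptions of this sketch rather than part of the claimed method.

```python
# Minimal sketch of the claimed overall flow; every name here is an
# illustrative placeholder, not the patent's actual implementation.
def orchestrate(service_description: str,
                determine_schemes,   # callable: description -> list of target schemes
                generate_code,       # callable: schemes -> executable target code
                execute,             # callable: target code -> execution result
                notify_user):        # callable: result -> None (feedback channel)
    """Respond to a target user's service description end to end."""
    # 1) Generate or determine at least one target scheme from the description.
    target_schemes = determine_schemes(service_description)
    if not target_schemes:
        raise ValueError("no target scheme could be generated or determined")

    # 2) Generate target code based on the scheme(s) and execute it.
    target_code = generate_code(target_schemes)
    execution_result = execute(target_code)

    # 3) Feed the execution result back to the target user.
    notify_user(execution_result)
    return execution_result
```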
2. The service scene intelligent arrangement and dynamic scheduling method according to claim 1, wherein the step of generating or determining at least one target scheme according to the service description content comprises:
generating an embedded vector according to the service description content;
performing a similarity search on the embedded vector in a preset vector database, and determining at least one associated vector and the corresponding task number;
and determining the at least one target scheme in a preset relational database according to the task number.
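A hedged sketch of the retrieval chain in claim 2, assuming a generic embedding function, an in-memory list of (task number, vector) pairs standing in for the vector database, and a SQLite table schemes(task_number, scheme) standing in for the relational database; none of these concrete choices are specified by the patent.

```python
# Sketch of claim 2: embed the description, run a similarity search, then look
# the task numbers up in a relational database (SQLite here, as an assumption).
import sqlite3
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def determine_target_schemes(description, embed_fn, vector_db, db_path, top_k=3):
    # 1) Generate an embedded vector from the service description content.
    query_vec = embed_fn(description)

    # 2) Similarity search against the preset vector database, modeled here as
    #    a list of (task_number, vector) pairs; keep the top-k task numbers.
    ranked = sorted(vector_db, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    task_numbers = [task_no for task_no, _ in ranked[:top_k]]
    if not task_numbers:
        return []

    # 3) Determine the target schemes in the relational database by task number.
    conn = sqlite3.connect(db_path)
    placeholders = ",".join("?" for _ in task_numbers)
    rows = conn.execute(
        f"SELECT task_number, scheme FROM schemes WHERE task_number IN ({placeholders})",
        task_numbers,
    ).fetchall()
    conn.close()
    return rows
```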
3. The service scene intelligent arrangement and dynamic scheduling method according to claim 2, wherein the step of generating and executing the target code based on the at least one target scheme to obtain the execution result comprises:
inputting the at least one target scheme into a preset large model, and setting the format in which the preset large model outputs the scheme;
obtaining scheme content output by the preset large model according to the at least one target scheme;
and generating execution code according to the scheme content, and executing the execution code to obtain the execution result.
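An illustrative sketch of claim 3, under the assumption that the preset large model is reachable through a callable `llm` returning JSON text in the requested format, and that code generation and execution are delegated to further callables; these assumptions are this sketch's, not the patent's.

```python
# Sketch of claim 3: constrain the large model's output format, obtain the
# scheme content, then generate and execute code. `llm`, `code_generator` and
# `executor` are assumed callables, not parts of the patent.
import json

OUTPUT_FORMAT = (
    "Return the scheme strictly as JSON: "
    '{"steps": [{"name": "...", "api": "...", "params": {}}]}'
)

def run_target_schemes(target_schemes, llm, code_generator, executor):
    # Input the target scheme(s) into the preset large model together with the
    # required output format.
    prompt = OUTPUT_FORMAT + "\nTarget schemes:\n" + "\n".join(map(str, target_schemes))
    scheme_content = json.loads(llm(prompt))  # assumes the model honors the format

    # Generate execution code from the scheme content, then execute it.
    execution_code = code_generator(scheme_content)
    return executor(execution_code)
```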
4. The service scene intelligent arrangement and dynamic scheduling method according to claim 3, wherein the step of generating execution code according to the scheme content and executing the execution code to obtain the execution result comprises:
selecting at least one API from a service resource pool according to the scheme content through an API selector;
forming an API queue from the at least one API, and generating the execution code according to the API queue;
collecting the API parameters corresponding to the API queue, and filling the execution code according to the API parameters;
and executing the filled execution code to obtain the execution result.
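A possible reading of claim 4 in Python, modeling the API selector, the parameter collector, and the executor as plain callables and the execution code as an ordered list of API calls; this is one illustrative realization, not the claimed implementation.

```python
# Sketch of claim 4: select APIs, form a queue, generate execution code as an
# ordered list of calls, fill in parameters, and execute.
from collections import deque

def build_and_run(scheme_content, resource_pool, select_apis, collect_params, run_code):
    # Select at least one API from the service resource pool via the API selector.
    selected = select_apis(scheme_content, resource_pool)
    if not selected:
        raise RuntimeError("API selector returned no APIs for this scheme")

    # Form an API queue and generate the execution code from it.
    api_queue = deque(selected)
    execution_code = [{"api": api, "params": None} for api in api_queue]

    # Collect the API parameters for the queue and fill the execution code.
    for call in execution_code:
        call["params"] = collect_params(call["api"], scheme_content)

    # Execute the filled execution code to obtain the execution result.
    return run_code(execution_code)
```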
5. The service scene intelligent arrangement and dynamic scheduling method according to claim 3, wherein the step of generating execution code according to the scheme content and executing the execution code to obtain the execution result further comprises:
and storing the execution result for use in reinforcement learning of the preset large model.
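A minimal sketch of claim 5 that appends each execution result to a JSON-lines file which a later reinforcement-learning step could consume; the file name and record layout are assumptions of this sketch.

```python
# Sketch of claim 5: persist each execution result so it can later feed
# reinforcement learning of the preset large model.
import json
import time

def store_for_reinforcement(execution_result, scheme_content, path="rl_feedback.jsonl"):
    record = {
        "timestamp": time.time(),
        "scheme": scheme_content,
        "result": execution_result,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```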
6. The service scene intelligent arrangement and dynamic scheduling method according to claim 3, wherein the step of obtaining the scheme content output by the preset large model according to the at least one target scheme further comprises:
converting the scheme content into structured data;
and storing the structured data in the relational database for retrieval in a subsequent business process arrangement.
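An illustrative sketch of claim 6, assuming the scheme content is a Python dict and that a SQLite table structured_schemes(task_number, payload) plays the role of the relational database; both assumptions are for demonstration only.

```python
# Sketch of claim 6: convert the scheme content to structured data and store
# it in the relational database (SQLite, assumed) for reuse in a later
# business process arrangement.
import json
import sqlite3

def persist_structured_scheme(scheme_content, db_path):
    structured = {
        "task_number": scheme_content.get("task_number"),
        "steps": scheme_content.get("steps", []),
    }
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS structured_schemes (task_number TEXT, payload TEXT)"
    )
    conn.execute(
        "INSERT INTO structured_schemes (task_number, payload) VALUES (?, ?)",
        (structured["task_number"], json.dumps(structured, ensure_ascii=False)),
    )
    conn.commit()
    conn.close()
    return structured
```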
7. The service scene intelligent arrangement and dynamic scheduling method according to claim 3, wherein before the step of inputting the at least one target scheme into the preset large model and setting the format in which the preset large model outputs the scheme, the method further comprises:
acquiring historical question-answering text data;
formulating corresponding task flows according to the historical question-answering text data to form a government affair service data set;
and training based on the government affair service data set to obtain the preset large model.
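A hedged sketch of the preparation step in claim 7: historical question-answer pairs are turned into task-flow records forming the government affair service data set, and a fine-tuning callable stands in for the training procedure the claim does not detail.

```python
# Sketch of claim 7: build the government affair service data set from
# historical question-answer pairs and train the preset large model on it.
# `derive_task_flow` and `fine_tune` are purely illustrative placeholders.
def build_and_train(historical_qa, derive_task_flow, fine_tune):
    government_dataset = [
        {"question": q, "answer": a, "task_flow": derive_task_flow(q, a)}
        for q, a in historical_qa
    ]
    preset_large_model = fine_tune(government_dataset)
    return preset_large_model
```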
8. A service scene intelligent arrangement and dynamic scheduling device, characterized by comprising:
a response module, used for generating or determining at least one target scheme according to service description content of a target user in response to receiving the service description content;
and an execution module, used for generating and executing target code based on the at least one target scheme, obtaining an execution result, and feeding the execution result back to the target user.
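For illustration, the claimed device of claim 8 could be sketched as two cooperating components; the class and method names below are assumptions of this sketch, not the patent's module implementations.

```python
# Sketch of the claimed device as two cooperating components.
class ResponseModule:
    def __init__(self, determine_schemes):
        self._determine_schemes = determine_schemes

    def handle(self, service_description):
        # Generate or determine at least one target scheme from the description.
        return self._determine_schemes(service_description)


class ExecutionModule:
    def __init__(self, generate_code, execute, notify_user):
        self._generate_code = generate_code
        self._execute = execute
        self._notify_user = notify_user

    def run(self, target_schemes):
        # Generate and execute the target code, then feed the result back.
        result = self._execute(self._generate_code(target_schemes))
        self._notify_user(result)
        return result
```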
9. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a service scene intelligent arrangement and dynamic scheduling program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the service scene intelligent arrangement and dynamic scheduling method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that a service scene intelligent arrangement and dynamic scheduling program is stored on the computer readable storage medium, and the program, when executed by a processor, implements the steps of the service scene intelligent arrangement and dynamic scheduling method according to any one of claims 1-7.
CN202311827229.8A 2023-12-28 2023-12-28 Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium Pending CN117472552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311827229.8A CN117472552A (en) 2023-12-28 2023-12-28 Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311827229.8A CN117472552A (en) 2023-12-28 2023-12-28 Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117472552A (en) 2024-01-30

Family

ID=89627832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311827229.8A Pending CN117472552A (en) 2023-12-28 2023-12-28 Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117472552A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108432208A (en) * 2016-12-15 2018-08-21 华为技术有限公司 A kind of arranging service method, apparatus and server
CN113176938A (en) * 2021-05-25 2021-07-27 深圳前海微众银行股份有限公司 Scheduling method, system, terminal device and storage medium for customer service
CN114594927A (en) * 2021-08-12 2022-06-07 湖南亚信安慧科技有限公司 Low code development method, device, system, server and storage medium
CN115391004A (en) * 2022-08-02 2022-11-25 中信建投证券股份有限公司 Task scheduling system, method and device and electronic equipment
CN116911588A (en) * 2023-07-21 2023-10-20 中国移动通信有限公司政企客户分公司 Business process execution method, device, equipment and storage medium
CN116976640A (en) * 2023-08-30 2023-10-31 中电科东方通信集团有限公司 Automatic service generation method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
Harms et al. Approaches for dialog management in conversational agents
US9542940B2 (en) Method and system for extending dialog systems to process complex activities for applications
CN110059170B (en) Multi-turn dialogue online training method and system based on user interaction
CN110377720A (en) The more wheel exchange methods of intelligence and system
CN110096516B (en) User-defined database interaction dialog generation method and system
CN109948151A (en) The method for constructing voice assistant
CN116483980A (en) Man-machine interaction method, device and system
CN116757652B (en) Online recruitment recommendation system and method based on large language model
CN112199486A (en) Task type multi-turn conversation method and system for office scene
CN111538825A (en) Knowledge question-answering method, device, system, equipment and storage medium
CN117077792B (en) Knowledge graph-based method and device for generating prompt data
Xu et al. Dialogue management based on entities and constraints
CN110971683B (en) Service combination method based on reinforcement learning
CN117472552A (en) Service scene intelligent arrangement and dynamic scheduling method, device, equipment and medium
CN116777568A (en) Financial market transaction advanced intelligent dialogue ordering method, device and storage medium
CN112069830A (en) Intelligent conversation method and device
US8752004B2 (en) System and a method for generating a domain-specific software solution
CN115294988A (en) Voice interaction system and method for collaboration
CN113689851A (en) Scheduling professional language understanding system and method
CN111061846A (en) Electric power new installation and capacity increase conversation customer service system and method based on layered reinforcement learning
CN110222161A (en) Talk with robot intelligent response method and device
CN114117024B (en) Platform construction method for multi-round conversation function scene
CN112487170B (en) Man-machine interaction dialogue robot system facing scene configuration
CN107092515A (en) A kind of LPMLN inference methods and system based on rebound strength curve logical program
CN117787668A (en) Target distribution method, device, electronic equipment, storage medium and program product based on large language model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination