CN117112065A - Large model plug-in calling method, device, equipment and medium

Large model plug-in calling method, device, equipment and medium

Info

Publication number
CN117112065A
Authority
CN
China
Prior art keywords
plug-in
content
hit
parameter
natural language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311109649.2A
Other languages
Chinese (zh)
Inventor
谢永康
高古明
赵鹏昊
熊雪
王倩
徐东泽
施恩
李雨轩
周胜
李曙鹏
王耀
忻舟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311109649.2A
Publication of CN117112065A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/44526 Plug-ins; Add-ons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4482 Procedural

Abstract

The disclosure provides a method, device, equipment, and medium for calling a large model plug-in, relating to the field of large models, and in particular to the fields of artificial intelligence, large language models, and human-machine interaction. The specific implementation scheme is as follows: acquiring natural language content; performing semantic understanding on the natural language content, and determining the hit plug-in hit by the natural language content; determining language understanding content according to the hit plug-in and the natural language content; sending the language understanding content to a large language model to obtain the parameter values of the input parameters of the hit plug-in; and calling the hit plug-in according to those parameter values to obtain a call result. Embodiments of the disclosure can improve the universality of a large language model.

Description

Large model plug-in calling method, device, equipment and medium
Technical Field
The disclosure relates to the field of large models, in particular to the fields of artificial intelligence, large language models, and human-machine interaction, and especially to a large model plug-in calling method, device, equipment, and medium.
Background
In recent years, the understanding and generation capabilities of large language models have improved greatly, and their fields of application have expanded widely.
A large language model (LLM) is essentially a generative model: a deep learning model trained on large amounts of text data that can understand the meaning of language text and generate content matching the user's intent, for example performing tasks, conducting human-machine dialogue, answering questions, generating images, and the like.
Disclosure of Invention
The disclosure provides a large model plug-in calling method, a device, equipment and a medium.
According to an aspect of the present disclosure, there is provided a large model plug-in calling method, including:
acquiring natural language content;
performing semantic understanding on the natural language content, and determining the hit plug-in hit by the natural language content;
determining language understanding content according to the hit plug-in and the natural language content;
sending the language understanding content to a large language model to obtain the parameter values of the input parameters of the hit plug-in;
and calling the hit plug-in according to the parameter values of its input parameters to obtain a call result.
According to an aspect of the present disclosure, there is provided a large model plug-in calling device, including:
the natural language content acquisition module is used for acquiring natural language content;
The plug-in matching module is used for carrying out semantic understanding on the natural language content and determining a hit plug-in hit by the natural language content;
the understanding content determining module is used for determining language understanding content according to the hit plug-in and the natural language content;
the input parameter detection module is used for sending the language understanding content to a large language model to obtain a parameter value of the input parameter of the hit plug-in;
and the plug-in calling module is used for calling the hit plug-in according to the parameter values of the input parameters of the hit plug-in to obtain a call result.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the large model plug-in calling method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the large model plug-in calling method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the large model plug-in calling method of any of the embodiments of the present disclosure.
The embodiment of the disclosure can improve the universality of a large language model.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flowchart of a large model plug-in calling method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another large model plug-in calling method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another large model plug-in calling method according to an embodiment of the present disclosure;
FIG. 4 is a scenario diagram of a large model plug-in calling method according to an embodiment of the present disclosure;
FIG. 5 is a scenario diagram of a large model plug-in calling method according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a large model plug-in calling apparatus according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device for a large model plug-in calling method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 is a flowchart of a large model plug-in calling method according to an embodiment of the present disclosure, applicable to extending plug-ins for a large language model. The method of this embodiment may be executed by a large model plug-in calling apparatus, which may be implemented in software and/or hardware and is configured in an electronic device with data processing capability. The electronic device may be a client device or a server device; the client device may be, for example, a mobile phone, a tablet computer, a vehicle-mounted terminal, or a desktop computer.
The apparatus or system executing the large model plug-in calling method provided by the embodiments of the present disclosure sits between the large language model and the plug-ins, serving as a bridge between them so that any plug-in, whatever its function, can be docked to the large language model. The apparatus or system may interact with plug-ins through an application programming interface (API) and likewise interact with clients through an API, thereby obtaining the natural language content that a user provides and a client sends.
S101, acquiring natural language content.
The natural language content is the natural language input provided by a user during human-machine interaction; it carries the user's intent regarding the functionality to be invoked through the large language model. The user may input data of at least one type, such as text, image, voice, or video, and the directly input data is recognized to obtain the natural language content. A client may receive the user's input data and send it to the electronic device of the embodiments of the present disclosure, which obtains the natural language content from the input data.
S102, performing semantic understanding on the natural language content, and determining the hit plug-in hit by the natural language content.
Semantic understanding is used to identify the user's intent in the natural language content. The hit plug-in is the plug-in that implements the functionality of the user intent recognized from the natural language content. The number of hit plug-ins may be at least one, and input parameter detection and invocation may be performed for the hit plug-ins one by one. Semantic understanding may be performed on the natural language content to obtain the user's intent, the user's intent is matched against the functions of all available plug-ins, and the plug-in corresponding to the user's intent is determined as the hit plug-in. Both the semantic understanding and the matching of the user's intent to plug-ins may be implemented with a deep learning model.
S103, determining language understanding content according to the hit plug-in and the natural language content.
The language understanding content is an input to the large language model. It includes the related information of the hit plug-in and the natural language content, and may also include other content, such as context, without limitation. The related information of the hit plug-in can be queried and combined with the natural language content to obtain the language understanding content.
S104, sending the language understanding content to a large language model to obtain the parameter values of the input parameters of the hit plug-in.
The parameter values of the input parameters are the input information of the hit plug-in; the hit plug-in processes them to obtain the call result.
Providing the related information of the hit plug-in together with the natural language content to the large language model makes the large language model responsible for the semantic understanding and for generating the information necessary to call the hit plug-in, which may include the input parameters of the hit plug-in and the parameter values of those input parameters. The related information of the hit plug-in may include information on its function and on its inputs and outputs.
The language understanding content is sent to the large language model to obtain the input parameters of the hit plug-in and the parameter values corresponding to those input parameters.
S105, calling the hit plug-in according to the parameter values of its input parameters to obtain a call result.
Data conforming to the plug-in calling protocol is generated based on the parameter values of the input parameters and sent to the hit plug-in to call it; the hit plug-in obtains a call result based on the input parameters and their parameter values and feeds the call result back. The electronic device of the embodiments of the present disclosure receives the call result fed back by the hit plug-in. The call result may then be fed back to the client, which may provide it to the user.
Illustratively, the hit plug-in is a weather query plug-in whose input parameters are a time parameter and a location parameter; the value of the time parameter is today and the value of the location parameter is local. The call result of the hit plug-in is that the local weather today is sunny.
Illustratively, the hit plug-in is a hotel reservation plug-in whose input parameters include a hotel name parameter and a number-of-guests parameter; the value of the hotel name parameter is A and the value of the number-of-guests parameter is 2. The call result of the hit plug-in is confirmation that a room for 2 guests at hotel A has been reserved.
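For illustration only, the S101 to S105 flow can be sketched in Python as below. Everything in the sketch is a hypothetical assumption rather than the patent's implementation: the registry layout, the keyword matcher standing in for the deep-learning semantic understanding of S102, the stub standing in for the large language model, and all function names.

```python
"""Minimal sketch of the S101-S105 flow; all names and layouts are
illustrative assumptions, not the patent's implementation."""
import json

# S102 dependency: a registry of pre-registered alternative plug-ins.
PLUGIN_REGISTRY = {
    "XX01": {"description": "query weather",
             "inputs": [{"name": "time", "type": "time"},
                        {"name": "location", "type": "string"}]},
}

def match_plugin(content: str) -> str:
    # S102: semantic understanding; a keyword match stands in for a
    # deep learning intent model here.
    return "XX01" if "weather" in content else ""

def build_language_understanding_content(plugin_id: str, content: str) -> str:
    # S103: splice the hit plug-in's related information with the user input.
    info = PLUGIN_REGISTRY[plugin_id]
    return f"plug-in: {json.dumps(info)}\nuser input: {content}"

def llm_extract_parameters(prompt: str) -> dict:
    # S104: stand-in for the large language model; a real system would send
    # the prompt to an LLM and parse its reply in an agreed format.
    return {"time": "today", "location": "local"}

def call_plugin(plugin_id: str, params: dict) -> str:
    # S105: invoke the hit plug-in; a real system would call out over the
    # plug-in calling protocol (for example an HTTP API).
    return f"{params['location']} weather {params['time']}: sunny"

content = "What's the weather like today?"                   # S101
hit = match_plugin(content)                                  # S102
prompt = build_language_understanding_content(hit, content)  # S103
params = llm_extract_parameters(prompt)                      # S104
print(call_plugin(hit, params))                              # S105
```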
The large language model of the embodiments of the present disclosure requires no specific training for different user intents or different application scenarios; as long as it has language understanding capability and the ability to generate plug-in input information, it can realize the calling of the plug-in matching the user's intent, which improves the universality of the large language model.
Existing large language models have several shortcomings. For example, limited by the timeliness of the pre-training dataset, a large language model cannot answer correctly about objective facts that arose after its pre-training; nor can it directly complete functions that depend on external resources, such as booking tickets or ordering meals. At present, plug-in extension of a large language model can be achieved through supervised learning, but training and tuning the large language model for every plug-in is costly and usually requires instruction datasets for all plug-ins, making it difficult to extend plug-ins flexibly and promptly. Meanwhile, tuning for plug-in triggering affects the large language model itself, so customizing the scope and triggering of plug-ins for different users is also difficult. Moreover, a specific large language model acquires the corresponding capability through tuning, so even the same plug-in must be tuned again for another large language model; such a scheme does not transfer to other large language models and lacks universality.
In the technical scheme of the present disclosure, semantic understanding is performed on the natural language content to determine the hit plug-in corresponding to the user's intent; language understanding content is determined from the natural language content and the related information of the hit plug-in and sent to the large language model; the large language model understands the natural language content and extracts the parameter values of the input parameters required for the hit plug-in to run; and the parameter values fed back by the large language model are used to call the hit plug-in and obtain a call result. By acquiring external resources on top of the large language model, the scheme combines the model's understanding capability with external resources, breaking through the timeliness limits of the large language model and the resource limits of the user. It increases the application scenarios of language understanding and generation for a large language model system, improves the prediction accuracy of language understanding and generation tasks, allows various plug-ins to be extended in real time, and increases the diversity and flexibility of extended functions as well as the universality of plug-ins. At the same time, the large language model need not be trained for a particular scenario, which improves its universality.
FIG. 2 is a flowchart of another large model plug-in calling method according to an embodiment of the present disclosure, further optimized and expanded on the basis of the above technical scheme, and combinable with the optional implementations described above. Determining the language understanding content according to the hit plug-in and the natural language content is embodied as: acquiring a prompt template corresponding to the hit plug-in, where the prompt template includes the input parameters of the hit plug-in; and combining the natural language content with the prompt template corresponding to the hit plug-in to obtain the language understanding content.
S201, acquiring natural language content.
S202, performing semantic understanding on the natural language content, and determining the hit plug-in hit by the natural language content.
S203, acquiring a prompt template corresponding to the hit plug-in, where the prompt template includes the input parameters of the hit plug-in.
A prompt template (prompt) is a piece of text input to the model that contains keywords and context for the information or question the user wants, so that the model better understands the user's intent and gives a more accurate reply. The prompt template corresponding to a hit plug-in is text that prompts for the content, type, and so on of the hit plug-in's input parameters within the natural language content; it is combined with the natural language content so that the input parameters of the hit plug-in can be identified by the large language model. The prompt template includes the input parameters of the hit plug-in, specifically their description information, which may include at least one of the following: the name of the input parameter, a functional description of the input parameter, and the type of the input parameter.
A plug-in can be registered; during registration, the related information of the plug-in in the registration request is acquired, and the prompt template corresponding to the plug-in is generated from that related information. The prompt template corresponding to the hit plug-in is then looked up among the stored prompt templates corresponding to the plug-ins.
Following the earlier example, the prompt template of the weather query plug-in may be: plug-in id: XX01; plug-in description: query weather; plug-in input parameter: name: time, type: time; plug-in input parameter: name: location, type: string.
In addition, the prompt template may further include input and output examples for the large language model, for example, input: the weather of YY on XX year XX month XX day; output: plug-in input parameter: name: time, value: XX year XX month XX day, type: time; plug-in input parameter: name: location, value: YY, type: string.
Correspondingly, the full prompt template is: plug-in id: XX01; plug-in description: query weather; plug-in input parameter: name: time, type: time; plug-in input parameter: name: location, type: string; input: the weather of YY on XX year XX month XX day; output: plug-in input parameter: name: time, value: XX year XX month XX day, type: time; plug-in input parameter: name: location, value: YY, type: string.
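For concreteness, such a prompt template could be rendered as a Python string as in the sketch below. The field names, the layout, and the natural_language_content placeholder are illustrative assumptions, not the patent's literal template text.

```python
# Illustrative rendering of the weather plug-in's prompt template; field names
# and layout are assumptions, not the patent's literal template.
WEATHER_PROMPT_TEMPLATE = """\
plug-in id: XX01
plug-in description: query weather
input parameter: name=time, type=time
input parameter: name=location, type=string
example input: weather of YY on XX year XX month XX day
example output: time=XX year XX month XX day (type: time); location=YY (type: string)
user input: {natural_language_content}
"""
```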
S204, combining the natural language content with the prompt template corresponding to the hit plug-in to obtain language understanding content.
The natural language content may be spliced directly onto the prompt template corresponding to the hit plug-in, or added at the corresponding position within the prompt template. Alternatively, keywords may be extracted from the natural language content and the extracted keywords combined with the prompt template. Other combinations are possible; the manner of combination is not specifically limited.
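Continuing the template sketch above, the combination step might look like the following; since the embodiment leaves the manner of combination open, this is only one possibility.

```python
# One possible combination (S204): fill the user input into the template's
# placeholder slot; direct concatenation would work equally well.
language_understanding_content = WEATHER_PROMPT_TEMPLATE.format(
    natural_language_content="What's the weather like in YY tomorrow?"
)
print(language_understanding_content)
```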
Once the prompt template is set, the large language model only needs the capability to understand the natural language content and, based on the prompt template, generate the parameters the template specifies. No targeted training is therefore needed for the input parameters of different plug-ins or for the natural language content of different scenarios, and the large language model can still understand the natural language content and generate the input parameters corresponding to the plug-in.
S205, sending the language understanding content to a large language model to obtain the parameter values of the input parameters of the hit plug-in.
S206, calling the hit plug-in according to the parameter values of its input parameters to obtain a call result.
Optionally, the large model plug-in calling method further includes: acquiring description information of an alternative plug-in; extracting the input parameters of the alternative plug-in from the description information; and combining the input parameters of the alternative plug-in with a generic plug-in template to obtain the prompt template corresponding to the alternative plug-in.
A hit plug-in may be registered before it is used. A registration request for an alternative plug-in may be received, and the description information of the alternative plug-in obtained from it. The description information may include at least one of the following: the identifier (id), type, function, and input parameter information of the alternative plug-in, where the input parameter information may include at least one of the following: parameter name, parameter description, parameter type, and so on.
The generic plug-in template is combined with input parameters to form a prompt template that prompts for generating those parameters. It may contain preset slots, with different input parameters placed in different slots, so that different alternative plug-ins combined with the generic template yield different prompt templates. An alternative plug-in may be configured with at least one input parameter, and all configured input parameters are combined with the generic template to generate the plug-in's prompt template. Specifically, the input parameter information, such as the parameter name, parameter description, and parameter type, is extracted from the description information of the alternative plug-in and added at the corresponding positions in the generic template; for example, the parameter name is placed after the parameter-name field as that field's value. Placing the parameter name, description, and type of each input parameter at its corresponding position in the generic template yields the prompt template corresponding to the alternative plug-in, which is stored in mapping with the identifier, type, and function of the alternative plug-in.
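A sketch of how registration might generate a plug-in's prompt template from a generic template with slots follows. The slot syntax, field names, and registry layout are assumptions made for illustration.

```python
# Hypothetical generic plug-in template: {parameter_lines} is the slot for the
# input parameters; {{natural_language_content}} survives as a slot to be
# filled later with the user input.
GENERIC_PLUGIN_TEMPLATE = (
    "plug-in id: {plugin_id}\n"
    "plug-in description: {description}\n"
    "{parameter_lines}"
    "user input: {{natural_language_content}}\n"
)

PROMPT_TEMPLATES: dict[str, str] = {}  # plug-in id -> prompt template

def register_plugin(description_info: dict) -> None:
    # Extract the input parameter information from the registration request
    # and place it at the corresponding positions in the generic template.
    parameter_lines = "".join(
        f"input parameter: name={p['name']}, type={p['type']}, "
        f"description={p.get('description', '')}\n"
        for p in description_info["inputs"]
    )
    PROMPT_TEMPLATES[description_info["id"]] = GENERIC_PLUGIN_TEMPLATE.format(
        plugin_id=description_info["id"],
        description=description_info["description"],
        parameter_lines=parameter_lines,
    )

register_plugin({
    "id": "XX01",
    "description": "query weather",
    "inputs": [{"name": "time", "type": "time", "description": "query date"},
               {"name": "location", "type": "string", "description": "place"}],
})
```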
Obtaining the description information of an alternative plug-in when it is registered, extracting its input parameters, and combining them with the generic plug-in template to form the prompt template corresponding to the alternative plug-in yields a prompt template suited to that plug-in. This provides a standard plug-in registration protocol and access mechanism, reduces the cost of making plug-ins compatible with the large language model, improves the efficiency of prompt template generation, speeds up the tuning of the large language model, and greatly reduces the development and maintenance costs of plug-in integration and plug-in scheduling.
Optionally, the large model plug-in calling method further includes: sending the call result to the large language model to obtain call reply content; and feeding back the call result and the call reply content.
The call reply content is a natural language description of the call result, feeding the user's call result back in dialogue form. For example, if the call result is that the call succeeded, the call reply content may be: you have successfully performed the XX operation. Feeding back the call reply content together with the call result enriches the reply content and lets the user check whether the call result is the intended function, so that it can be corrected in time.
The large language model can understand the semantics of the call result and generate a language description of it as the call reply content.
Illustratively, the call result of booking a ticket is that the booking succeeded. The call reply content is: you have successfully booked 1 ticket from XX to XX; the departure time is XX, the flight number is XX, the seat number is XX, and boarding is at gate XX. Here XX is only a placeholder, and different occurrences of XX may represent the same or different content.
Performing semantic understanding on the call result through the large language model to generate the call reply content, and feeding back the call result together with the call reply content, increases the richness of the reply content and presents the call result more completely and comprehensively.
Optionally, sending the call result to the large language model to obtain the call reply content includes: acquiring a reply template corresponding to the hit plug-in; combining the call result with the reply template corresponding to the hit plug-in to obtain reply understanding content; and sending the reply understanding content to the large language model to obtain the call reply content.
The reply template is a prompt template used, in combination with the call result, to generate the call reply content. Combining the call result with the reply template means, for example, appending the call result to the end of the reply template to obtain the reply understanding content, which is an input to the large language model. In addition, the reply template can be spliced with context, which at this point may include the parameter values of the hit plug-in's input parameters, the natural language content, historical multi-turn dialogue content, and so on. The reply templates of different hit plug-ins may be the same or different.
Alternatively, an alternative plug-in for replying can be registered, in which case the reply template is the prompt template corresponding to that alternative plug-in, and the reply template, the call result, and the context are spliced to obtain the reply understanding content. The reply understanding content is sent to the large language model, which feeds back the parameter values of the reply plug-in's input parameters; the reply plug-in is then called based on those parameter values to obtain the call reply content.
Configuring a reply template, combining the call result with the reply template to generate the reply understanding content, and sending the reply understanding content to the large language model to obtain the call reply content allows dialogue reply content to be generated flexibly for the call result and increases the richness of the reply content.
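A minimal sketch of reply generation, assuming the simple append-to-template variant described above; the template wording and the stand-in for the large language model are assumptions.

```python
# Hypothetical reply template; the call result is appended to its end to form
# the reply understanding content.
REPLY_TEMPLATE = (
    "Describe the following plug-in call result to the user in one "
    "friendly sentence.\ncall result: "
)

def generate_call_reply(call_result: str, llm_complete) -> str:
    reply_understanding_content = REPLY_TEMPLATE + call_result
    return llm_complete(reply_understanding_content)

# Usage with any text-completion callable standing in for the LLM:
reply = generate_call_reply(
    "reservation for 2 guests at hotel A completed",
    llm_complete=lambda prompt: ("You have successfully reserved a room "
                                 "for 2 guests at hotel A."),
)
print(reply)
```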
In the technical scheme of the present disclosure, plug-ins are adapted through prompt templates, so that the large language model can quickly understand the natural language content according to the prompt template and generate the input parameters corresponding to the plug-in. This removes the training step in which the large language model would otherwise have to learn, for each plug-in, to generate the corresponding input parameters when understanding the semantics of different scenarios. It makes the tuning process of the large language model fast, reduces the labor cost of tuning and updating the model, allows the corresponding prompt template to be obtained quickly for a plug-in, improves the timeliness of updates to the large language model, and improves the compatibility, generality, and universality between the large language model and plug-ins.
FIG. 3 is a flowchart of another large model plug-in calling method according to an embodiment of the present disclosure, further optimized and expanded on the basis of the above technical scheme, and combinable with the optional implementations described above. Obtaining the parameter values of the input parameters of the hit plug-in is embodied as: when it is determined that collection of the hit plug-in's current input parameters is incomplete, acquiring dialogue content fed back by the large language model and feeding it back to the user, to prompt the user to provide the parameter values of the hit plug-in's input parameters; acquiring new natural language content provided by the user; determining new language understanding content based on the new natural language content and sending the new language understanding content to the large language model; and when it is determined that collection of the hit plug-in's current input parameters is complete, acquiring the parameter values of the hit plug-in's input parameters fed back by the large language model.
S301, acquiring natural language content.
S302, performing semantic understanding on the natural language content, and determining the hit plug-in hit by the natural language content.
S303, determining language understanding content according to the hit plug-in and the natural language content.
S304, when it is determined that collection of the hit plug-in's current input parameters is incomplete, acquiring dialogue content fed back by the large language model and feeding it back to the user, to prompt the user to provide the parameter values of the hit plug-in's input parameters.
Collection of the input parameters is incomplete when the parameter value of at least one of the input parameters required for the hit plug-in to execute its task is null. The dialogue content is a request for the missing input parameters; it is provided to the user to prompt the user to supply the missing parameter values. The current input parameters of the hit plug-in are the parameter values of the hit plug-in's input parameters collected up to the current moment. The large language model detects whether collection of the current input parameters is incomplete and, when it is, generates dialogue content for the input parameters whose values are null, prompting the user to feed back those parameter values. The dialogue content is provided to the user, and the user's replies form a multi-turn dialogue.
Illustratively, the hit plug-in is a weather query plug-in and the time input parameter is missing; the generated dialogue content may be: which day's weather would you like to know?
When the large language model detects several missing input parameters, it may generate dialogue content for only one of them (which one may be chosen at random or according to a preset priority), or it may generate dialogue content for several input parameters, prompting the user to provide their parameter values within the same dialogue turn.
S305, acquiring new natural language content provided by the user.
The new natural language content may refer to natural language content provided by the user for the dialog content.
S306, determining new language understanding content based on the new natural language content, and sending the new language understanding content to the large language model.
The processes of S301 to S303 are repeated for the new natural language content to obtain the new language understanding content. The natural language content acquired in S301 is the current natural language content, and S305 acquires new natural language content relative to it. In a multi-turn dialogue the new natural language content actually shares the same intent as the current natural language content, so the hit plug-in corresponding to the new natural language content does not change: it is the same as the hit plug-in of the current natural language content.
The large language model processes the new language understanding content and detects whether collection of the same hit plug-in's input parameters is still incomplete. At this point the new natural language content is taken as the current natural language content and the new language understanding content as the current language understanding content, based on which the large language model checks the current input parameters. If collection of the hit plug-in's current input parameters is still determined to be incomplete, the large language model generates reply content for the missing input parameters to prompt the user to continue providing their parameter values; the new natural language content the user provides for that reply content is acquired, and the multi-turn dialogue continues until collection of the hit plug-in's current input parameters is complete.
The new language understanding content may be generated by combining the description information of the input parameters in the hit plug-in's description information with the new natural language content, or by appending the new natural language content to the current language understanding content.
Optionally, determining the new language understanding content based on the new natural language content includes: adding the new natural language content to the language understanding content to obtain the new language understanding content.
The language understanding content includes the current natural language content and the description information of the hit plug-in's input parameters; adding the new natural language content to it yields new language understanding content that accordingly includes the current natural language content, the new natural language content, and the description information of the hit plug-in's input parameters.
Adding the new natural language content to the language understanding content to obtain the new language understanding content effectively supplements the current natural language content with the dialogue history. This enriches the new language understanding content, helps the large language model understand the semantics better and more comprehensively, improves the prediction accuracy of the generated parameter values, and improves the accuracy of plug-in calling.
S307, when it is determined that collection of the hit plug-in's current input parameters is complete, acquiring the parameter values of the hit plug-in's input parameters fed back by the large language model.
Collection of the input parameters is complete when every input parameter required for the hit plug-in to execute its task has been assigned a non-null parameter value of the correct data type. At this point the large language model feeds back the parameter data of the hit plug-in, that is, the parameter values of the hit plug-in's input parameters, covering all input parameters required for the task. The parameter data may further include the data types and names of the input parameters, and the data structure of the parameter data fed back by the large language model can be agreed in advance.
Illustratively, the large language model feeds back content in JSON format such as: plug-in id: XX; input parameters: {name: XX, value: XX, type: XX}, {name: XX, value: XX, type: XX}, {name: XX, value: XX, type: XX}.
While the parameter values of the hit plug-in's input parameters are being collected, whether collection is complete at the current moment is checked. If it is complete, the hit plug-in is called; if not, the dialogue content is fed back, new natural language content from the human-machine dialogue is obtained, collection of the parameter values continues, and completeness is checked again, repeating in this way until collection of the parameter values of the hit plug-in's input parameters is complete.
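The collection loop of S304 to S307 might be sketched as follows; ask_user and llm_fill are hypothetical stand-ins for the client dialogue and the large language model.

```python
# Sketch of the S304-S307 collection loop: loop until every required input
# parameter of the hit plug-in has a non-null value.
def collect_parameters(prompt: str, required: list[str], llm_fill, ask_user) -> dict:
    collected: dict[str, str] = {}
    while True:
        collected.update(llm_fill(prompt))       # S306: LLM extracts values
        missing = [name for name in required if not collected.get(name)]
        if not missing:                          # S307: collection complete
            return collected
        # S304: dialogue content prompting the user for a missing parameter
        answer = ask_user(f"Please provide a value for '{missing[0]}'.")
        prompt += f"\nuser input: {answer}"      # S305/S306: append new content

demo = collect_parameters(
    "plug-in: weather query\nuser input: what's the weather?",
    required=["time", "location"],
    llm_fill=lambda p: {"location": "local",
                        "time": "today" if "today" in p else ""},
    ask_user=lambda question: "today please",
)
print(demo)  # {'location': 'local', 'time': 'today'}
```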
S308, calling the hit plug-in according to the parameter values of its input parameters to obtain a call result.
Optionally, acquiring the parameter values of the hit plug-in's input parameters fed back by the large language model includes: checking the input parameters and parameter values fed back by the large language model against the description information of the hit plug-in; in response to a verification failure event, sending the language understanding content to the large language model to obtain new input parameters and new parameter values, and checking the input parameters and parameter values fed back by the large language model again; and in response to a verification success event, obtaining the parameter values of the hit plug-in's input parameters.
The description information of the hit plug-in is looked up among the pre-registered description information of the alternative plug-ins. Checking the input parameters and parameter values means verifying whether all input parameters of the hit plug-in have been collected, whether the data types of the parameter values are correct, and so on. The description information of an alternative plug-in includes the input parameters and their parameter types; the input parameters in the description information are compared with the input parameters fed back by the large language model, and the parameter type of each input parameter in the description information is compared with the data type of the same input parameter fed back by the model. Verification succeeds when the input parameters in the description information are consistent with those fed back by the large language model and all parameter types match the corresponding data types; it fails when any input parameter is missing or any data type is inconsistent.
In response to a verification failure event, the language understanding content can be sent to the large language model again so that it regenerates the parameter values; the new input parameters or new parameter values it feeds back are then checked in the same way, and this repeats until verification succeeds.
In response to a verification success event, the input parameters and parameter values fed back by the large language model are determined as the parameter values of the hit plug-in's input parameters and sent to the hit plug-in, which is called to obtain the call result.
It should be noted that when the number of verification failures for the same hit plug-in exceeds a preset threshold, an operations user can be warned and the anomaly flagged, or a preset exception-handling plug-in can be called to generate exception reply content, which is fed back to the user.
Checking the input parameters and parameter values fed back by the large language model, calling the hit plug-in based on parameters that pass verification, and, when verification fails, invoking the large language model again to understand the language understanding content and regenerate, adds a feedback check on the large language model. This detects anomalies in time, improves the accuracy of plug-in parameters, and improves the stability of the plug-in system.
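A sketch of the verification step, assuming the payload layout of the JSON example above and a plug-in description holding the expected parameter names and types; both layouts are assumptions.

```python
# Expected input parameters, taken from the hit plug-in's description info.
EXPECTED = {"time": "time", "location": "string"}

def verify(parameter_data: dict, expected: dict = EXPECTED) -> bool:
    # Check that every expected input parameter is present and that its
    # fed-back data type matches the registered parameter type.
    fed_back = {p["name"]: p["type"] for p in parameter_data.get("inputs", [])}
    return all(fed_back.get(name) == ptype for name, ptype in expected.items())

payload = {"plugin_id": "XX01",
           "inputs": [{"name": "time", "value": "today", "type": "time"},
                      {"name": "location", "value": "local", "type": "string"}]}
assert verify(payload)  # on failure, the language understanding content
                        # would be re-sent to the large language model
```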
Optionally, performing semantic understanding on the natural language content and determining the hit plug-in hit by the natural language content includes: acquiring the description information of the pre-registered alternative plug-ins; and determining the hit plug-in hit by the natural language content according to the description information of each alternative plug-in and the natural language content.
The description information of an alternative plug-in is used to determine the functionality of that plug-in. The natural language content indicates the user's intent, specifically the functionality the user needs to realize. The description information of the alternative plug-ins is matched against the natural language content to determine the alternative plug-in corresponding to the natural language content.
By matching the description information registered by each plug-in against the natural language content and detecting plug-in hits in natural language, any plug-in can be extended, which improves the flexibility and diversity of plug-in extension and the universality of plug-ins.
Optionally, determining the hit plug-in hit by the natural language content according to the description information of each alternative plug-in and the natural language content includes: inputting the natural language content into a pre-trained intent recognition model to obtain the identification information of the hit plug-in output by the model. The intent recognition model determines the identification information corresponding to the natural language content from the natural language content, the description information of each pre-registered alternative plug-in, and the identification information registered for each alternative plug-in.
The intent recognition model may be a deep learning model. Its inputs are the natural language content and a plug-in list, where the plug-in list contains the description information of the pre-registered alternative plug-ins and the identification information registered for each of them. The intent recognition model performs semantic understanding on the natural language content, looks up, among the pre-registered alternative plug-ins stored in the plug-in list, the alternative plug-in that the intent of the natural language content points to, and outputs the identification information of that plug-in; the alternative plug-in corresponding to the identification information is determined as the hit plug-in.
Performing semantic understanding on the natural language content through the intent recognition model, determining the hit plug-in corresponding to the natural language content among the pre-registered alternative plug-ins, and outputting the identification information of the hit plug-in improves the efficiency and accuracy of plug-in matching.
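For illustration, a toy stand-in for the intent recognition model over the plug-in list is sketched below; a real system would use a pre-trained deep learning model, and the cosine similarity over bag-of-words vectors used here is purely an assumption for the sketch.

```python
# Toy intent recognition over the plug-in list (identification info mapped to
# description info); a deep learning model is replaced by cosine similarity.
from collections import Counter
import math

PLUGIN_LIST = {
    "XX01": "query weather for a time and location",
    "XX02": "reserve a hotel room for a number of guests",
}

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recognize_intent(natural_language_content: str) -> str:
    # Return the identification information of the hit plug-in.
    query = _vec(natural_language_content)
    return max(PLUGIN_LIST, key=lambda pid: _cosine(query, _vec(PLUGIN_LIST[pid])))

print(recognize_intent("please reserve a room at hotel A for 2 guests"))  # XX02
```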
In the technical scheme of the present disclosure, the large language model detects whether collection of the hit plug-in's current input parameters is complete and generates dialogue content when input parameters are missing; the dialogue content is provided to the user to prompt the user to supply the parameter values of the missing input parameters. When collection of the current input parameters is complete, the parameter values of the hit plug-in's input parameters fed back by the large language model are acquired, and the hit plug-in is called based on the complete input parameter values. This improves the accuracy of plug-in calling, allows information on missing input parameters to be obtained quickly, and speeds up plug-in calling.
In a specific scenario, performing semantic understanding on the natural language content and determining whether the natural language content hits a plug-in may specifically include:
performing semantic understanding on the natural language content, detecting whether the natural language content hits a plug-in, and obtaining a plug-in hit result;
when the plug-in hit result is a hit and the first plug-in it points to is the same as the second plug-in corresponding to the current session understanding task, determining the current session understanding task as the session understanding task to be executed, and determining the second plug-in as the third plug-in corresponding to the session understanding task to be executed;
when the plug-in hit result is a hit and the first plug-in is the same as the fourth plug-in corresponding to a historical session understanding task, determining the historical session understanding task as the session understanding task to be executed, and determining the fourth plug-in as the third plug-in corresponding to the session understanding task to be executed;
and when the plug-in hit result is a hit but the first plug-in differs from both the second plug-in and the fourth plug-in, establishing a new session understanding task, determining the new session understanding task as the session understanding task to be executed, and determining the first plug-in as the third plug-in corresponding to the session understanding task to be executed.
Establishing a new session understanding task includes: acquiring the at least one first plug-in pointed to by the plug-in hit result; generating, for each first plug-in pointed to by the plug-in hit result, a corresponding plug-in task and establishing the correspondence between the plug-in task and that first plug-in; ordering all plug-in tasks corresponding to the plug-in hit result; and determining the plug-in tasks corresponding to the first plug-ins pointed to by the plug-in hit result as the new session understanding task.
A session understanding task is a task of providing the natural language content to the large language model, acquiring the input parameters, fed back by the large language model, of the plug-in hit by the natural language content, calling the hit plug-in to obtain a call result, and feeding the call result back to the user. The number of first plug-ins may be a non-negative integer. The plug-in hit result is the detection result of whether the natural language content hits a plug-in, together with the related information of the hit first plug-in; it is either a hit result or a miss result. When the plug-in hit result is a hit, the hit plug-in is determined as the first plug-in, and if a current session understanding task exists, the plug-in corresponding to it is determined as the second plug-in. The plug-in hit result may also include the first plug-in corresponding to the hit, the number of first plug-ins, whether the first plug-in is consistent with the second plug-in, and so on. The plug-in hit result is used to determine which task the session understanding task to be executed is, for example whether it is the current session understanding task or a new session understanding task. The current session understanding task is the session understanding task in the current executing state, and the session understanding task to be executed is the session understanding task that currently has the highest priority and needs to be executed immediately.
Different session understanding tasks correspond to different third plug-ins. For example, when the session understanding task to be executed is a new session understanding task, the first plug-in is determined as the third plug-in; when the session understanding task to be executed is an existing session understanding task, the plug-in corresponding to the existing task may be determined as the third plug-in, or the union of that plug-in and the first plug-in may be determined as the third plug-in.
Furthermore, when the session understanding task to be executed is the current session understanding task, the third plug-in is the second plug-in, and since the first and second plug-ins are the same, the third plug-in includes the first plug-in. When the session understanding task to be executed is a historical session understanding task, the third plug-in is the fourth plug-in, and since the first and fourth plug-ins are the same, the third plug-in again includes the first plug-in. When the session understanding task to be executed is a new session understanding task, the third plug-in is the first plug-in directly. In every case, therefore, the session understanding task to be executed serves to call the first plug-in.
In practice, the natural language content may be input by the user during a multi-turn dialogue. For example, in the i-th dialogue turn, the initial session understanding task is the current session understanding task, that is, the session understanding task to be executed as determined in turn i-1. Based on the natural language content the user inputs in turn i, the session understanding task to be executed is determined for turn i and serves as the initial session understanding task of turn i+1. The task to be executed in turn i may be the same as or different from the current task of turn i: if the same, the current session understanding task simply remains in the executing state; if different, the current session understanding task is replaced by the task to be executed, that is, the task to be executed is set to the executing state. In practice, session understanding tasks are placed in a task stack, and the task at the top of the stack is the one taken for execution, that is, the session understanding task in the current executing state. After its context is saved, the current session understanding task is placed just below the stack top, and the session understanding task to be executed is placed at the stack top, thereby switching which task is in the executing state. A historical session understanding task is one generated by a historical session that hit the fourth plug-in but is not in the current executing state; typically a historical session differs in intent from the current session and hits a different plug-in, and historical session understanding tasks sit at non-top positions in the task stack.
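The task stack described above might be sketched as follows; the SessionTask fields and the switching rule are illustrative assumptions based on this description.

```python
# Sketch of the session understanding task stack: the stack top is the task
# in the current executing state.
from dataclasses import dataclass, field

@dataclass
class SessionTask:
    plugin_id: str                        # the task's third plug-in
    context: dict = field(default_factory=dict)

task_stack: list[SessionTask] = []

def on_plugin_hit(first_plugin_id: str) -> SessionTask:
    # Decide the session understanding task to be executed for a hit.
    if task_stack and task_stack[-1].plugin_id == first_plugin_id:
        return task_stack[-1]             # same as the current task: keep it
    for i, task in enumerate(task_stack[:-1]):
        if task.plugin_id == first_plugin_id:
            task_stack.append(task_stack.pop(i))
            return task_stack[-1]         # historical task moved to stack top
    task_stack.append(SessionTask(plugin_id=first_plugin_id))
    return task_stack[-1]                 # new session understanding task

on_plugin_hit("XX01")  # new task for the weather plug-in
on_plugin_hit("XX02")  # new task for the hotel plug-in; XX01 now below top
on_plugin_hit("XX01")  # historical XX01 task resumed at the stack top
```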
Selecting a plug-in task to be executed currently from session understanding tasks to be executed; determining the language understanding content of the plug-in task according to the third plug-in of the plug-in task and the natural language content of the plug-in task; the language understanding content determining method is the same as that of the embodiment of the disclosure. The large model plug-in calling method provided by the embodiment of the disclosure can be understood as a plug-in task executing process. That is, the hit plug-in the foregoing embodiment may refer to the third plug-in the present example. From the foregoing, it can be seen that, in any case of hit, the session understanding task to be performed is essentially used to invoke the first plug-in, and thus, the hit plug-in the foregoing embodiment may also refer to the first plug-in this example.
The language understanding content of the currently executed plug-in task is sent to the large language model to obtain the input parameters of the third plug-in corresponding to that task; the third plug-in is then called with those input parameters to obtain the calling result of the currently executed plug-in task.
In addition, if collection of the parameter values of the input parameters of the third plug-in is complete, the third plug-in is called. If a parameter value of an input parameter of the third plug-in is missing, the large language model feeds back dialogue content, which is forwarded to the user, and new natural language content is received from the user. Based on the new natural language content, the session understanding task to be executed is judged again. If it is determined that the session continues, the new natural language content amounts to collecting parameter values for the input parameters of the third plug-in; if it is determined that the session has switched, the new natural language content amounts to parameter values for the input parameters of some other plug-in, that is, the dialogue has switched to another session, and the session understanding task to be executed is switched accordingly.
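This loop can be sketched as follows, assuming hypothetical helper callables (call_llm, ask_user, detect_hit) rather than any concrete API:

    def collect_parameters(task, call_llm, ask_user, detect_hit):
        # Loop until the large language model returns a callback carrying all
        # parameter values, or the session switches to another plug-in.
        while True:
            reply = call_llm(task.language_understanding_content)
            if "callback" in reply:
                return reply["callback"]["parameters"]   # collection complete
            # a parameter value is missing: forward the model's question to the user
            new_content = ask_user(reply["dialogue_content"])
            if detect_hit(new_content) == task.plugin_id:
                # session continues: the new content supplies parameter values
                task.language_understanding_content += "\n" + new_content
            else:
                return None   # session switched; the task to be executed changes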
Optionally, the large model plugin invoking method further includes: and when the natural language content is an intervention command, adjusting the current session understanding task according to the intervention command.
The intervention command is a special command. The large model plug-in calling device of the embodiments of the disclosure performs neither semantic understanding nor plug-in hit detection on it, generates no corresponding session understanding task, and does not send it to the large language model for semantic understanding or generation. Instead, the intervention command is executed directly to adjust session understanding tasks, in particular the current session understanding task in the currently-executing state. An intervention command may reset or delete memory and state (e.g., the execution state) upon system failure, thereby deleting or adjusting session understanding tasks. Intervention commands can be preset: the received natural language content is compared with the preset commands; if it matches, the natural language content is determined to be an intervention command; otherwise it is not, and plug-in hit detection proceeds on it as usual.
In fact, the semantic understanding of natural language content can go wrong: a wrong third plug-in may be hit and a wrong session understanding task generated, so the large language model extracts wrong input parameters, the third plug-in is called, and a wrong calling result is obtained. At this point the user may simply start a new session, but may also adjust the current session understanding task through an intervention command, for example by adding a plug-in task, or by modifying or deleting the session understanding task. When all plug-in tasks are deleted, the current session understanding task is deleted. Modifying the hit plug-in correspondingly modifies the plug-in task, and hence the current session understanding task. This enables intervention in the splitting and planning of semantic understanding tasks, allows timely intervention in the execution path and output of a wrongly predicted task, and reduces resource consumption.
In addition, an intervention command can interrupt execution of the current session understanding task, or delete it outright, to cope with system anomalies and crashes caused by that task.
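A minimal sketch of this direct-execution path, with the command names and the list-based task stack assumed purely for illustration:

    INTERVENTION_COMMANDS = {
        "/delete_task": lambda stack: stack.pop() if stack else None,  # drop current task
        "/reset":       lambda stack: stack.clear(),                   # reset system state
    }

    def handle_input(content, stack):
        handler = INTERVENTION_COMMANDS.get(content.strip())
        if handler is not None:
            handler(stack)    # executed directly: no semantic understanding,
            return True       # no plug-in hit detection, no LLM call
        return False          # ordinary natural language content; proceed as usual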
Optionally, the large model plug-in calling method further includes: when the plug-in hit result is a hit and the first plug-in pointed to by the hit result differs from the second plug-in corresponding to the current session understanding task, adding the natural language content of the current session understanding task to the context of the current session understanding task; and storing the current session understanding task, its corresponding second plug-in, and its context in correspondence with one another.
With the session understanding task as the first level, the plug-in task as the second level, the identifier of the second plug-in as the third level (the plug-in task corresponds to the second plug-in), and the context corresponding to the session understanding task as the fourth level, storing the session understanding task, plug-in task, second plug-in, and context in correspondence realizes a multi-layer memory structure for the stored data.
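The four-level correspondence might be laid out as follows; the field names are assumptions chosen for illustration:

    memory = {
        "session_task_001": {                       # level 1: session understanding task
            "plugin_tasks": [                       # level 2: plug-in tasks
                {
                    "plugin_id": "p_001",           # level 3: second plug-in identifier
                    "context": [                    # level 4: context of the session task
                        "I want to book an air ticket",
                    ],
                },
            ],
        },
    }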
Optionally, the large model plug-in calling method further includes: when the session understanding task to be executed is a new session understanding task, determining its language understanding content from the natural language content; when the session understanding task to be executed is not a new session understanding task, acquiring the context of the session understanding task to be executed, and determining its language understanding content from that context together with the natural language content.
Providing the historical context together with the natural language content lets the large language model understand the user's intent more accurately and generate content that better meets the user's needs. For example, after the user has successfully booked an air ticket and then enters "book an air ticket" again, the large language model may reply: "Do you want to place a new order or modify the previous order?" This determines whether the user is calling the first plug-in to modify an order or to create a new order, so that more accurate input parameters can be provided.
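A sketch of how the stored context could be combined with the new natural language content, assuming a prompt template containing {context} and {content} placeholders:

    def build_understanding_content(task, natural_language_content, prompt_template):
        # new session understanding tasks have no context; resumed ones carry history
        history = "\n".join(task.get("context", []))
        return prompt_template.format(context=history, content=natural_language_content)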
FIG. 4 is a scenario diagram of a large model plug-in calling method according to an embodiment of the present disclosure. FIG. 5 is a scenario diagram of the training of the intent recognition model. The embodiment of the disclosure provides a general plug-in system architecture for large language models that implements the large model plug-in calling method, shown in FIG. 4; the training and optimization flow of the intent recognition model is shown in FIG. 5.
As shown in FIG. 4, the plug-in system includes an API module, a scheduling module, an intent recognition module, a task planning module, a multi-layer memory module, a parameter collection module, and a large language model calling module. The functions and specific implementations of these seven modules are as follows:
API module. The user calls the plug-in system through the API module, supplying natural language information, plug-in definition information, plug-in execution result parameter structure information, or system command information as input; the API module outputs either a generated result corresponding to the input, or a callback event for a specific plug-in together with the input parameter structure that plug-in requires. The following json examples illustrate this:
Plug-in system API call request json example
{
  "message": [
    { "function": "user", "natural language content": "I want to book an air ticket" }
  ],
  "plug-ins": [
    { "plug-in identification": "p_001",
      "plug-in description": "an air ticket booking plug-in",
      "parameters": [
        { "input parameter name": "XX", "parameter description": "departure place", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "destination", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "departure time", "parameter type": "time" },
        { "input parameter name": "XX", "parameter description": "number of air tickets", "parameter type": "integer" },
        { "input parameter name": "XX", "parameter description": "price requirement", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "seat requirement", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "identity information", "parameter type": "character" }
      ] },
    { "plug-in identification": "p_002",
      "plug-in description": "a take-out ordering plug-in",
      "parameters": [
        { "input parameter name": "XX", "parameter description": "delivery address", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "take-out store address", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "order time", "parameter type": "time" },
        { "input parameter name": "XX", "parameter description": "order content", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "price requirement", "parameter type": "character" },
        { "input parameter name": "XX", "parameter description": "remark requirement", "parameter type": "character" }
      ] }
  ]
}
In the above example, one message includes the natural language content sent by the user together with the registration information of the air ticket booking plug-in and the take-out ordering plug-in, i.e., the description information of the alternative plug-ins.
Dialogue content json example 1 returned by large language model
{ "message": [ { "function": "multi-round dialogue", "natural language content": "When would you like to depart?" } ] }
The message in this example is dialogue content fed back by the large language model that needs to be provided to the user, so that the user can supply the missing parameter values of the input parameters.
Parameter json examples returned by large language models
{
  "message": [ { "function": "plug-in", "message": "callback" } ],
  "callback information": {
    "plug-in identification": "p_001",
    "parameters": [
      { "input parameter name": "XX", "parameter value": "A", "parameter type": "character" },
      { "input parameter name": "XX", "parameter value": "B", "parameter type": "character" },
      { "input parameter name": "XX", "parameter value": "3:00", "parameter type": "time" },
      { "input parameter name": "XX", "parameter value": 1, "parameter type": "integer" },
      { "input parameter name": "XX", "parameter value": "<1000", "parameter type": "character" },
      { "input parameter name": "XX", "parameter value": "economy class, near the aisle", "parameter type": "character" },
      { "input parameter name": "XX", "parameter value": "100000000", "parameter type": "character" }
    ]
  }
}
The above example shows the parameter values of the input parameters of the air ticket booking plug-in fed back by the large language model. The callback is used to call the hit plug-in p_001, i.e., the air ticket booking plug-in, with these parameter values.
Call reply content json example returned by large language model
{
  "message": [ { "function": "plug-in", "message": "return" } ],
  "return information": {
    "plug-in identification": "p_001",
    "parameters": [
      { "input parameter name": "XX", "parameter description": "return code", "parameter value": "success", "parameter type": "character" },
      { "input parameter name": "XX", "parameter description": "return message", "parameter value": "You have successfully booked 1 ticket from A to B, departure time 3:00, flight number XX, seat number XX; please board from gate XX.", "parameter type": "character" }
    ]
  }
}
The above example shows the result of a successful call to the air ticket booking plug-in and the call reply content fed back by the large language model.
Plug-in system API call request json example 3
{
  "message": [ { "function": "plug-in", "message": "system intervention" } ],
  "intervention command information": {
    "command name": "call plug-in",
    "parameters": [
      { "input parameter name": "XX", "parameter value": "plug-in id", "parameter type": "character" }
    ]
  }
}
Scheduling module. It interfaces with the other modules in the system and exercises overall control.
Intent recognition module. The intent recognition module calls an intent recognition model, a small language model on the order of 100 million parameters whose function is to determine whether an incoming message contains the calling intent for one or more plug-ins. As shown in FIG. 5, the SFT tuning data format is as follows: the intent recognition model v1 is trained with an { instruction, plugin_list } dataset to obtain the intent recognition model v2, where instruction is the input message and plugin_list is the list of plug-ins associated with the message, which may be empty. v1 and v2 denote version numbers of the intent recognition model.
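One SFT tuning record in the { instruction, plugin_list } format might look like this; the exact field spelling and serialization are assumptions:

    sft_record = {
        "instruction": "I want to book an air ticket",  # the input message
        "plugin_list": ["p_001"],                       # plug-ins whose calling intent it contains
    }
    negative_record = {
        "instruction": "Tell me a joke",
        "plugin_list": [],                              # the list may be empty: no plug-in hit
    }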
Task planning module. For an intent that requires multiple plug-ins orchestrated together, the task planning module generates an ordered plug-in execution plan based on the input message and the associated plug-in list.
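As a sketch, with the planner itself assumed as a callable standing in for the task planning model:

    def make_execution_plan(message, plugin_list, planner):
        # planner returns plug-in identifiers in execution order, e.g. ["p_002", "p_001"]
        ordered_ids = planner(message, plugin_list)
        return [{"plugin_id": pid, "status": "pending"} for pid in ordered_ids]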
Multi-layer memory module. It saves the global session memory, the session memory of each plug-in, and the plug-in execution result memory. The global session memory is kept for at most one week, the plug-in session memory for at most 72 hours, and the plug-in execution result memory for at most one week. By preserving dialogue and calling state, the memory module significantly improves the generation quality of the large language model.
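The retention periods above could be encoded as a simple TTL table; the durations come from the text, while the structure is an assumption:

    from datetime import timedelta

    MEMORY_TTL = {
        "global_session":   timedelta(weeks=1),   # global session memory: one week
        "plugin_session":   timedelta(hours=72),  # per-plug-in session memory: 72 hours
        "execution_result": timedelta(weeks=1),   # plug-in execution result memory: one week
    }

    def is_expired(kind, created_at, now):
        return now - created_at > MEMORY_TTL[kind]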
Parameter collection module. It reads a plug-in task from the task stack and, by calling the large language model module, collects the parameters required for the callback of that plug-in task.
Large language model calling module. It is responsible for requesting the external large language model and for receiving and parsing the large language model's generated results.
The plug-in system of the embodiments of the disclosure applies broadly to personal and enterprise scenarios that extend a large language model's capabilities through plug-ins. By combining the plug-in system with a large language model, an individual user can, with an experience similar to an intelligent assistant, satisfy requirements such as meal ordering, ticket booking, scheduling, document question answering, and factual search using natural language. Enterprise scenarios include enterprise knowledge base question answering, meeting summarization, mathematical problem expansion, drawing, code generation and execution, chart generation from natural language, video editing through natural language, and document creation and problem analysis combined with a knowledge base.
A plug-in system application for a typical ticket booking scenario may comprise the following steps. First, the user inputs the message "help me book an air ticket" through the API module and supplies two plug-ins, ticket booking and meal ordering, via the API module's plug-in parameters. On receiving the message, the scheduling module uses the intent recognition module to determine that the intent should be satisfied by the ticket booking plug-in; it therefore pushes onto the task stack a task whose plug-in identifier is the ticket booking plug-in id, updates the two memory levels corresponding to the global session and the plug-in, and appends the message "help me book an air ticket".

Next, the scheduling module calls the parameter collection module to execute the first task in the task stack. According to the input parameter requirements of that task, the parameter collection module calls the large language model module, possibly multiple times, to obtain the specific parameter values of the required input parameters. The large language model conducts a multi-round session with the user through the API module, updating the memory module at the corresponding levels, until all input parameters required by the plug-in are obtained, and then returns to the scheduling module. If the task's parameters are collected successfully, the scheduling module returns a callback message filled with the specific parameter values. The user side receives the callback event through the API module, calls the corresponding plug-in service, and returns the plug-in execution result through the API module. After the scheduling module receives the return message of the plug-in call, it updates the plug-in execution result memory and pops the task off the stack.
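The walkthrough above condenses into the following sketch; every helper passed in (identify_intent, collect, invoke_plugin) is an illustrative assumption, not the system's actual interface:

    def handle_message(message, task_stack, memory, identify_intent, collect, invoke_plugin):
        plugin_id = identify_intent(message)                 # intent recognition module
        if plugin_id is None:
            return None                                      # no plug-in hit
        task = {"plugin_id": plugin_id, "messages": [message]}
        task_stack.append(task)                              # push new plug-in task
        memory.setdefault(plugin_id, []).append(message)     # update two memory levels
        params = collect(task)                               # multi-round parameter collection
        result = invoke_plugin(plugin_id, params)            # user-side callback execution
        memory.setdefault("results", []).append(result)      # execution result memory
        task_stack.pop()                                     # task leaves the stack
        return result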
The technical solution of the disclosure can integrate third-party open-source or closed-source large language models with plug-ins; it greatly reduces developers' costs for plug-in integration, plug-in scheduling, memory maintenance, and the like; it provides a standard plug-in registration protocol and a standard access mechanism; it provides an SDK (Software Development Kit) and APIs for easy integration into users' own applications; and, based on the standardized plug-in registration protocol, the platform side can further operate a plug-in ecosystem. The plug-in system depends only on the understanding and generation capabilities of the large language model, so it adapts broadly to closed-source and open-source large language models and improves model universality; its general input/output parameter interface design connects it broadly to all kinds of plug-in services and improves plug-in universality. The plug-in system integrates a prompt-based in-context learning template, a dedicated intent recognition model, and a dedicated task planning model, so the accuracy of plug-in calling intent recognition can be flexibly optimized in multiple ways; the analysis, decomposition, and execution planning of tasks are greatly improved, and the prediction behavior is flexible and adjustable. The plug-in system integrates a per-plug-in multi-round dialogue memory structure, a global dialogue memory structure, and a parameter collection memory structure; these memory structures supply sufficient context input to the large language model and greatly improve its understanding accuracy. The plug-in system integrates plug-in task management, making plug-in task execution easy to track and manage. The plug-in system has built-in system-level intervention instructions, including entering and/or exiting a specific plug-in, clearing a specific memory structure, resetting the plug-in system state, and the like; these instructions allow plug-in services to be executed efficiently in scenarios where plug-in calls are event-triggered and their results are returned as input to the large language model.
FIG. 6 is a structural diagram of a large model plug-in calling apparatus according to an embodiment of the present disclosure, applicable to extending a large language model with plug-ins. The apparatus is implemented in software and/or hardware and is specifically configured in an electronic device with a certain data computing capability. The modules of the large model plug-in calling apparatus shown in FIG. 6 may differ from those shown in FIG. 4 while implementing the same overall functionality; the modules in FIG. 4 and FIG. 6 are only examples, and splitting and recombining the functions of the modules shown in FIG. 4 or FIG. 6 into new modules falls within the scope of the disclosure.
The large model plug-in calling apparatus 600 shown in FIG. 6 comprises: a natural language content acquisition module 601, a plug-in matching module 602, an understanding content determination module 603, an input parameter detection module 604, and a plug-in calling module 605. Wherein:
a natural language content acquisition module 601, configured to acquire natural language content;
the plug-in matching module 602 is configured to perform semantic understanding on the natural language content, and determine a hit plug-in that hits the natural language content;
an understanding content determining module 603, configured to determine language understanding content according to the hit plugin and the natural language content;
An input parameter detection module 604, configured to send the language understanding content to a large language model, to obtain a parameter value of an input parameter of the hit plug-in;
and the plug-in calling module 605 is used for calling the hit plug-in according to the parameter value of the input parameter of the hit plug-in to obtain a calling result.
According to this technical scheme, semantic understanding of the natural language content determines the hit plug-in corresponding to the user's intent; the language understanding content is determined from the natural language content and the related information of the hit plug-in and sent to the large language model, which understands the natural language content and extracts the parameter values of the input parameters required to run the hit plug-in; the parameter values fed back by the large language model are then used to call the hit plug-in and obtain the calling result. External resources are thereby acquired on the basis of the large language model. By combining the understanding capability of the large language model with external resources, the timeliness limits of the large language model and the resource limits of the user can be overcome, the application scenarios of language understanding and generation in a large language model system are increased, and the prediction accuracy of language understanding and generation tasks is improved. Moreover, all kinds of plug-ins can be extended in real time, which increases the diversity and flexibility of the extended functions and the universality of the plug-ins, while the large language model needs no scenario-specific training, improving the universality of the large language model.
Further, the understanding content determining module includes: the input parameter detection unit is used for acquiring a prompt template corresponding to the hit plug-in; the prompt template corresponding to the hit plug-in comprises input parameters corresponding to the hit plug-in; and the understanding content detection unit is used for combining the natural language content with the prompt template corresponding to the hit plug-in to obtain language understanding content.
Further, the large model plug-in calling device further includes: the plug-in description information acquisition module is used for acquiring description information of the alternative plug-ins; the input parameter extraction module is used for extracting input parameters of the alternative plug-in from the description information of the alternative plug-in; and the prompt template generation module is used for combining the input parameters of the alternative plug-in unit with the plug-in universal template to obtain the prompt template corresponding to the alternative plug-in unit.
Further, the input parameter detection module includes: the parameter collection unit is used for acquiring dialogue content fed back by the large language model and feeding back to a user under the condition that the current input parameter collection of the hit plug-in is missing, so as to prompt the user to provide the parameter value of the input parameter of the hit plug-in; a new natural language acquisition unit, configured to acquire new natural language content provided by the user; a new understanding content acquisition unit for determining new language understanding content based on the new natural language content and transmitting the new language understanding content to the large language model; and the parameter determining unit is used for acquiring parameter values of the input parameters of the hit plugin fed back by the large language model under the condition that the current input parameters of the hit plugin are determined to be collected.
Further, the parameter determining unit includes: the parameter verification subunit is used for verifying the input parameters and the parameter values fed back by the large language model according to the description information of the hit plug-in; the parameter re-collection subunit is used for responding to the event of verification failure, sending the language understanding content to the large language model to obtain new input parameters and new parameter values, and verifying the input parameters and the parameter values fed back by the large language model; and the parameter acquisition subunit is used for responding to the event of successful verification and obtaining the parameter value of the input parameter of the hit plug-in.
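The verification performed by the parameter verification subunit can be sketched as follows, checking each returned value against the parameter types declared in the plug-in's description information; the type table and checks are crude assumptions:

    TYPE_CHECKS = {
        "character": lambda v: isinstance(v, str),
        "integer":   lambda v: isinstance(v, int),
        "time":      lambda v: isinstance(v, str) and ":" in v,  # crude placeholder
    }

    def verify(parameters, description):
        declared = {p["parameter description"]: p["parameter type"]
                    for p in description["parameters"]}
        for p in parameters:
            expected = declared.get(p.get("parameter description"))
            check = TYPE_CHECKS.get(expected)
            if check is None or not check(p.get("parameter value")):
                return False   # verification failed: re-send the understanding content
        return True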
Further, the new understanding content acquisition unit includes: and the new understanding content adding subunit is used for adding the new natural language content into the language understanding content to obtain the new language understanding content.
Further, the plug-in matching module includes: a plug-in registration information acquisition unit for acquiring description information of a pre-registered alternative plug-in; and the plug-in hit unit is used for determining hit plug-ins hit by the natural language content according to the description information of each alternative plug-in and the natural language content.
Further, the plug-in hit unit includes: the intention recognition subunit is used for inputting the natural language content into a pre-trained intention recognition model to obtain the identification information of the hit plugin output by the intention recognition model; the intention recognition model is used for determining identification information corresponding to the natural language content through the natural language content, the description information of each candidate plug-in which is registered in advance and the identification information of each candidate plug-in which is registered.
Further, the large model plug-in calling device further includes: the reply content acquisition module is used for sending the calling result to the large language model to obtain calling reply content; and the plug-in call feedback module is used for feeding back the call result and the call reply content.
Further, the reply content acquisition module includes: the reply template acquisition unit is used for acquiring a reply template corresponding to the hit plug-in; the reply understanding content determining unit is used for combining the calling result and the reply template corresponding to the hit plug-in unit to obtain reply understanding content; and the call reply content receiving unit is used for sending the reply understanding content to the large language model to obtain call reply content.
The large model plug-in calling device can execute the large model plug-in calling method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of executing the large model plug-in calling method.
In the technical scheme of the disclosure, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user accord with the regulations of related laws and regulations, and the public order colloquial is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as the large model plug-in call method. For example, in some embodiments, the large model plug-in invocation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into RAM 703 and executed by computing unit 701, one or more steps of the large model plug-in invoking method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the large model plug-in invocation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system or a server combined with a blockchain.
Artificial intelligence is the discipline of making a computer simulate certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning), covering both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Cloud computing refers to a technical system in which an elastically scalable pool of shared physical or virtual resources is accessed through a network, where the resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed on demand in a self-service manner. Cloud computing technology can provide efficient and powerful data processing capability for technical applications such as artificial intelligence and blockchain, as well as for model training.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions provided by the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (23)

1. A large model plug-in invocation method, comprising:
acquiring natural language content;
semantic understanding is carried out on the natural language content, and a hit plug-in for hit of the natural language content is determined;
determining language understanding content according to the hit plug-in and the natural language content;
the language understanding content is sent to a large language model, and the parameter value of the input parameter of the hit plug-in is obtained;
and calling the hit plug-in according to the parameter value of the input parameter of the hit plug-in to obtain a calling result.
2. The method of claim 1, wherein the determining language understanding content from the hit plugin and the natural language content comprises:
acquiring a prompt template corresponding to the hit plug-in; the prompt template corresponding to the hit plug-in comprises input parameters corresponding to the hit plug-in;
And combining the natural language content with the prompt template corresponding to the hit plug-in to obtain language understanding content.
3. The method of claim 2, further comprising:
acquiring description information of an alternative plug-in;
extracting input parameters of the alternative plug-in from the description information of the alternative plug-in;
and combining the input parameters of the alternative plug-in with the plug-in universal template to obtain a prompt template corresponding to the alternative plug-in.
4. The method of claim 1, wherein the deriving parameter values for the input parameters of the hit plugin comprises:
under the condition that the current input parameter collection of the hit plugin is determined to be missing, acquiring dialogue content fed back by the large language model and feeding back to a user so as to prompt the user to provide a parameter value of the input parameter of the hit plugin;
acquiring new natural language content provided by the user;
determining new language understanding content based on the new natural language content, and sending the new language understanding content to the large language model;
and under the condition that the current input parameter collection of the hit plugin is determined to be completed, acquiring parameter values of the input parameters of the hit plugin fed back by the large language model.
5. The method of claim 4, wherein the obtaining the parameter values of the input parameters of the hit plugin fed back by the large language model comprises:
checking input parameters and parameter values fed back by the large language model according to the description information of the hit plug-in;
responding to an event of verification failure, sending the language understanding content to the large language model to obtain new input parameters and new parameter values, and verifying the input parameters and the parameter values fed back by the large language model;
and responding to the event of successful verification, and obtaining the parameter value of the input parameter of the hit plugin.
6. The method of claim 4, wherein the determining new language understanding content based on the new natural language content comprises:
and adding the new natural language content into the language understanding content to obtain the new language understanding content.
7. The method of claim 1, wherein the semantic understanding of the natural language content, determining a hit plugin for the natural language content hit, comprises:
acquiring the description information of a pre-registered alternative plug-in;
And determining hit plugins hit by the natural language content according to the description information of each candidate plugin and the natural language content.
8. The method of claim 7, wherein the determining a hit plug-in for the natural language content hit based on the description information of each of the candidate plug-ins and the natural language content comprises:
inputting the natural language content into a pre-trained intention recognition model to obtain identification information of a hit plug-in unit output by the intention recognition model;
the intention recognition model is used for determining identification information corresponding to the natural language content through the natural language content, the description information of each candidate plug-in which is registered in advance and the identification information of each candidate plug-in which is registered.
9. The method of claim 1, further comprising:
the calling result is sent to the large language model to obtain calling reply content;
and feeding back the calling result and the calling reply content.
10. The method of claim 9, wherein the sending the call result into the large language model results in call reply content, comprising:
Obtaining a reply template corresponding to the hit plug-in;
combining the calling result with a reply template corresponding to the hit plug-in to obtain reply understanding content;
and sending the reply understanding content to the large language model to obtain call reply content.
11. A large model plug-in invocation apparatus, comprising:
the natural language content acquisition module is used for acquiring natural language content;
the plug-in matching module is used for carrying out semantic understanding on the natural language content and determining a hit plug-in hit by the natural language content;
the understanding content determining module is used for determining language understanding content according to the hit plug-in and the natural language content;
the input parameter detection module is used for sending the language understanding content to a large language model to obtain a parameter value of the input parameter of the hit plug-in;
and the plug-in calling module is used for calling the hit plug-in according to the parameter value of the input parameter of the hit plug-in to obtain a calling result.
12. The apparatus of claim 11, wherein the understanding content determination module comprises:
the input parameter detection unit is used for acquiring a prompt template corresponding to the hit plug-in; the prompt template corresponding to the hit plug-in comprises input parameters corresponding to the hit plug-in;
And the understanding content detection unit is used for combining the natural language content with the prompt template corresponding to the hit plug-in to obtain language understanding content.
13. The apparatus of claim 12, further comprising:
the plug-in description information acquisition module is used for acquiring description information of the alternative plug-ins;
the input parameter extraction module is used for extracting input parameters of the alternative plug-in from the description information of the alternative plug-in;
and the prompt template generation module is used for combining the input parameters of the alternative plug-in unit with the plug-in universal template to obtain the prompt template corresponding to the alternative plug-in unit.
14. The apparatus of claim 11, wherein the input parameter detection module comprises:
the parameter collection unit is used for acquiring dialogue content fed back by the large language model and feeding back to a user under the condition that the current input parameter collection of the hit plug-in is missing, so as to prompt the user to provide the parameter value of the input parameter of the hit plug-in;
a new natural language acquisition unit, configured to acquire new natural language content provided by the user;
a new understanding content acquisition unit for determining new language understanding content based on the new natural language content and transmitting the new language understanding content to the large language model;
And the parameter determining unit is used for acquiring parameter values of the input parameters of the hit plugin fed back by the large language model under the condition that the current input parameters of the hit plugin are determined to be collected.
15. The apparatus of claim 14, wherein the parameter determination unit comprises:
the parameter verification subunit is used for verifying the input parameters and the parameter values fed back by the large language model according to the description information of the hit plug-in;
the parameter re-collection subunit is used for responding to the event of verification failure, sending the language understanding content to the large language model to obtain new input parameters and new parameter values, and verifying the input parameters and the parameter values fed back by the large language model;
and the parameter acquisition subunit is used for responding to the event of successful verification and obtaining the parameter value of the input parameter of the hit plug-in.
16. The apparatus of claim 14, wherein the new understanding content acquisition unit comprises:
and the new understanding content adding subunit is used for adding the new natural language content into the language understanding content to obtain the new language understanding content.
17. The apparatus of claim 11, wherein the plug-in matching module comprises:
a plug-in registration information acquisition unit for acquiring description information of a pre-registered alternative plug-in;
and the plug-in hit unit is used for determining hit plug-ins hit by the natural language content according to the description information of each alternative plug-in and the natural language content.
18. The apparatus of claim 17, wherein the plug-in hit unit comprises:
the intention recognition subunit is used for inputting the natural language content into a pre-trained intention recognition model to obtain the identification information of the hit plugin output by the intention recognition model; the intention recognition model is used for determining identification information corresponding to the natural language content through the natural language content, the description information of each candidate plug-in which is registered in advance and the identification information of each candidate plug-in which is registered.
19. The apparatus of claim 11, further comprising:
the reply content acquisition module is used for sending the calling result to the large language model to obtain calling reply content;
and the plug-in call feedback module is used for feeding back the call result and the call reply content.
20. The apparatus of claim 19, wherein the reply content acquisition module comprises:
the reply template acquisition unit is used for acquiring a reply template corresponding to the hit plug-in;
the reply understanding content determining unit is used for combining the calling result and the reply template corresponding to the hit plug-in unit to obtain reply understanding content;
and the call reply content receiving unit is used for sending the reply understanding content to the large language model to obtain call reply content.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the large model plug-in invocation method of any of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the large model plug-in invocation method according to any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the large model plug-in invocation method according to any of claims 1-10.
CN202311109649.2A 2023-08-30 2023-08-30 Large model plug-in calling method, device, equipment and medium Pending CN117112065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311109649.2A CN117112065A (en) 2023-08-30 2023-08-30 Large model plug-in calling method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117112065A true CN117112065A (en) 2023-11-24



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination