CN116738476A - Safe interaction method and device based on large language model

Info

Publication number
CN116738476A
Authority
CN
China
Prior art keywords
language model
information
text
sensitive data
large language
Prior art date
Legal status
Pending
Application number
CN202310540066.9A
Other languages
Chinese (zh)
Inventor
黄建庭
宋荣鑫
倪思勇
王海峰
Current Assignee
Shanghai Qiyue Information Technology Co Ltd
Original Assignee
Shanghai Qiyue Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Qiyue Information Technology Co Ltd
Priority to CN202310540066.9A
Publication of CN116738476A
Legal status: Pending

Landscapes

  • Machine Translation (AREA)

Abstract

The invention discloses a safe interaction method and device based on a large language model. The method comprises the following steps: receiving and identifying requester information; if the requester information is identified to contain sensitive data, performing a desensitization operation on the sensitive data and then inputting it into a public large language model, and/or directly inputting the identified sensitive data into a pre-trained private large language model; if the requester information is identified to contain no sensitive data, inputting the requester information directly into a public large language model; identifying and outputting a target business category through the public large language model and/or the private large language model; and executing the corresponding target business process operation according to the target business category, and outputting the execution result to the requester. The invention meets the need to interact naturally with users in different application scenarios while completing business process handling, improves user convenience and satisfaction, enhances the user experience, guarantees the security and privacy of user data, and reduces development cost.

Description

Safe interaction method and device based on large language model
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to a safe interaction method and device based on a large language model.
Background
With the development of the internet, more and more enterprises and institutions interact with users through their own applications (APPs), websites and the like, so as to provide related products, services or business handling.
The existing interaction mode for service resource configuration mainly adopts card-based interaction, which suffers from lengthy and complicated interaction processes, hidden function entrances, long interaction links and other problems that degrade the user experience. In particular, elderly users or users who find it difficult to operate the internet cannot effectively handle services or transact business, which causes great inconvenience. To solve the above problems, some enterprises and institutions have tried to introduce artificial intelligence technology into the resource configuration interaction process, but the overall development cost is too high, and business data and business secrets are very easily leaked, so the security is low, which seriously affects both the secure interaction experience and the development of the related business.
Disclosure of Invention
Accordingly, the present invention is directed to a method and apparatus for secure interaction based on a large language model, so as to at least partially solve at least one of the above-mentioned problems.
In order to solve the technical problem, a first aspect of the present invention provides a secure interaction method based on a large language model, the method comprising:
Receiving and identifying requester information;
if the sensitive data are identified to be contained in the requester information, the sensitive data are input into a public large language model after desensitization operation is carried out on the sensitive data; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
identifying and outputting a target business category through a public large language model and/or a private large language model;
and executing corresponding target business process operation according to the target business category, and outputting an execution result to a requester.
According to a preferred embodiment of the present invention, identifying the requester information includes:
configuring a sensitive data identification mechanism;
identifying whether the requester information relates to sensitive data according to a sensitive data identification mechanism;
if so, sensitive data is contained in the requester information.
According to a preferred embodiment of the present invention, the performing a desensitization operation on the sensitive data includes:
configuring a mapping relation between sensitive data and a data template;
mapping the sensitive data into a data template according to the mapping relation;
Before outputting the business category, the method further comprises:
and mapping the data templates in the business categories into sensitive data according to the mapping relation.
According to a preferred embodiment of the present invention, identifying and outputting the target business category by the public large language model includes:
forming a plurality of text blocks similar to the requester information into a candidate text set;
adding the candidate text set into the context of the public large language model, adding text information corresponding to the information of the requesting party into the prompt word of the public large language model, and identifying and outputting the intention of the requesting party through the large language model;
and determining and outputting the target business category according to the intention of the requester.
According to a preferred embodiment of the present invention, the grouping of a plurality of text blocks similar to the requester information into a candidate text set includes:
inputting text information corresponding to the information of the requesting party into an embedded model to obtain a target vector;
and querying a plurality of candidate vectors similar to the target vector, and forming a candidate text set by text blocks corresponding to all the candidate vectors.
According to a preferred embodiment of the present invention, before inputting the text information corresponding to the requester information into the embedding model, the method further includes:
Cutting an original text in a text corpus into a plurality of text blocks, wherein each text block comprises text processing units, and the number of the text processing units is smaller than or equal to the number of the text processing units limited by the embedded model;
and inputting the text blocks into the embedded model one by one to obtain a vector corresponding to each text block, and storing each text block and the corresponding vector.
According to a preferred embodiment of the present invention, the executing the corresponding target business process according to the target business category, and outputting the execution result to the requester includes:
creating business process operations corresponding to each business category, and generating a business process operation set;
searching a target business process operation corresponding to the target business class from the business process operation set;
and sequentially executing each sub-operation according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation, and outputting a final execution result to the requester.
According to a preferred embodiment of the present invention, the creating the business process operation corresponding to each business category, and generating the business process operation set includes:
pre-packaging interfaces of the public large language model and/or the private large language model, and an expansion tool for executing each sub-operation;
Configuring sub-operations included in business process operations corresponding to each business category and the execution sequence of each sub-operation;
in each business process operation, the interfaces of the public large language model and/or the private large language model and the input and output of the expansion tool for executing each sub-operation are configured according to the execution sequence of each sub-operation.
To solve the above technical problem, a second aspect of the present invention provides a secure interaction device based on a large language model, the device comprising:
the first identification module is used for receiving and identifying the information of the requesting party;
the first input module is used for inputting the sensitive data into a public large language model after the sensitive data are subjected to desensitization operation if the sensitive data are identified to be contained in the requester information; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
the second identifying module is used for identifying and outputting the target business category through the public large language model and/or the private large language model;
and the execution output module is used for executing the corresponding target business flow operation according to the target business category and outputting an execution result to a requester.
According to a preferred embodiment of the present invention, the first identification module includes:
the first configuration module is used for configuring a sensitive data identification mechanism;
the sub-recognition module is used for recognizing whether the information of the requesting party relates to the sensitive data according to the sensitive data recognition mechanism;
and the sub-determination module is used for determining that the requester information contains sensitive data if the requester information relates to sensitive data.
According to a preferred embodiment of the present invention, the first input module includes:
the second configuration module is used for configuring the mapping relation between the sensitive data and the data template;
the first mapping module is used for mapping the sensitive data into a data template according to the mapping relation;
the apparatus further comprises:
and the second mapping module is used for mapping the data templates in the business categories into sensitive data according to the mapping relation.
According to a preferred embodiment of the invention, the second identification module comprises:
the similar text recognition module is used for forming a plurality of text blocks similar to the requester information into a candidate text set;
the intention recognition module is used for adding the candidate text set into the context of the public large language model, adding text information corresponding to the information of the requester into the prompt word of the public large language model, and recognizing and outputting the intention of the requester through the large language model;
And the service identification module is used for determining and outputting the target service category according to the intention of the requester.
According to a preferred embodiment of the present invention, the similar text recognition module includes:
the sub-input module is used for inputting text information corresponding to the information of the requesting party into the embedded model to obtain a target vector;
and the query module is used for querying a plurality of candidate vectors similar to the target vector and forming a candidate text set by text blocks corresponding to all the candidate vectors.
According to a preferred embodiment of the invention, the device further comprises:
the cutting module is used for cutting an original text in the text corpus into a plurality of text blocks, and each text block comprises text processing units of which the number is smaller than or equal to the number of the text processing units limited by the embedded model;
and the storage module is used for inputting the text blocks into the embedded model one by one to obtain a vector corresponding to each text block, and storing each text block and the corresponding vector.
According to a preferred embodiment of the present invention, the execution output module includes:
the creation module is used for creating business flow operation corresponding to each business category and generating a business flow operation set;
the searching module is used for searching the target business process operation corresponding to the target business class from the business process operation set;
And the execution module is used for sequentially executing each sub-operation according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation and outputting a final execution result to the requester.
According to a preferred embodiment of the present invention, the creation module includes:
the packaging module is used for pre-packaging the interfaces of the public large language model and/or the private large language model and an expansion tool for executing each sub-operation;
the third configuration module is used for configuring sub-operations included in the business process operation corresponding to each business category and the execution sequence of each sub-operation;
and a fourth configuration module, configured to configure, in each business process operation, an interface of the public large language model and/or the private large language model and an input and an output of an extension tool for executing each sub-operation according to an execution sequence of each sub-operation.
In summary, when the requester information is identified to contain sensitive data, the method performs a desensitization operation on the sensitive data before inputting it into a public large language model, and/or directly inputs the identified sensitive data into a pre-trained private large language model, which ensures the security of the sensitive data during the interaction and prevents the safety hazards caused by leakage of sensitive data such as personal privacy, business secrets and business data. The target business category is identified and output through the public large language model and/or the private large language model; the corresponding target business process operation is executed according to the target business category, and the execution result is output to the requester. In this way, the powerful natural language understanding capability of the large language model is combined with process operations, service resource configuration interaction is realized, development cost is saved, and business processes are handled automatically while the requester enjoys a generative dialogue interaction experience. The method and the device meet the need to interact naturally with the requester in different application scenarios while completing business process handling, improve the requester's convenience and satisfaction, enhance the requester's experience, guarantee the security and privacy of the requester's data, and reduce development cost.
Drawings
In order to make the technical problems solved by the present invention, the technical means adopted and the technical effects achieved more clear, specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted, however, that the drawings described below merely illustrate exemplary embodiments of the present invention, and that other drawings may be derived from these drawings by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for secure interaction based on a large language model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a sensitive data monitoring model according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for secure interaction based on a large language model according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of privacy detection and privacy filtering of requester information in a second embodiment of the present invention;
FIG. 5 is a flow chart of inputting requester information into a public large language model for processing and outputting a processing result in the second embodiment of the present invention;
FIG. 6 is a schematic diagram of a framework of an RPA docking LLM model and an extension tool in a second embodiment of the present invention;
FIG. 7 is a flow chart of inputting the information of the requester into the public large language model for processing and outputting the processing result in the third embodiment of the present invention;
FIG. 8 is a flow chart of a method for secure interaction based on a large language model according to a fifth embodiment of the present invention;
fig. 9 is a schematic structural diagram of a secure interaction device based on a large language model according to a sixth embodiment of the present invention.
Detailed Description
The structures, capabilities, effects, or other features described in a particular embodiment may be incorporated in one or more other embodiments in any suitable manner without departing from the spirit of the present invention.
In describing particular embodiments, specific details of construction, performance, effects, or other features are set forth in order to provide a thorough understanding of the embodiments by those skilled in the art. It is not excluded, however, that one skilled in the art may implement the present invention in a particular situation in a solution that does not include the structures, properties, effects, or other characteristics described above. The flow diagrams in the figures are merely exemplary flow illustrations and do not represent that all of the elements, operations, and steps in the flow diagrams must be included in the aspects of the present invention, nor that the steps must be performed in the order shown in the figures.
Example 1
Referring to fig. 1, fig. 1 is a diagram illustrating a secure interaction method based on a large language model according to an embodiment of the present invention, where, as shown in fig. 1, the method includes:
S1, receiving and identifying information of a requester;
In this embodiment, the requester information may be a question from the requester about expertise in a certain field, a consultation about or handling of certain products, services or business, a chat from the requester, and so on. The requester can input the requester information on the interactive interface in different modes such as text, speech, image and video, so as to accommodate the different input needs of various requesters and improve the user experience. The requester information may include only text information, only non-text information, or both text information and non-text information. The requester may be an elderly person, a child, business personnel, a researcher, a teacher or another type of user; the invention is not particularly limited in this respect.
The large language model of this embodiment mainly processes text information, so if the requester information includes only non-text information, or includes both text information and non-text information, the non-text information needs to be converted into text information after the requester information is received. The non-text information may be audio information, which can be converted into text information by automatic speech recognition (Automatic Speech Recognition, ASR); it may be image information containing text, from which the text information can be extracted by optical character recognition (Optical Character Recognition, OCR); it may also be video information carrying audio, from which the text information can be extracted by ASR; or, if the video carries both audio and on-screen text, the text in the audio can be extracted by ASR and the text in the video images can be extracted by OCR.
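As an illustrative sketch of this conversion step (not part of the patent itself), the following Python fragment dispatches each modality to an assumed ASR or OCR helper; asr_transcribe and ocr_extract are placeholders for whatever recognition engines are actually deployed:

```python
def asr_transcribe(audio) -> str:
    """Placeholder for the deployed speech recognition (ASR) engine; assumption only."""
    raise NotImplementedError("plug in the actual ASR service")

def ocr_extract(image) -> str:
    """Placeholder for the deployed optical character recognition (OCR) engine; assumption only."""
    raise NotImplementedError("plug in the actual OCR service")

def requester_info_to_text(info: dict) -> str:
    """Merge all text carried by the requester information into one string."""
    parts = []
    if info.get("text"):
        parts.append(info["text"])                       # text input is used as-is
    if info.get("audio"):
        parts.append(asr_transcribe(info["audio"]))      # speech to text via ASR
    if info.get("image"):
        parts.append(ocr_extract(info["image"]))         # text in images via OCR
    if info.get("video"):                                # video: ASR on the audio track, OCR on frames
        parts.append(asr_transcribe(info["video"]["audio_track"]))
        parts.append(ocr_extract(info["video"]["key_frames"]))
    return "\n".join(p for p in parts if p)
```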
The requester information may contain sensitive information, such as privacy information relating to the user's personal privacy and confidential information relating to business secrets. To prevent the disclosure of such sensitive information and to meet data security compliance requirements, after receiving the requester information, the embodiment of the invention further identifies whether the requester information contains sensitive information, and determines, according to the identification result, how the requester information is processed and which large language model it is input into, thereby ensuring the security of the sensitive data.
S2, if the sensitive data are identified to be contained in the information of the requesting party, the sensitive data are subjected to desensitization operation and then input into a public large language model; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
Wherein: a large language model (Large Language Model, LLM) is a language model consisting of a neural network with a large number of parameters. Its development has mainly gone through three stages. The first is the stage in which sequence models were used for natural language processing (Natural Language Processing, NLP). The second is the stage of large language models built on the Transformer architecture, such as the Generative Pre-trained Transformer (GPT) and BERT (Bidirectional Encoder Representation from Transformers). The third is ChatGPT, which is based on GPT-3. ChatGPT can hold a dialogue by understanding and learning human language, can interact according to the chat context, can genuinely chat and communicate like a human, and can even complete tasks such as writing e-mails, video scripts and copy, translation, writing code and writing papers.
In this embodiment, the public large language model is a large language model that has already been opened for public use; because the public can use the model, there is a hidden risk of sensitive data leakage. The private large language model is a privately deployed large language model used only by designated users; none of its data leaves the network, so sensitive data can be protected. Therefore, in this embodiment, the processing adopted to protect sensitive data differs according to the type of large language model being used. For example: if the data is input into a public large language model, a desensitization operation is first performed on the identified sensitive data; if both a private and a public large language model are used, the identified sensitive data can be input directly into the private large language model, while requester information that does not contain sensitive data is input into the public large language model; and if only a private large language model is used, the requester information can be input directly into the private large language model without identifying whether it contains sensitive data.
For example, as shown in fig. 2, whether the requester information includes the sensitive information may be identified by a sensitive data monitoring model, and the requester data may be processed according to the identification result, where the sensitive data monitoring model may include: the sensitive data identification module and the desensitization operation module, wherein:
The sensitive data identification module identifies whether the information of the requesting party contains sensitive data or not, and inputs the information of the requesting party containing the sensitive data to the desensitization operation module, and the desensitization operation module performs desensitization operation on the sensitive data and then inputs the desensitized data to the public large language model; directly inputting requester information which does not contain sensitive data into a public large language model;
or, alternatively,
the sensitive data identification module identifies whether the information of the requesting party contains sensitive data or not, and directly inputs the information of the requesting party containing the sensitive data into the private large language model; the requester information that does not contain sensitive data is directly input into the common large language model.
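The routing performed by the sensitive data monitoring model can be summarised in a minimal Python sketch; the callable names (contains_sensitive_data, desensitize, public_llm, private_llm) are illustrative assumptions rather than terms defined by the patent:

```python
def route_requester_info(text: str, use_private_model: bool,
                         contains_sensitive_data, desensitize,
                         public_llm, private_llm) -> str:
    """Dispatch the requester text to the public or the private large language model."""
    if not contains_sensitive_data(text):
        return public_llm(text)               # no sensitive data: straight to the public model
    if use_private_model:
        return private_llm(text)              # sensitive data stays inside the private deployment
    return public_llm(desensitize(text))      # otherwise desensitize before the public model
```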
S3, identifying and outputting a target business category through a public large language model and/or a private large language model;
in this embodiment, the public large language model and/or the private large language model can identify the service class from the information of the requesting party, so as to execute the corresponding service flow operation subsequently, thereby meeting the requirements of the user for handling the service flow while interacting with the user naturally in different application scenarios, improving the convenience and satisfaction of the user, improving the user experience, and guaranteeing the safety and privacy of the user data.
Optionally, the public large language model can be finely tuned by adopting labeled training data according to different service scenes in advance, so that the public large language model can output accurate service categories meeting service requirements, and user interaction experience is improved. And/or training the private large language model by using the marked training data, so that the trained private large language model can output accurate business categories meeting business requirements, and user interaction experience is improved.
Wherein: the business category may be determined according to the specific business, and may be, for example: chat, new user application, existing user application, goods/service purchase, goods/service after-sales, personal data processing, professional question answering, data query, data translation and so on; the invention is not particularly limited in this respect. In this embodiment, multiple business categories may be identified during the interaction with the user, for example: first chat, and then a new user application, with the corresponding business process operations executed according to the different target business categories, namely: first chatting with the user, and then switching to handling the new user application. In this way, business process handling is completed while natural interaction with the user is achieved in different business scenarios.
S4, executing corresponding target business flow operation according to the target business category, and outputting an execution result to a requester.
Preferably, business process operations corresponding to each business category can be created in advance, and a business process operation set is generated; in this step, the corresponding target business process operation may be found from the business process operation set according to the target business class, and then the target process operation may be executed.
Optionally, the business process operation may be composed of a plurality of sub-operations according to a predetermined sequence, where each sub-operation may be performed by a corresponding expansion tool, and then the embodiment may interface the LLM model and the expansion tool by using a robot process automation (Robotic Process Automation, RPA) to complete different business process operations, so that the business system may complete corresponding business logic by using a dialogue manner, and output an execution result. For example, business process operations corresponding to each business category may be created in advance in the RPA, and a business process operation set may be generated. The RPA searches the corresponding target business flow operation according to the target business category, sequentially starts the corresponding expansion tool to execute the corresponding operation, and outputs the final execution result to the user.
Wherein: the expansion tool may include: query tools, question and answer tools, interface invocation tools, etc. Other services or databases may be invoked through the interface invocation tool.
In this embodiment, the large language model is combined with RPA, so that service resource configuration interaction is realized and development cost is saved; after the target business category is efficiently identified by the large language model, RPA associates it with the corresponding target process operation and completes process handling for different business scenarios; in different business scenarios, business processes are transacted while the user is interacted with naturally, which improves the user's convenience and satisfaction and enhances the user experience.
Example two
In the second embodiment of the invention, a public large language model is taken as an example to provide a safe interaction method based on the large language model, sensitive data in the information of a requester is firstly identified, then the sensitive data in the information of the requester is subjected to desensitization operation and then is input into the public large language model, so that the sensitive data does not enter the public large language model, and the safety of the sensitive data is ensured. As shown in fig. 3, the method includes:
s301, receiving and identifying requester information;
For example, a sensitive data identification mechanism may be configured; whether the requester information relates to sensitive data is identified according to the sensitive data identification mechanism, and if so, the requester information is determined to contain sensitive data.
In one example, the sensitive data identification mechanism may include a sensitive keyword, and determine whether the sensitive keyword is included in the requester information, and if so, include the sensitive data in the requester information. Wherein: sensitive keywords may include: name, identification card, employee card, bank card number, address, cell phone number, mailbox number, date of birth, etc.
In another example, in order to identify the sensitive data more accurately, the sensitive data identification mechanism may include a sensitive keyword together with a text feature that follows the sensitive keyword. It is first determined whether the requester information contains the sensitive keyword, and then whether the text feature following the sensitive keyword matches the configured text feature; if so, the requester information contains sensitive data. For example, one configured sensitive data identification mechanism includes the privacy keyword "identity card", with the text feature following the keyword being 18 digits; another configured mechanism includes the privacy keyword "mailbox", with the text feature following the keyword being a character string containing "@".
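A minimal Python sketch of such a keyword-plus-text-feature mechanism, expressing the text features as regular expressions (the two rules mirror the identity-card and mailbox examples above and are assumptions, not a prescribed rule set):

```python
import re

# Each rule pairs a sensitive keyword with the text feature expected after it,
# expressed as a regular expression; both rules are illustrative assumptions.
SENSITIVE_RULES = {
    "identity card": re.compile(r"\d{18}"),    # 18 digits follow the keyword
    "mailbox":       re.compile(r"\S+@\S+"),   # a character string containing '@'
}

def contains_sensitive_data(text: str) -> bool:
    """Return True if any configured keyword is followed by its configured text feature."""
    for keyword, feature in SENSITIVE_RULES.items():
        position = text.find(keyword)
        if position >= 0 and feature.search(text, position + len(keyword)):
            return True
    return False
```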
S302, if the sensitive data are identified to be contained in the information of the requesting party, the sensitive data are input into a public large language model after desensitization operation is carried out on the sensitive data;
Optionally, performing the desensitization operation on the requester information may include: first configuring a mapping relation between the sensitive data and a data template; mapping the sensitive data into the data template according to the mapping relation, and inputting the data template into the public large language model; and, after the public large language model has processed the requester information and before the business category is output, mapping the data template in the business category back into the sensitive data according to the mapping relation, and then outputting the business category.
For example, the mapping relation between the sensitive data and the data template may be configured according to text features of the sensitive data, where the text features may include text type, text length, special words in the text and so on. For instance, the text feature of an identity card number is 18 digits, so a mapping relation between the identity card number and the data template 111111111111111111 can be configured; as shown in fig. 4, the identity card number in the requester information is mapped to the data template 111111111111111111 according to this mapping relation, and after the public large language model has processed the requester information and before the business category is output, the data template 111111111111111111 in the business category is mapped back to the identity card number in the requester information, and the business category is output. As another example, given that the text feature of a name is 2-4 characters, a mapping relation between the name and the data template xxx can be configured; the name in the requester information is mapped to the data template xxx through this mapping relation, and after the public large language model has processed the requester information and before the business category is output, the data template xxx in the business category is mapped back to the name in the requester information, and the business category is output.
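A minimal Python sketch of this template mapping and its reverse mapping, assuming one template value per rule and using the 18-digit identity-card example above (the function names are illustrative):

```python
import re

# text feature of the sensitive data  ->  data template (identity-card example from above)
TEMPLATE_RULES = [
    (re.compile(r"\d{18}"), "111111111111111111"),   # identity-card number -> 18 ones
]

def desensitize(text: str):
    """Replace sensitive data with its data template; remember the reverse mapping."""
    reverse = {}
    for pattern, template in TEMPLATE_RULES:
        for match in pattern.findall(text):
            reverse[template] = match                 # one value per template assumed in this sketch
            text = text.replace(match, template)
    return text, reverse

def restore(text: str, reverse: dict) -> str:
    """Map the data templates in the model output back to the sensitive data before output."""
    for template, original in reverse.items():
        text = text.replace(template, original)
    return text
```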
S303, if the fact that the requester information does not contain sensitive data is identified, the requester information is directly input into a public large language model.
S304, identifying and outputting the target business category through the public large language model.
In this embodiment, the sensitive data is mapped to the data template through the desensitization operation, so that the sensitive data does not enter the public large language model, and therefore, whether the sensitive data is contained in the information of the requesting party or not is the same as the mode that the public large language model recognizes and outputs the service class. In one example, as depicted in FIG. 5, identifying and outputting the target business category by the common large language model includes:
s401, forming a plurality of text blocks similar to the requester information into a candidate text set;
In this embodiment, the large language model has a limit on how much text it can process; a text block (chunk) is a piece of text that the large language model can process, i.e. one in which the number of minimum text processing units (tokens) is less than or equal to the number of tokens allowed by the large language model.
For example, similar text blocks may be determined by mapping the texts to a vector space and comparing the similarity of the text vectors. This step may include:
s41, inputting text information corresponding to the information of the requesting party into an embedded model to obtain a target vector;
Wherein: the embedded model is also called an Embedding model, and is a language processing model, and can map texts to multidimensional vector space, generate vectors corresponding to the texts, calculate the similarity of the texts through the vectors, and further determine the relevance of the texts. The Embeddings model has a call interface and a corresponding limit on the number of minimum text processing units (also called tokens).
In this embodiment, since the requester information may include text information and non-text information, the non-text information is converted into text information in advance, and in this step the text information corresponding to the requester information is input into the embedding model to obtain the target vector corresponding to the requester information.
S42, inquiring a plurality of candidate vectors similar to the target vector, and forming a candidate text set by text blocks corresponding to all the candidate vectors;
by way of example, a query may be made in a local vector database, wherein: the local vector database stores vectors and text blocks corresponding to the vectors, and is provided with a query interface for querying similar vectors. The method further comprises, prior to this step:
s211, cutting an original text in a text corpus into a plurality of chunks, wherein each chunk comprises the number of tokens which is less than or equal to the number of tokens limited by an Embeddings model;
S212, inputting the chunk into the Embeddings model one by one to obtain a vector corresponding to each chunk, and storing each chunk and the corresponding vector into a local vector database.
Optionally, the vector database may be a Milvus database and/or a Pinecone database. In this step, the similarity between the target vector and the other vectors is queried in the Milvus and/or Pinecone database, the top-N vectors by similarity are extracted as candidate vectors, and the chunks corresponding to all the candidate vectors form the candidate text set.
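The corpus preparation of steps S211-S212 and the similarity query of step S42 can be sketched as follows; an in-memory list stands in for the Milvus/Pinecone vector database, embed stands in for the Embeddings model interface, and whitespace tokenisation and cosine similarity are simplifying assumptions:

```python
from math import sqrt

TOKEN_LIMIT = 512  # assumed token limit of the Embeddings model

def split_into_chunks(text: str, limit: int = TOKEN_LIMIT) -> list[str]:
    """Cut an original text into chunks of at most `limit` tokens (whitespace tokens assumed)."""
    tokens = text.split()
    return [" ".join(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def build_store(corpus: list[str], embed) -> list[tuple[str, list[float]]]:
    """Embed every chunk and keep (chunk, vector) pairs; stands in for the local vector database."""
    return [(chunk, embed(chunk)) for text in corpus for chunk in split_into_chunks(text)]

def top_n_chunks(query_text: str, store, embed, n: int = 5) -> list[str]:
    """Return the N chunks whose vectors are most similar to the target vector."""
    target = embed(query_text)
    ranked = sorted(store, key=lambda item: cosine(item[1], target), reverse=True)
    return [chunk for chunk, _ in ranked[:n]]
```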
S402, adding the candidate text set into the context of the public large language model, adding text information corresponding to the requester information into the prompt word of the public large language model, and outputting the requester intention through the public large language model.
Wherein: the public large language model is a large language model that has been opened for public use. In this embodiment a GPT model is adopted; in this step, the candidate text set is used as the context of the GPT model interface, the text corresponding to the requester information is used as the prompt word (Prompt) of the GPT model, and the requester intent is output through the GPT model.
Wherein: the Prompt is used to tell the model what action to take or what output to generate when performing a specific task, letting the model know what it needs to do.
In this way, sensitive data identification is performed on the requester information, and the public generative large language model is used to accurately identify the requester's intent.
S403, determining and outputting a target business category according to the intention of the requester;
In this embodiment, a correspondence between requester intents and business categories may be created in advance, where one business category may correspond to one or more requester intents. The target business category can then be found in this correspondence according to the output requester intent.
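The correspondence can be as simple as a many-to-one lookup table, for example (the intent and category names are purely illustrative):

```python
# one business category may correspond to several requester intents (illustrative values)
INTENT_TO_CATEGORY = {
    "open a new account":       "new user application",
    "register for the service": "new user application",
    "buy this product":         "goods/service purchase",
    "return the goods":         "goods/service after sales",
    "translate this document":  "data translation",
}

def target_business_category(requester_intent: str) -> str | None:
    """Look up the target business category for an output requester intent."""
    return INTENT_TO_CATEGORY.get(requester_intent)
```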
S305, searching a target business flow operation corresponding to the target business category from a business flow operation set;
Preferably, before this step, the business process operations corresponding to each business category may be created in advance to generate the business process operation set. For example, the interfaces of the public large language model and/or the private large language model, together with the extension tools that perform the individual sub-operations, may be packaged in advance, thereby establishing the system's business process operation framework. The extension tools may include: a query tool, whose sub-operation is querying; a request tool, whose sub-operation is sending a request; an interface calling tool, whose sub-operation is invoking other devices; and so on. The other devices may be servers, databases and the like.
Configuring sub-operations included in the business process operation corresponding to each business category according to business needs and executing sequences of the sub-operations; finally, in each business process operation, the interfaces of the public large language model and/or the private large language model and the input and output of the expansion tool for executing each sub-operation are configured according to the execution sequence of each sub-operation. Therefore, on the basis of a business flow operation framework, business flow operations of different business scenes can be created by flexibly configuring interfaces of the public large language model and/or the private large language model and input and output of an extension tool for executing each sub-operation, and the requirements of the different business scenes are met.
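One possible way to represent a business process operation as an ordered list of sub-operations whose output feeds the next step is sketched below; the class and its wiring are an assumed implementation, not a form required by the patent:

```python
from typing import Any, Callable

class BusinessProcessOperation:
    """An ordered sequence of sub-operations; each step receives the previous step's output."""

    def __init__(self, category: str, steps: list[Callable[[Any], Any]]):
        self.category = category
        self.steps = steps                 # the list order is the execution order

    def run(self, requester_input: Any) -> Any:
        result = requester_input
        for step in self.steps:            # execute each sub-operation in turn
            result = step(result)
        return result                      # final execution result returned to the requester

def build_operation_set(operations: list) -> dict:
    """The business process operation set maps each business category to its operation."""
    return {op.category: op for op in operations}
```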
S306, each sub-operation is executed in sequence according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation, and the final execution result is output to the requesting party.
The secure interaction method of this embodiment may be implemented by constructing a framework in which RPA docks the LLM model and the extension tools, so as to complete the process operations corresponding to different business scenarios, complete the corresponding business logic in a conversational manner, and output the execution result to the requester. As shown in fig. 6, the framework may include: a large language model module, an expansion tool module and a process automation control (RPA) module. Wherein: the RPA module may create, in advance, the business process operations corresponding to each business category and generate the business process operation set, where each business process operation includes the extension tools that perform the sub-operations and the execution order of the sub-operations. The large language model module packages large language model interfaces of different types, and the expansion tool module packages various expansion tools and is connected with external devices such as servers, databases and terminals.
Wherein: different business types can correspond to different application scenarios, and the corresponding business process operations are configured according to the application scenario, so that automated process operation is achieved for the different business scenarios. For example: if the business type is submitting application data, it can correspond to the new-user business handling scenario, and the business process operation configured for that scenario may include a query tool, a request tool and a writing tool, executed in the following order: first the query tool is called to query whether the database already holds the relevant user data; if not, the request tool is called to send a data submission request to the server; and finally the writing tool is called to write the application data provided by the user into the database. As another example: if the business type is modifying application data, it can correspond to the existing-user business handling scenario, and the business process operation configured for that scenario may include a query tool, a request tool, a deletion tool and a writing tool, executed in the following order: first the query tool is called to query whether the database holds the relevant user data; if so, the request tool is called to send the user's data modification request to the server, the deletion tool is called to delete the data the user wants to modify from the database, and finally the writing tool is called to write the modified data provided by the user into the database.
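Using the class sketched above, the new-user application flow could be configured roughly as follows; query, request and write stand in for the packaged expansion tools, and the existing-user modification flow would add a deletion step between the request and the write:

```python
def new_user_application_flow(query, request, write) -> BusinessProcessOperation:
    """Sub-operations for the new-user application scenario, in execution order."""
    return BusinessProcessOperation("new user application", [
        # 1. query whether the database already holds data for this user
        lambda data: dict(data, exists=bool(query(data["user_id"]))),
        # 2. if not, send the data submission request to the server
        lambda data: data if data["exists"] else dict(data, ack=request("submit", data)),
        # 3. if not, write the application data provided by the user into the database
        lambda data: data if data["exists"] else dict(data, done=write(data["user_id"],
                                                                       data["application"])),
    ])
```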
In addition, the business type can be personal data processing, corresponding to a personal assistant application scenario, with the personal data processing result output to the user; the business type can be professional question answering, corresponding to an application scenario of generating professional-field answers based on public text, with the professional answer output to the user; the business type can be chat, corresponding to a chatbot application scenario, with the chat reply output to the user; the business type can be data query, corresponding to a tabular data query application scenario, with the query result output to the user; the business type can be data translation, corresponding to a translation application scenario, with the translation result output to the user; and the business type can be data summarization, corresponding to a data summarization application scenario, with the summarization result output to the user.
Example III
The third embodiment of the invention provides a secure interaction method based on a large language model, which differs from the second embodiment in that the processing result is a classification result of the requester information, and the public large language model is used to identify and output an accurate classification result for the requester information. As shown in fig. 7, identifying and outputting the classification result of the requester information through the public large language model and/or the private large language model includes:
S701, inputting text information corresponding to the information of the requesting party into an embedded model to obtain a target vector;
the step is the same as step S41 of the second embodiment, and will not be described again.
S702, inquiring a plurality of candidate vectors similar to the target vector, and forming a candidate text set by text blocks corresponding to all the candidate vectors; inquiring classification results corresponding to each candidate vector;
For example, the candidate vectors may be queried in a local vector database, where the local vector database stores vectors and their corresponding chunks and provides a query interface for querying similar vectors. The classification results are queried in a local database, where the local database stores chunks and their corresponding standard classification results and provides a query interface for querying the standard classification result corresponding to a chunk according to the chunk's ID. Before this step, the method further comprises:
S311, configuring a standard classification result for each original text in the text corpus, cutting the original texts into a plurality of chunks, each of which contains a number of tokens less than or equal to the number of tokens allowed by the Embeddings model, and storing the standard classification result of the original text in which each chunk is located into the database as the standard classification result corresponding to that chunk;
S312, inputting the chunk into the Embeddings model one by one to obtain a vector corresponding to each chunk, and storing each chunk and the corresponding vector into a local vector database;
correspondingly, the step may include:
S51, querying the similarity between the target vector and the other vectors in the local vector database, extracting the top-N vectors by similarity as candidate vectors, and forming a candidate text set from the chunks corresponding to all the candidate vectors.
S52, searching standard classification results corresponding to all candidate texts in the candidate text set in a local database to serve as classification results corresponding to each candidate vector.
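A sketch of this two-table lookup, reusing the top_n_chunks retrieval helper from the earlier sketch; here the chunks are keyed by their text rather than by a chunk ID, and an in-memory dictionary stands in for the local database:

```python
def classify_candidates(query_text: str,
                        store: list,
                        chunk_to_label: dict,
                        embed,
                        n: int = 5) -> list[tuple[str, str]]:
    """Return (chunk, standard classification result) pairs for the N most similar chunks."""
    candidates = top_n_chunks(query_text, store, embed, n)   # retrieval sketch from above
    return [(chunk, chunk_to_label[chunk]) for chunk in candidates]
```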
S703, adding the candidate text set into the context of the public large language model, adding the classification result corresponding to each candidate vector into the prompt word of the public large language model, and outputting the information classification result of the requesting party through the public large language model.
Wherein: the information classification result of the requesting party can be different application scenes such as new user data application, old user data modification, service/commodity purchase, after-sales processing, personal data processing, professional question and answer, chat, data query, data translation, data summarization and the like, and then the corresponding target business process operation is directly searched and executed according to the information classification result of the requesting party, and the final execution result is output to the user.
It should be noted that the second embodiment of the invention describes in detail the process of inputting the requester information into the large language model for processing and outputting the business type, and the third embodiment describes in detail the process of inputting the requester information into the large language model for processing and outputting the classification result of the requester information. Based on the description of Embodiments 2 and 3, a person of ordinary skill in the art, once aware of the technical concept of the invention, can make the large language model produce other processing results by crafting the Prompt of the large language model; on this basis, business process operations for different application scenarios are configured for the different processing results, meeting the need to handle business processes in different application scenarios while interacting naturally with the user.
Example IV
To protect the user's sensitive data, the invention can also process requester information containing sensitive data directly through the private large language model, thereby ensuring the security of the sensitive data. On this basis, the fourth embodiment of the present invention provides a secure interaction method based on a large language model, which differs from the second embodiment in that step S202' is used in place of step S202 of the second embodiment.
S202', if the requester information is identified to contain sensitive data, directly inputting the identified sensitive data into a pre-trained private large language model;
the private large language model is trained through a large amount of marked data in advance, so that the data safety and the accuracy of service type identification are improved, and the user interaction experience is improved.
Example five
It is contemplated that the requester information may vary in nature, for example: a user's question about expertise in a certain field, a user's consultation about or handling of certain products, services or business, chatting with the user, and so on. Different kinds of requester information call for different processing. This embodiment can therefore also identify whether the requester information is a response request or a service request, so that the large language model can generate response information for a response request and identify the target business category for a service request, thereby meeting the diversified needs of both answering the requester and handling business. On this basis, the fifth embodiment of the present invention provides a secure interaction method based on a large language model; as shown in fig. 8, the method includes:
s801, receiving and identifying requester information;
in order to meet the diversified requirements of user response and service handling at the same time, the step further identifies whether the information of the requesting party is a response request or a service request based on the step S201 of the second embodiment;
Wherein: the answer request refers to requester information that needs to be answered, such as: a user's question of expertise in a field, a user's consultation with certain products, services or businesses, chat with a user, etc. The service request refers to the information of a requester who needs to perform a flow operation, for example: purchase of a product or service, return of goods, application of a certain qualification, handling of a certain business, etc.
For example, a trained classification model or preset classification rules may be used to identify whether the requestor information is a reply request or a service request. Wherein: the classification model can be trained by adopting marked response requests and business requests in advance, and the trained classification model can classify the input requester information. The classification model can specifically adopt a clustering model, a classifier and other models. The preset classification rule may be preconfigured, for example: keywords of response requests (such as what, how, where, etc.) and keywords of service requests (such as want, apply, transact, etc.) may be configured separately, the requester information is the response request if the keywords of response requests are matched, and the requester information is the service request if the keywords of service requests are matched.
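A sketch of the rule-based variant, using the example keywords from the paragraph above (the keyword lists are illustrative, and a trained classification model could replace this function):

```python
ANSWER_KEYWORDS = ("what", "how", "where", "why")           # keywords of a response request
SERVICE_KEYWORDS = ("want", "apply", "transact", "handle")  # keywords of a service request

def request_type(requester_text: str) -> str:
    """Classify requester information as a response request or a service request."""
    text = requester_text.lower()
    if any(keyword in text for keyword in SERVICE_KEYWORDS):
        return "service request"
    # anything else, including matched answer keywords and plain chat, is answered directly
    return "response request"
```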
S802, if the sensitive data are identified to be contained in the information of the requesting party, the sensitive data are input into a public large language model after desensitization operation is carried out on the sensitive data; and/or directly inputting the identified sensitive data into the pre-trained private large language model; and if the requester information is identified to not contain sensitive data, directly inputting the requester information into a public large language model.
S803, if the information of the requesting party is a service request, identifying and outputting a target service category through a public large language model and/or a private large language model; and executing corresponding target business process operation according to the target business category, and outputting an execution result to a requester.
Specific reference may be made to steps S2 to S4 of the first embodiment; alternatively, steps S304 to S306 of the second embodiment.
S804, if the information of the requesting party is a response request, outputting the response information to the requesting party through the public large language model and/or the private large language model.
Wherein: the public large language model can be obtained by fine-tuning a generative language model such as GPT or ChatGPT with annotated data, so that the public large language model can generate response information according to the response request and achieve automatic interaction with the user. The private large language model can likewise be trained with annotated data so that it can generate response information according to the response request. The response information may include: answers to product/service consultations, answers about business processes, answers to professional questions, chat, material extraction, summarization, translation, and the like.
In this embodiment, identifying whether the requester information contains sensitive data and identifying whether it is a response request or a service request may be performed synchronously or asynchronously, which is not limited by the present invention. In one example, the requester information is first identified as a response request or a service request; correspondingly, the sensitive data monitoring model in Fig. 2 may also identify the service request, in which case the sensitive data monitoring model further includes: a service request identification module for identifying whether the requester information is a service request or a response request and inputting the identification result into the public large language model;
the sensitive data identification module identifies whether the requester information contains sensitive data; requester information containing sensitive data is input into the desensitization operation module, which performs the desensitization operation on the sensitive data and then inputs the desensitized data into the public large language model, while requester information that does not contain sensitive data is input directly into the public large language model; or, requester information containing sensitive data is input directly into the private large language model, and requester information that does not contain sensitive data is input directly into the public large language model. A minimal sketch of this routing is given below.
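The sketch below assumes regex-based sensitive-data detection and placeholder-style data templates; the regular expressions, placeholder format, and the way the two models are invoked are assumptions for illustration, not the actual implementation.

```python
# Sketch of the sensitive data identification, desensitization, and routing
# described above. Patterns, placeholders, and model callables are assumptions.
import re

SENSITIVE_PATTERNS = {
    "PHONE": re.compile(r"\b\d{11}\b"),          # e.g. a mobile phone number
    "ID_CARD": re.compile(r"\b\d{17}[\dXx]\b"),  # e.g. a resident ID number
}

def desensitize(text):
    """Replace sensitive values with data-template placeholders and return the
    mapping so model output can later be mapped back to the original values."""
    mapping = {}
    for name, pattern in SENSITIVE_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            placeholder = f"<{name}_{i}>"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def route(requester_info, public_model, private_model, prefer_private=False):
    """Desensitized text goes to the public model, raw text may go to the
    private model; text without sensitive data goes straight to the public model."""
    desensitized, mapping = desensitize(requester_info)
    if not mapping:
        return public_model(requester_info)
    if prefer_private:
        return private_model(requester_info)
    output = public_model(desensitized)
    for placeholder, value in mapping.items():   # map data templates back
        output = output.replace(placeholder, value)
    return output
```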
If the requester information is a service request, the target business category is identified and output by the public large language model and/or the private large language model; the corresponding target business process operation is then executed according to the target business category, and the execution result is output to the requester.
If the requester information is a response request, the public large language model and/or the private large language model outputs the response information to the requester.
In summary, when the requester information is identified as containing sensitive data, the method performs a desensitization operation on the sensitive data before inputting it into the public large language model, and/or inputs the identified sensitive data directly into the pre-trained private large language model, thereby ensuring the security of sensitive data during interaction and preventing safety hazards caused by leakage of personal privacy, business secrets, business data and other sensitive data. The target business category is identified and output through the public large language model and/or the private large language model, the corresponding target business process operation is executed according to the target business category, and the execution result is output to the requester. The powerful natural language understanding capability of the large language model is thus combined with process operations to realize service resource configuration interaction, saving development cost and completing automated business process handling while providing the requester with a generative conversational experience. The method meets the need to interact naturally with the requester in different application scenarios while completing business process handling, improves the requester's convenience and satisfaction, enhances the requester's experience, ensures the security and privacy of the requester's data, and reduces development cost. Compared with the prior art, the invention has at least the following beneficial effects:
1. The large language model is combined with process operations: conversational interaction services are realized by using the strong natural language understanding capability of the large language model, and process handling in different service scenarios is completed through automated process control. This meets the need to interact naturally with the requester in different application scenarios while completing business process handling, improving the requester's convenience, satisfaction and overall experience.
2. By identifying sensitive data in the requester information and performing a desensitization operation on it, sensitive user data is prevented from entering the public large language model, protecting sensitive data such as the requester's private data and business confidential data. Alternatively, requester information containing sensitive data is processed by the private large language model, which likewise protects the security of the sensitive data.
3. The large language model is connected to the business system through process operations, realizing business resource configuration interaction and saving development cost; at the same time, automated process operations oriented to different application scenarios are established, enabling customized service scenarios.
Example six
Fig. 9 shows a secure interaction device based on a large language model according to a sixth embodiment of the present invention; as shown in Fig. 9, the device includes:
A first identifying module 91 for receiving and identifying the requester information;
the first input module 92 is configured to, if it is identified that the requester information includes sensitive data, perform a desensitization operation on the sensitive data, and then input the desensitized data into a public large language model; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
a second identifying module 93, configured to identify and output a target business category through the public large language model and/or the private large language model;
the execution output module 94 is configured to execute the corresponding target business process operation according to the target business class, and output an execution result to the requester.
In one embodiment, the first identification module 91 includes:
the first configuration module is used for configuring a sensitive data identification mechanism;
the sub-recognition module is used for recognizing whether the information of the requesting party relates to the sensitive data according to the sensitive data recognition mechanism;
and the sub-determination module is used for determining that the requester information contains sensitive data if sensitive data is involved.
The first input module 92 includes:
The second configuration module is used for configuring the mapping relation between the sensitive data and the data template;
the first mapping module is used for mapping the sensitive data into a data template according to the mapping relation;
the apparatus further comprises:
and the second mapping module is used for mapping the data templates in the business categories into sensitive data according to the mapping relation.
The second recognition module 93 includes:
the similar text recognition module is used for forming a plurality of text blocks similar to the requester information into a candidate text set;
the intention recognition module is used for adding the candidate text set into the context of the public large language model, adding text information corresponding to the information of the requester into the prompt word of the public large language model, and recognizing and outputting the intention of the requester through the large language model;
and the service identification module is used for determining and outputting the target service category according to the intention of the requester.
Optionally, the similar text recognition module includes:
the sub-input module is used for inputting the text information corresponding to the requester information into the embedding model to obtain a target vector;
and the query module is used for querying a plurality of candidate vectors similar to the target vector and forming a candidate text set by text blocks corresponding to all the candidate vectors.
Further, the device further comprises:
the cutting module is used for cutting an original text in the text corpus into a plurality of text blocks, each text block containing a number of text processing units less than or equal to the maximum number of text processing units supported by the embedding model;
and the storage module is used for inputting the text blocks into the embedding model one by one to obtain a vector corresponding to each text block, and storing each text block and the corresponding vector.
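By way of illustration only, the cutting, storage, similar-text query, and intention recognition modules could be sketched as follows; the per-block limit, the cosine-similarity search, the prompt wording, and the embed_fn callable are assumptions rather than the actual implementation.

```python
# Illustrative sketch: cut the corpus into text blocks no longer than the
# embedding model allows, store (block, vector) pairs, query the blocks most
# similar to the requester text, and assemble them into the model context.
import math

MAX_UNITS = 512  # assumed per-block limit of the embedding model

def cut(original_text):
    units = original_text.split()  # crude stand-in for real text processing units
    return [" ".join(units[i:i + MAX_UNITS]) for i in range(0, len(units), MAX_UNITS)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm + 1e-9)

def build_store(corpus_texts, embed_fn):
    """Embed every text block and keep (block, vector) pairs."""
    return [(block, embed_fn(block)) for text in corpus_texts for block in cut(text)]

def candidate_text_set(requester_text, store, embed_fn, top_k=3):
    """Return the text blocks whose vectors are most similar to the target vector."""
    target = embed_fn(requester_text)
    ranked = sorted(store, key=lambda item: cosine(target, item[1]), reverse=True)
    return [block for block, _ in ranked[:top_k]]

def intent_prompt(requester_text, candidates):
    """Place the candidate text set in the context and the requester text in the prompt."""
    context = "\n".join(candidates)
    return f"Context:\n{context}\n\nIdentify the requester's intention:\n{requester_text}"
```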
The execution output module 94 includes:
the creation module is used for creating business flow operation corresponding to each business category and generating a business flow operation set;
the searching module is used for searching the target business process operation corresponding to the target business class from the business process operation set;
and the execution module is used for sequentially executing each sub-operation according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation and outputting a final execution result to the requester.
Optionally, the creating module includes:
the packaging module is used for pre-packaging the interfaces of the public large language model and/or the private large language model and an expansion tool for executing each sub-operation;
the third configuration module is used for configuring sub-operations included in the business process operation corresponding to each business category and the execution sequence of each sub-operation;
And a fourth configuration module, configured to configure, in each business process operation, an interface of the public large language model and/or the private large language model and an input and an output of an extension tool for executing each sub-operation according to an execution sequence of each sub-operation.
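A minimal sketch of a business process operation set built in this way is given below; the "refund" flow, its sub-operations, and the registry layout are hypothetical examples, not the disclosed implementation.

```python
# Illustrative sketch: create a business process operation set and execute the
# target flow sub-operation by sub-operation in its configured order, with each
# sub-operation's output feeding the next one's input.
from typing import Callable, Dict, List

SubOperation = Callable[[dict], dict]
BUSINESS_FLOWS: Dict[str, List[SubOperation]] = {}

def register_flow(business_category: str, sub_operations: List[SubOperation]) -> None:
    BUSINESS_FLOWS[business_category] = sub_operations

def execute_flow(target_category: str, context: dict) -> dict:
    for sub_operation in BUSINESS_FLOWS[target_category]:
        context = sub_operation(context)   # output of one step is input of the next
    return context

register_flow("refund", [
    lambda ctx: {**ctx, "order": f"latest order of {ctx['user']}"},  # query order
    lambda ctx: {**ctx, "approved": True},                           # eligibility check
    lambda ctx: {**ctx, "result": "refund request submitted"},       # submit refund
])

print(execute_flow("refund", {"user": "alice"}))
```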
In one embodiment, the first identification module further comprises:
the service request identification module is used for identifying whether the information of the requesting party is a response request or a service request;
Correspondingly,
the second identifying module 93 is configured to identify and output a target business category through the public large language model and/or the private large language model when the requester information is a service request; and to output response information to the requester through the public large language model and/or the private large language model when the requester information is a response request.
It will be appreciated by those skilled in the art that the modules in the embodiments of the apparatus described above may be distributed in an apparatus as described, or may be distributed in one or more apparatuses different from the embodiments described above with corresponding changes. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
In summary, the present invention may be implemented in a method, apparatus, electronic device, or computer readable medium that executes a computer program. Some or all of the functions of the present invention may be implemented in practice using a general purpose data processing device such as a microprocessor or Digital Signal Processor (DSP).
The above-described specific embodiments further describe the objects, technical solutions and advantageous effects of the present invention in detail, and it should be understood that the present invention is not inherently related to any particular computer, virtual device or electronic apparatus, and various general-purpose devices may also implement the present invention. The foregoing description of the embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (16)

1. A method of secure interaction based on a large language model, the method comprising:
receiving and identifying requester information;
if the sensitive data are identified to be contained in the requester information, the sensitive data are input into a public large language model after desensitization operation is carried out on the sensitive data; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
identifying and outputting a target business category through a public large language model and/or a private large language model;
And executing corresponding target business process operation according to the target business category, and outputting an execution result to a requester.
2. The method of claim 1, wherein the identifying the requester information comprises:
configuring a sensitive data identification mechanism;
identifying whether the requester information relates to sensitive data according to a sensitive data identification mechanism;
if so, determining that the requester information contains sensitive data.
3. The method of claim 1, wherein said performing a desensitization operation on said sensitive data comprises:
configuring a mapping relation between sensitive data and a data template;
mapping the sensitive data into a data template according to the mapping relation;
before outputting the business category, the method further comprises:
and mapping the data templates in the business categories into sensitive data according to the mapping relation.
4. The method of claim 1, wherein identifying and outputting the target business category via the public large language model comprises:
forming a plurality of text blocks similar to the requester information into a candidate text set;
adding the candidate text set into the context of the public large language model, adding text information corresponding to the information of the requesting party into the prompt word of the public large language model, and identifying and outputting the intention of the requesting party through the large language model;
And determining and outputting the target business category according to the intention of the requester.
5. The method of claim 4, wherein the forming a plurality of text blocks similar to the requester information into a candidate text set comprises:
inputting text information corresponding to the requester information into an embedding model to obtain a target vector;
and querying a plurality of candidate vectors similar to the target vector, and forming a candidate text set by text blocks corresponding to all the candidate vectors.
6. The method of claim 5, wherein prior to entering the text information corresponding to the requester information into the embedding model, the method further comprises:
cutting an original text in a text corpus into a plurality of text blocks, wherein each text block comprises a number of text processing units less than or equal to the maximum number of text processing units supported by the embedding model;
and inputting the text blocks into the embedding model one by one to obtain a vector corresponding to each text block, and storing each text block and the corresponding vector.
7. The method of claim 1, wherein the performing the corresponding target business process operation according to the target business class and outputting the execution result to the requester comprises:
Creating business process operations corresponding to each business category, and generating a business process operation set;
searching a target business process operation corresponding to the target business class from the business process operation set;
and sequentially executing each sub-operation according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation, and outputting a final execution result to the requester.
8. The method of claim 7, wherein creating business process operations corresponding to each business category, generating a set of business process operations comprises:
pre-packaging interfaces of the public large language model and/or the private large language model, and an expansion tool for executing each sub-operation;
configuring sub-operations included in business process operations corresponding to each business category and the execution sequence of each sub-operation;
in each business process operation, the interfaces of the public large language model and/or the private large language model and the input and output of the expansion tool for executing each sub-operation are configured according to the execution sequence of each sub-operation.
9. A secure interaction device based on a large language model, the device comprising:
The first identification module is used for receiving and identifying the information of the requesting party;
the first input module is used for inputting the sensitive data into a public large language model after the sensitive data are subjected to desensitization operation if the sensitive data are identified to be contained in the requester information; and/or directly inputting the identified sensitive data into the pre-trained private large language model; if the requester information is identified to not contain sensitive data, the requester information is directly input into a public large language model;
the second identifying module is used for identifying and outputting the target business category through the public large language model and/or the private large language model;
and the execution output module is used for executing the corresponding target business flow operation according to the target business category and outputting an execution result to a requester.
10. The apparatus of claim 9, wherein the first identification module comprises:
the first configuration module is used for configuring a sensitive data identification mechanism;
the sub-recognition module is used for recognizing whether the information of the requesting party relates to the sensitive data according to the sensitive data recognition mechanism;
and the sub-determination module is used for determining that the requester information contains sensitive data if sensitive data is involved.
11. The apparatus of claim 9, wherein the first input module comprises:
the second configuration module is used for configuring the mapping relation between the sensitive data and the data template;
the first mapping module is used for mapping the sensitive data into a data template according to the mapping relation;
the apparatus further comprises:
and the second mapping module is used for mapping the data templates in the business categories into sensitive data according to the mapping relation.
12. The apparatus of claim 9, wherein the second identification module comprises:
the similar text recognition module is used for forming a plurality of text blocks similar to the requester information into a candidate text set;
the intention recognition module is used for adding the candidate text set into the context of the public large language model, adding text information corresponding to the information of the requester into the prompt word of the public large language model, and recognizing and outputting the intention of the requester through the large language model;
and the service identification module is used for determining and outputting the target service category according to the intention of the requester.
13. The apparatus of claim 12, wherein the similar text recognition module comprises:
the sub-input module is used for inputting the text information corresponding to the requester information into the embedding model to obtain a target vector;
And the query module is used for querying a plurality of candidate vectors similar to the target vector and forming a candidate text set by text blocks corresponding to all the candidate vectors.
14. The apparatus of claim 13, wherein the apparatus further comprises:
the cutting module is used for cutting an original text in the text corpus into a plurality of text blocks, each text block containing a number of text processing units less than or equal to the maximum number of text processing units supported by the embedding model;
and the storage module is used for inputting the text blocks into the embedding model one by one to obtain a vector corresponding to each text block, and storing each text block and the corresponding vector.
15. The apparatus of claim 9, wherein the execution output module comprises:
the creation module is used for creating business flow operation corresponding to each business category and generating a business flow operation set;
the searching module is used for searching the target business process operation corresponding to the target business class from the business process operation set;
and the execution module is used for sequentially executing each sub-operation according to the sub-operation included in the target business process operation and the execution sequence of each sub-operation and outputting a final execution result to the requester.
16. The apparatus of claim 15, wherein the creation module comprises:
the packaging module is used for pre-packaging the interfaces of the public large language model and/or the private large language model and an expansion tool for executing each sub-operation;
the third configuration module is used for configuring sub-operations included in the business process operation corresponding to each business category and the execution sequence of each sub-operation;
and a fourth configuration module, configured to configure, in each business process operation, an interface of the public large language model and/or the private large language model and an input and an output of an extension tool for executing each sub-operation according to an execution sequence of each sub-operation.
CN202310540066.9A 2023-05-12 2023-05-12 Safe interaction method and device based on large language model Pending CN116738476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310540066.9A CN116738476A (en) 2023-05-12 2023-05-12 Safe interaction method and device based on large language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310540066.9A CN116738476A (en) 2023-05-12 2023-05-12 Safe interaction method and device based on large language model

Publications (1)

Publication Number Publication Date
CN116738476A true CN116738476A (en) 2023-09-12

Family

ID=87914182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310540066.9A Pending CN116738476A (en) 2023-05-12 2023-05-12 Safe interaction method and device based on large language model

Country Status (1)

Country Link
CN (1) CN116738476A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311798A (en) * 2023-11-28 2023-12-29 杭州实在智能科技有限公司 RPA flow generation system and method based on large language model
CN117993018A (en) * 2024-03-29 2024-05-07 蚂蚁科技集团股份有限公司 Access method of third party large language model and gateway server


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: Room 1109, No. 4, Lane 800, Tongpu Road, Putuo District, Shanghai, 200062
Applicant after: Shanghai Qiyue Information Technology Co.,Ltd.
Country or region before: China
Address before: Room a2-8914, 58 Fumin Branch Road, Hengsha Township, Chongming District, Shanghai, 201500
Applicant before: Shanghai Qiyue Information Technology Co.,Ltd.