CN111552779A - Man-machine conversation method, device, medium and electronic equipment - Google Patents


Info

Publication number
CN111552779A
CN111552779A
Authority
CN
China
Prior art keywords
dialog, parameter, text information, platform, parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010350529.1A
Other languages
Chinese (zh)
Inventor
邹倩霞
徐国强
Current Assignee
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010350529.1A
Priority to PCT/CN2020/103847 (published as WO2021217915A1)
Publication of CN111552779A
Legal status: Pending

Classifications

    • G06F 16/3343 Query execution using phonetics
    • G06F 16/35 Clustering; Classification
    • G06F 40/279 Recognition of textual entities
    • G06F 40/30 Semantic analysis
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

The disclosure relates to the field of artificial intelligence data processing, and discloses a man-machine conversation method, device, medium, and electronic equipment. The method is performed by a framework with a configurable parameter extraction function and comprises the following steps: after the dialogue platform starts a dialogue with the user, executing a text information obtaining step, which includes obtaining the user's dialogue text information provided by the dialogue platform, the dialogue text information being generated by the dialogue platform from the dialogue data and provided to the framework; extracting parameters from the dialogue text information based on a pre-configured parameter extraction function; sending the extracted parameters of the dialogue text information to the dialogue platform so that the dialogue platform can continue the dialogue with the user according to the parameters; and repeating the text information obtaining step until the dialogue between the dialogue platform and the user stops. With this method, the use efficiency of the man-machine dialogue platform is improved, the dialogue platform is more convenient to use, and the requirements of more scenarios are met.

Description

Man-machine conversation method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence data processing technologies, and in particular, to a man-machine conversation method, device, medium, and electronic device.
Background
In the prior art, when a multi-turn man-machine conversation platform is used, the user's dialogue information needs to be processed and the corresponding key information extracted so that an accurate man-machine conversation can be carried out. However, the types of key information that current man-machine conversation platforms can extract are limited and cannot meet the need to extract more kinds of key information. If more kinds of key information need to be extracted, for example key information related to the financial field or to macroeconomics, developers must continually develop new functions to meet the requirements of different services in different fields. As a result, current man-machine conversation platforms are inefficient and inconvenient to use.
Disclosure of Invention
In order to solve the above technical problems in the field of artificial intelligence data processing, the present disclosure provides a man-machine conversation method, device, medium, and electronic device.
According to an aspect of the present disclosure, there is provided a man-machine conversation method performed by a framework having a configurable parameter extraction function, the method including:
after a dialogue platform starts a dialogue with a target user, executing a text information obtaining step, wherein the text information obtaining step includes obtaining the target user's dialogue text information provided by the dialogue platform, the dialogue text information being generated by the dialogue platform from the target user's dialogue data and provided to the framework with the configurable parameter extraction function;
extracting parameters from the dialogue text information based on a pre-configured parameter extraction function;
sending the extracted parameters of the dialogue text information to the dialogue platform so that the dialogue platform can continue the dialogue with the target user according to the parameters;
and repeating the text information obtaining step until the dialogue between the dialogue platform and the target user stops.
According to another aspect of the present disclosure, there is provided a man-machine conversation method performed by a dialogue platform, the method including:
when carrying out a dialogue with a target user, obtaining the target user's dialogue data and generating dialogue text information based on the dialogue data;
providing the dialogue text information to a framework with a configurable parameter extraction function, so that the framework extracts parameters from the dialogue text information based on a pre-configured parameter extraction function, and receiving the parameters of the dialogue text information extracted by the framework;
and continuing the dialogue with the target user according to the parameters, and repeating the steps of obtaining the target user's dialogue data and generating dialogue text information based on the dialogue data, until the dialogue with the target user stops.
According to yet another aspect of the present disclosure, there is provided a man-machine conversation device including a framework with a configurable parameter extraction function, the device including:
an obtaining module configured to execute a text information obtaining step after a dialogue platform starts a dialogue with a target user, wherein the text information obtaining step includes obtaining the target user's dialogue text information provided by the dialogue platform, the dialogue text information being generated by the dialogue platform from the target user's dialogue data and provided to the framework with the configurable parameter extraction function;
a parameter extraction module configured to extract parameters from the dialogue text information based on a pre-configured parameter extraction function;
a sending module configured to send the extracted parameters of the dialogue text information to the dialogue platform so that the dialogue platform can continue the dialogue with the target user according to the parameters;
and an execution module configured to repeat the text information obtaining step until the dialogue between the dialogue platform and the target user stops.
According to another aspect of the present disclosure, there is provided a computer readable program medium storing computer program instructions which, when executed by a computer, cause the computer to perform the method as previously described.
According to another aspect of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method as previously described.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the man-machine conversation method provided by the disclosure is executed by a framework with a configurable parameter extraction function, and comprises the following steps: after a dialogue platform and a target user start to dialogue, executing a text information obtaining step, wherein the text information obtaining step comprises obtaining dialogue text information of the target user, which is provided by the dialogue platform, and the dialogue text information is generated by the dialogue platform according to dialogue data of the target user and is provided to the frame with the configurable parameter extraction function; extracting parameters of the dialog text information based on a pre-configured parameter extraction function; sending the extracted parameters of the dialog text information to the dialog platform so that the dialog platform can continue to dialog with the target user according to the parameters; and executing the step of acquiring the text information until the conversation between the conversation platform and the target user is stopped.
Under the method, a pre-established frame with a configurable parameter extraction function is utilized to extract parameters of text information from a dialogue platform, the specific parameter extraction function of the frame can be configured and completed according to requirements before use, the frame integrates various parameter extraction functions and supports the configuration of the various parameter extraction functions, so that developers are basically not required to re-develop parameter extraction codes of corresponding types when the dialogue data is subjected to parameter extraction, therefore, the use efficiency of a man-machine dialogue platform can be improved, the dialogue platform is more convenient to use, and the requirements of more scenes can be met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a system architecture diagram illustrating a human-machine dialog method in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a human-machine dialog method in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating the details of steps before step 220, after step 230, and after step 250 according to one embodiment illustrated in the corresponding embodiment of FIG. 2;
FIG. 4 is a flow diagram illustrating a human-machine dialog method in accordance with another exemplary embodiment;
FIG. 5 is a block diagram illustrating a human-machine dialog device in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an example of an electronic device implementing the human-machine dialog method described above, according to an example embodiment;
FIG. 7 is a block diagram illustrating a computer-readable storage medium implementing the man-machine conversation method described above, according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as recited in the appended claims.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
The present disclosure first provides a man-machine conversation method. In the technical solution of the present disclosure, the main form of man-machine conversation is a dialogue platform conversing with a human: the dialogue platform receives dialogue data fed back by the human and, according to that data, sends corresponding dialogue data back, thereby carrying on the conversation. The dialogue platform may comprise a variety of modules, models, or algorithms for conversing with humans, such as speech synthesis modules, language models, and acoustic models, and it may converse with humans by text, speech, or video. The man-machine conversation method provided by the present disclosure can be performed by a framework with a configurable parameter extraction function. A data transmission channel can be established between the framework and the dialogue platform; by pre-configuring the parameter extraction functions the framework requires, parameters can be extracted from the dialogue text information sent by the dialogue platform, and the extracted parameters can be fed back to the platform so that it can better converse with the human. The method can be applied in many scenarios, such as finance: a concrete example is auditing, through the dialogue platform, whether a loan may be issued to a user, or communicating with a user to obtain historical behavior data in order to evaluate financial risk or the user's personal finances.
The implementation terminal of the present disclosure may be any device with computing, processing, and communication capabilities that can connect to external devices to receive or send data. It may be a portable mobile device, such as a smartphone, tablet computer, notebook computer, or PDA (Personal Digital Assistant); a fixed device, such as a computer device, field terminal, desktop computer, server, or workstation; or a set of multiple devices, such as the physical infrastructure of cloud computing or a server cluster.
Optionally, the implementation terminal of the present disclosure may be a server or a physical infrastructure of cloud computing.
FIG. 1 is a system architecture diagram illustrating a man-machine conversation method in accordance with an exemplary embodiment. As shown in FIG. 1, the system architecture includes a server 110, a desktop computer 120, and a user terminal 130.
The desktop computer 120 and the server 110, as well as the user terminal 130 and the desktop computer 120, are connected by communication links over which data can be sent and received. The dialogue platform is installed on the desktop computer 120, and the framework with the configurable parameter extraction function is installed on the server 110, so in this embodiment the server 110 may be an implementation terminal of the present disclosure. When the man-machine conversation method provided by the present disclosure is applied to the system architecture shown in FIG. 1, a specific process may be as follows: first, a user initiates a session, via the user terminal 130, with the dialogue platform on the desktop computer 120; the dialogue platform receives the dialogue data in voice form sent by the user terminal 130, converts it into dialogue text information, and sends the text to the server 110. Because the framework on the server 110 has pre-configured parameter extraction functions, the parameters in the dialogue text information can be extracted quickly; the server 110 sends the extracted parameters back to the dialogue platform on the desktop computer 120, and the platform can use them to determine how to continue the dialogue. In other words, the platform can genuinely understand the meaning of the dialogue data most recently fed back by the user and thus continue the man-machine conversation.
It is worth mentioning that FIG. 1 shows only one embodiment of the present disclosure. Although the implementation terminal in this embodiment is a server, in other embodiments it may be any of the terminals or devices described above; and although in this embodiment the framework with the configurable parameter extraction function and the dialogue platform are installed on different terminals, in other embodiments or specific applications they may be installed on the same terminal. The present disclosure is not limited in this respect, and its scope of protection should not be limited thereby.
FIG. 2 is a flow diagram illustrating a man-machine conversation method in accordance with an exemplary embodiment. At the physical level, the method provided by this embodiment can be executed by a server; at the logical level, it is executed by a framework with a configurable parameter extraction function. As shown in FIG. 2, the method includes the following steps:
step 220, after the dialog platform starts dialog with the target user, executing a step of acquiring text information, wherein the step of acquiring text information includes acquiring dialog text information of the target user provided by the dialog platform.
And the dialog text information is generated by the dialog platform according to the dialog data of the target user and is provided to the framework with the configurable parameter extraction function.
The dialog platform will first obtain the dialog data of the target user and then generate the dialog text information according to the dialog data.
A parameter is an element or item of information recorded in the dialogue text information; for example, a parameter may be a keyword in a piece of dialogue text information.
The dialogue data of the target user may be voice data, text data, or audio-video data.
When the dialog data of the target user is text data, the text data may be directly used as the dialog text information.
When the dialog data of the target user is voice data, the process of generating dialog text information by the dialog platform according to the dialog data may include:
and converting the dialogue data into text information as the generated dialogue text information.
This conversion process may be implemented using models or algorithms such as speech recognition.
And step 230, extracting the parameters of the dialog text information based on a pre-configured parameter extraction function.
In one embodiment, the configurable parameter extraction functionality of the framework includes one or more of:
parameter extraction functions of different parameter items respectively corresponding to a plurality of preset extraction functions;
a parameter extraction function based on a regular expression corresponding to at least one parameter item, wherein the regular expression is realized based on a preset configuration format;
the external link-based parameter extraction function is used for introducing an external parameter extraction function, and the external parameter extraction function corresponds to the parameter extraction function of at least one parameter item;
and an additional vocabulary function, used to extract other parameters semantically similar to a basic extraction result corresponding to at least one extraction function, wherein the basic extraction result is the parameter whose expression form occurs most often among the parameters extracted by the corresponding extraction function.
A parameter item is the name of a particular parameter. From different dialogue text information, different parameters can usually be extracted for the same parameter item. Each parameter item the framework can extract corresponds to one parameter type, and one or more parameter items may correspond to the same parameter type; the parameters of different parameter items extracted by different extraction functions may belong to the same parameter type or to different ones. For example, the parameter types the framework can extract may include: date type, numeric type, character string type, address type, and Boolean type, where the numeric type may in turn include the integer type and the decimal type.
For example, if the parameter item of the numerical type is the salary date, the basic extraction result corresponding to the extraction function may be "XX month XX day". Parameters written this way all use the same format and therefore share one expression form, and because parameters in this form are extracted most often, "XX month XX day" is the most common expression form. Other parameters with similar semantics may include: early in the month, the middle of the month, the end of the month, and so on.
Extraction function | Parameter item | Parameter type | Additional vocabulary
{$birthday} | date of birth | date type | -
{$day} | date of payment | integer type | early in the month, the middle of the month, the end of the month, etc.
TABLE 1 (reconstructed from the surrounding description; the original table was rendered as an image, and the Chinese and English parameter names are not recoverable)
For example, Table 1 shows the parameter extraction functions of the parameter items corresponding to two extraction functions, {$birthday} and {$day}. Their parameter items are date of birth and date of payment, and the parameter types of the extracted parameters are the date type and the integer type, respectively. The Chinese parameter name is a Chinese name set by the user, while the English parameter name is an English name used for internal system management. The functionality of both extraction functions is integrated inside the framework. Table 1 also shows additional vocabulary, which is semantically similar to the basic extraction result of the {$day} extraction function. All of the contents shown in Table 1 are configured in the framework.
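As a rough illustration of how a preset extraction function with an additional vocabulary might be configured, the following sketch pairs a {$day}-style function with semantically similar expressions as described above. All names, patterns, and vocabulary entries are illustrative assumptions, not the patent's actual implementation.

```python
import re

# Illustrative sketch: a preset extraction function ("{$day}"-style) with an
# additional vocabulary. Names, patterns, and vocabulary are assumptions.
PRESET_FUNCTIONS = {
    "{$day}": {
        "parameter_item": "date of payment",
        "parameter_type": "integer",
        # Basic extraction result: the "XX month XX day" expression form.
        "pattern": re.compile(r"(\d{1,2}) month (\d{1,2}) day"),
        # Additional vocabulary: expressions semantically similar to the
        # basic extraction result.
        "additional_vocabulary": [
            "early in the month", "the middle of the month", "the end of the month",
        ],
    },
}

def extract(function_name, text):
    """Return parameters matched by the preset pattern plus any additional-vocabulary hits."""
    cfg = PRESET_FUNCTIONS[function_name]
    hits = ["{} month {} day".format(m, d) for m, d in cfg["pattern"].findall(text)]
    hits += [w for w in cfg["additional_vocabulary"] if w in text]
    return hits
```

Configured this way, both the formatted dates and the vocabulary variants are surfaced as extracted parameters for the same parameter item.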
ID | Parameter name | Regular rule | Output variable type
1 | Extract the first 6 digits of the identity card | \d{6} | Character string type
2 | Extract mobile phone number | (13|14|15|18)\d{9} | Character string type
TABLE 2
Referring to Table 2, which shows the regular-expression-based parameter extraction function: the ID is a system number; the parameter name is the name of the regular rule; the regular rule is a regular expression as supported by Java programming, written according to standard regular-expression syntax; and the output variable type is the data type of the extracted data.
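The two regular rules of Table 2 can be exercised directly; the sketch below takes the identity-card rule as \d{6} and expresses the phone rule with a non-capturing group, which is behaviorally equivalent for whole-match extraction.

```python
import re

# The regular rules of Table 2 as plain regex strings; names mirror the table.
REGEX_RULES = {
    1: {"name": "Extract the first 6 digits of the identity card",
        "rule": r"\d{6}"},
    2: {"name": "Extract mobile phone number",
        "rule": r"(?:13|14|15|18)\d{9}"},
}

def apply_rule(rule_id, text):
    """Return the first match of the rule as a character string, or None."""
    match = re.search(REGEX_RULES[rule_id]["rule"], text)
    return match.group(0) if match else None
```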
[Table rendered as an image in the original. Columns, per the description below: ID; Parameter name; URL; Input Json field name; Other input fields; Returned result corresponding field.]
TABLE 3
Referring to Table 3, which shows a parameter extraction function based on an external link: the ID is a system number; the parameter name is the name of the linked parameter extraction function; the URL is the address used to call the external link; the input Json field name is the Json field that carries the user utterance from which parameters are to be extracted; the other input fields carry whatever else is required for the link call to succeed, such as user name and password information; and the returned result corresponding field is the field name of the result returned by the link.
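A hedged sketch of how such an external-link call might be assembled and issued, using only the configuration fields named for Table 3; the configuration keys, URL, and credential fields below are hypothetical, not from the patent.

```python
import json
import urllib.request

# Hypothetical sketch of the external-link call of Table 3. The configuration
# keys ("url", "input_json_field", ...) and credential fields are assumptions.
def build_payload(utterance, link_config):
    """Put the user's utterance into the configured input Json field and merge
    in the other required input fields (e.g. user name and password)."""
    payload = dict(link_config.get("other_input_fields", {}))
    payload[link_config["input_json_field"]] = utterance
    return payload

def call_external_extractor(utterance, link_config, timeout=5):
    """POST the payload to the configured URL and read the field named by the
    returned-result-corresponding-field setting from the JSON response."""
    body = json.dumps(build_payload(utterance, link_config)).encode("utf-8")
    req = urllib.request.Request(link_config["url"], data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)[link_config["result_field"]]
```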
In one embodiment, the framework further includes a test verification function for verifying the reliability of the regular-expression-based parameter extraction function and/or the external-link-based parameter extraction function, so as to ensure that they are available.
Specifically, the test verification function may work as follows: preset dialogue text information labelled with the correct target parameters is prepared; an external-link-based parameter extraction function and a regular-expression-based parameter extraction function corresponding to the relevant parameter items are configured in advance; the dialogue text information is then input into the framework, and its parameters are extracted with each of the two functions; finally, the extracted parameters are compared with the correct target parameters, thereby verifying whether the regular-expression-based and external-link-based parameter extraction functions are available.
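The verification procedure described above amounts to comparing an extractor's output against labelled targets. A minimal sketch, with the extractor interface (text in, list of parameters out) as an assumption:

```python
# Sketch of the test verification function: run a configured extractor over
# preset dialogue text labelled with the correct target parameters; the
# function is considered available only if every extraction matches its label.
def verify_extractor(extractor, labelled_samples):
    """labelled_samples: iterable of (dialogue_text, expected_parameters)."""
    for text, expected in labelled_samples:
        if extractor(text) != expected:
            return False  # disagrees with a labelled target parameter
    return True
```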
In one embodiment, the types of parameters that the framework may extract include a string type or a boolean type, and the configurable parameter extraction function further comprises:
and a supplementary vocabulary function corresponding to at least one of the character string type parameter item or the boolean type parameter item for extracting a parameter corresponding to the parameter item and belonging to the supplementary vocabulary.
In one embodiment, the extracting the parameters of the dialog text information based on the preconfigured parameter extracting function includes:
extracting parameters which correspond to the parameter items and belong to supplementary vocabularies from the dialogue text information based on a pre-configured supplementary vocabulary function corresponding to the character string type parameter items;
and replacing the parameters with standardized parameters corresponding to the parameters to be used as the parameters extracted from the dialogue text information.
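A minimal sketch of the two steps above, supplementary-vocabulary extraction followed by replacement with a standardized parameter, using the marital example from the later description of Table 4; the vocabulary entries and standardized values are illustrative assumptions.

```python
# Illustrative supplementary vocabulary for the string-type "marital" parameter
# item, each word mapped to its standardized parameter (values are assumptions).
SUPPLEMENTARY_VOCAB = {
    "marital": {
        "not married": "not married",
        "single": "not married",  # replaced by the standardized parameter
        "married": "married",
    },
}

def extract_standardized(parameter_item, text):
    """Return the standardized parameter for the first vocabulary word found.
    Longer entries are matched first so "not married" wins over "married"."""
    vocab = SUPPLEMENTARY_VOCAB[parameter_item]
    for word in sorted(vocab, key=len, reverse=True):
        if word in text:
            return vocab[word]
    return None
```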
In one embodiment, the types of parameters that the framework may extract include boolean types, the configurable parameter extraction function further comprising:
an intent recognition function corresponding to at least one Boolean-type parameter item for extracting parameters that are semantically similar to the parameter item.
Referring to Table 4, which differs from Table 1 in that it shows parameter extraction functions for Boolean-type and string-type parameter items. In Table 4, the name parameter item can be extracted by an extraction function; unlike the date-type and integer-type extraction in Table 1, the marital and accumulation fund parameter items can be extracted via supplementary vocabulary, the first 6 digits of the identity card via a regular expression, and the unit name via an external link. The extraction results of the marital and accumulation fund parameter items can be standardized: for example, whether the parameter extracted for the marital item is "not married" or "single", the standardized "not married" is used as the extracted parameter. The accumulation fund parameter item can also be extracted by intent recognition: even if the dialogue text information does not include a supplementary vocabulary word corresponding to the item, words semantically similar to the accumulation fund item can still be recognized.
[Table rendered as an image in the original. Per the description above, it lists Boolean-type and string-type parameter items: name (extraction function); marital and accumulation fund (supplementary vocabulary, with standardized outputs); first 6 digits of the identity card (regular expression); unit name (external link); accumulation fund (intent recognition).]
TABLE 4
And step 250, sending the extracted parameters of the dialog text information to the dialog platform so that the dialog platform can continue to have a dialog with the target user according to the parameters.
After obtaining the parameters, the dialogue platform effectively has the key information from the target user's most recent feedback, and it can select an appropriate expression to continue the dialogue with the target user based on that key information.
Step 260, executing the step of obtaining text information until the conversation between the conversation platform and the target user stops.
Unless the dialogue between the dialogue platform and the target user has stopped, the text information obtaining step (step 220) is executed again, so that step 220 and the steps after it are performed in a loop.
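The loop formed by steps 220 through 260 can be sketched as follows. The platform interface (dialogue_stopped / get_dialogue_text / send_parameters) is an assumed illustration, with FakePlatform as a small in-memory stand-in for the dialogue platform.

```python
# High-level sketch of steps 220-260: keep obtaining dialogue text, extracting
# parameters, and sending them back until the dialogue stops.
def run_framework(platform, extract_parameters):
    while not platform.dialogue_stopped():       # step 260: loop until stopped
        text = platform.get_dialogue_text()      # step 220: obtain text info
        params = extract_parameters(text)        # step 230: extract parameters
        platform.send_parameters(params)         # step 250: send to platform

class FakePlatform:
    """In-memory stand-in: the dialogue stops when the scripted turns run out."""
    def __init__(self, turns):
        self.turns = list(turns)
        self.received = []
    def dialogue_stopped(self):
        return not self.turns
    def get_dialogue_text(self):
        return self.turns.pop(0)
    def send_parameters(self, params):
        self.received.append(params)
```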
Fig. 3 is a flowchart illustrating the details of steps before step 220, after step 230, and after step 250 according to one embodiment illustrated in the corresponding embodiment of fig. 2. In the embodiment shown in fig. 3, the foregoing parameter is a second parameter, and the framework may further be configured with a parameter comparison function, and as shown in fig. 3, the method may include the following steps:
in step 210, a first parameter corresponding to a target user provided by a dialog platform is obtained.
The first parameter is extracted by the dialogue platform based on the personal information submitted by the target user.
The framework with the configurable parameter extraction function and the dialogue platform may be located on the same terminal or on different terminals. When they are on the same terminal, the dialogue platform writes the first parameter corresponding to the target user into memory or to the hard disk, and the framework then reads it from there, thereby obtaining the first parameter. When they are on different terminals, the framework may obtain the first parameter from the dialogue platform over a communication link.
Step 240, comparing the first parameter and the second parameter of the same type based on a preconfigured parameter comparison function, so as to verify the reliability of the second parameter of the target user and obtain a verification result corresponding to each second parameter.
For example, the dialog platform stores the target user's age in advance, and this age can serve as the first parameter; a second parameter, also the target user's age, is then extracted through dialog with the target user. Comparing the two ages indicates whether the extracted second parameter, the target user's age, is reliable.
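The step-240 check described above can be sketched as a small function. The three result labels mirror the three comparison outcomes described later for Table 5 (satisfied, unsatisfied, cannot compare); the exact strings are an assumption for illustration.

```python
# Hedged sketch of step 240: compare the platform-provided first
# parameter with the dialog-extracted second parameter of the same type.
# The result labels are illustrative assumptions.

def verify_second_parameter(first, second):
    """Return a verification result for one extracted second parameter."""
    if first is None or second is None:
        return "cannot_compare"   # data on one side is insufficient
    return "satisfied" if first == second else "unsatisfied"
```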
Step 250', sending the extracted second parameter of the dialog text information and the corresponding verification result to the dialog platform, so that the dialog platform can continue the dialog with the target user according to the second parameter and the verification result.
The parameter comparison function can automatically judge the difference between the user information entered into the system and the user information extracted during the conversation. No manual judgment is needed, which reduces labor cost, improves efficiency, and allows the multi-turn dialog platform to be applied to more complex financial scenarios.
In one embodiment, the framework-configurable parameter comparison functions comprise:
a general comparison function, whose comparison modes include equal to, greater than, less than, including, and not including;
and parameter comparison functions respectively corresponding to different parameter items.
As previously mentioned, the types of parameters that the framework may extract can include: date, numeric, string, and boolean, where the numeric type may be further divided into integer and decimal types.
In one embodiment, the "equal to", "greater than", and "less than" modes of the general comparison function apply to date and/or numeric parameter types, while the "equal to", "including", and "not including" modes apply to string and/or boolean parameter types.
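The type-to-mode mapping just stated can be sketched as a lookup table. The set contents come from the text above; the helper function and the string keys are illustrative assumptions.

```python
# Which general comparison modes apply to which parameter types,
# per the embodiment above. Key and mode names are assumptions.
ALLOWED_MODES = {
    "date":    {"equal", "greater", "less"},
    "numeric": {"equal", "greater", "less"},
    "string":  {"equal", "contains", "not_contains"},
    "boolean": {"equal", "contains", "not_contains"},
}

def mode_supported(param_type, mode):
    """Return True if the general comparison mode applies to this type."""
    return mode in ALLOWED_MODES.get(param_type, set())
```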
TABLE 5 (rendered as images in the original publication; its columns are described below)
For example, referring to Table 5, the "equal to" mode of the general comparison function and a parameter comparison function are shown. ID is the system code. The comparison parameter name is the English name of the parameter item that records the comparison result, and the Chinese name is that parameter item's Chinese name. The extracted parameter is the English name of the parameter item extracted by the framework in the previous embodiment, i.e. the parameter item the system extracts through the parameter framework. The input parameter is the name of a known-information parameter item already entered into the system. The preset comparison mode is a comparison function the system provides for certain special parameters; in Table 5 it is the {$compareBirthday} function. The comparison mode "equal" tests whether the two values are equal; other modes such as greater than and less than can also be selected. "Whether error is allowed": true means an error is allowed and false means it is not, i.e. the comparison is performed by a comparison function based on a set error range. As shown in the table, the comparison of the number of employees allows a 10% error, and values within that range are considered consistent. There are three comparison results: satisfied, meaning the two data items are consistent; unsatisfied, meaning they are inconsistent; and cannot compare, meaning the data on the two sides are insufficient for comparison. The normalized result standardizes the comparison results in one-to-one correspondence with the preceding values.
In one embodiment, when the types of parameters compared by the framework via the configurable parameter comparison function include the numeric type, the framework-configurable parameter comparison functions further comprise:
a comparison function based on a set error range, which is used together with the "equal to" comparison mode.
The employee-count comparison shown in Table 5, which allows a 10% error, uses such a comparison function based on a set error range.
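The error-range variant of the "equal to" mode can be sketched as a relative-tolerance check. The 10% figure comes from the employee-count example above; the function name and the choice of relative (rather than absolute) error are assumptions.

```python
# Sketch of the comparison function based on a set error range, used
# with the "equal to" mode: numeric values within the configured
# relative error are treated as consistent. Relative error is an
# assumption; the text only says a 10% error is allowed.

def equal_within_tolerance(first, second, tolerance=0.10):
    """True if `second` deviates from `first` by at most `tolerance`."""
    if first == 0:
        return second == 0
    return abs(first - second) / abs(first) <= tolerance
```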
In one embodiment, when the types of parameters compared by the framework via the configurable parameter comparison function include the string type, the framework-configurable parameter comparison functions further comprise:
a pinyin-based comparison function, which is used together with the "equal to" comparison mode.
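The pinyin fallback for string equality can be sketched as follows. A real implementation would use a full character-to-pinyin table (for example, the pypinyin package); the tiny PINYIN dict below covers only the characters of this example and is purely an assumption.

```python
# Illustrative sketch of the pinyin-based comparison for string
# parameters. The PINYIN table is a placeholder assumption; it maps the
# homophones 张 and 章 (both "zhang") to the same pinyin.
PINYIN = {"张": "zhang", "章": "zhang", "伟": "wei"}

def to_pinyin(text):
    """Convert a string to space-joined pinyin, passing through
    characters missing from the table."""
    return " ".join(PINYIN.get(ch, ch) for ch in text)

def equal_strings(a, b, allow_pinyin=True):
    """'equal to' mode for strings: fall back to pinyin comparison when
    the characters differ but pinyin comparison is allowed."""
    if a == b:
        return True
    return allow_pinyin and to_pinyin(a) == to_pinyin(b)
```

This mirrors the behavior described for Table 6: a character comparison finds the two names inconsistent, but with pinyin comparison allowed they match, so the result is "satisfied".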
In one embodiment, the comparing, based on a preconfigured parameter comparison function, the first parameter and the second parameter of the same type to check the reliability of the second parameter of the target user, so as to obtain a check result corresponding to each second parameter, includes:
comparing the first parameter and the second parameter of the same type based on a pre-configured parameter comparison function to obtain a comparison result;
and replacing each comparison result with a corresponding standardized result to serve as a verification result corresponding to each second parameter.
In this embodiment, replacing the comparison results with standardized results ensures the consistency and uniformity of the verification results and improves the accuracy of the dialog.
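The replacement step can be sketched as a one-to-one mapping applied to every comparison result before it is sent back to the platform. The standardized values below are assumptions; the text only requires that they correspond one-to-one to the raw results.

```python
# Sketch of the standardization step. Raw result labels follow the three
# outcomes described for Table 5; the standardized values are assumptions.
STANDARD_RESULTS = {
    "satisfied": "MATCH",
    "unsatisfied": "MISMATCH",
    "cannot_compare": "NO_DATA",
}

def standardize(results):
    """Map {parameter_item: raw_result} to standardized verification
    results, giving downstream dialog logic a uniform vocabulary."""
    return {item: STANDARD_RESULTS[raw] for item, raw in results.items()}
```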
TABLE 6 (rendered as images in the original publication; its columns are described below)
Referring to Table 6, the "equal to" mode of the general comparison function and a parameter comparison function are illustrated. ID is the system code. The comparison parameter name is the English name of the parameter item that records the comparison result, and the Chinese name is that parameter item's Chinese name. The extracted parameter is the English name of the parameter item extracted by the framework in the previous embodiment, i.e. the parameter the system extracts through the parameter framework. The input parameter is the name of a known-information parameter item already entered into the system. The preset comparison mode provides comparison functions for certain special parameters; for example, for consistent comparison of identity numbers, the preset function yields several comparison results, such as the {$compareID} function in Table 6. The comparison mode "equal" tests whether the two values are equal; other modes such as contain (including) and not contain (not including) can also be selected. "Whether pinyin comparison is allowed" refers to the pinyin-based comparison function: true means allowed, false means not allowed. For example, if a character-by-character comparison finds the two values inconsistent, but pinyin comparison is allowed and the pinyin matches, the comparison result is satisfied. There are three general comparison results; the example in Table 6 shows only the "equal to" (i.e. whether equal) comparison for the compareName item. If the system's preset function is used, the results can take many forms. The standardization column maps the comparison results one-to-one to standardized values.
In summary, in the man-machine conversation method provided by the embodiment of fig. 2, a pre-established framework with a configurable parameter extraction function extracts parameters from the text information provided by the dialog platform, and the framework's specific parameter extraction functions can be configured as needed before use. Because the framework integrates multiple parameter extraction functions and supports their configuration, developers rarely need to re-develop type-specific parameter extraction code when extracting parameters from dialog data. This improves the usability of the man-machine dialog platform, makes the dialog platform more convenient to use, and allows it to meet the needs of more scenarios.
Referring to fig. 4, the present disclosure also provides a human-machine conversation method, which is performed by a conversation platform, including:
step 410, when a dialog is performed with a target user, obtaining dialog data of the target user, and generating dialog text information based on the dialog data.
Step 420, providing the dialog text information to a framework with a configurable parameter extraction function, so that the framework with the configurable parameter extraction function extracts the parameters of the dialog text information based on a pre-configured parameter extraction function, and receives the parameters of the dialog text information extracted by the framework.
Step 430, continuing the dialog with the target user according to the parameters, and continuing to execute the steps of obtaining dialog data of the target user and generating dialog text information based on the dialog data, until the dialog with the target user stops.
Steps 410-430 correspond to the steps shown in the embodiment of fig. 2; for their details, refer to that embodiment. They are not repeated here.
The disclosure also provides a man-machine conversation device, and the following are device embodiments of the disclosure.
FIG. 5 is a block diagram illustrating a human-machine dialog device including a framework with configurable parameter extraction functionality according to an example embodiment. As shown in fig. 5, the apparatus 500 includes:
an obtaining module 510, configured to execute a step of obtaining text information after a dialog platform starts a dialog with a target user, where the step of obtaining text information includes obtaining dialog text information of the target user provided by the dialog platform, where the dialog text information is generated by the dialog platform according to dialog data of the target user and provided to the framework with the configurable parameter extraction function;
a parameter extraction module 520 configured to extract parameters of the dialog text information based on a preconfigured parameter extraction function;
a sending module 530, configured to send the extracted parameters of the dialog text information to the dialog platform, so that the dialog platform continues a dialog with the target user according to the parameters;
an executing module 540 configured to execute the step of obtaining text information until the dialog between the dialog platform and the target user is stopped.
According to a third aspect of the present disclosure, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general-purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, and a bus 630 that couples the various system components, including the storage unit 620 and the processing unit 610.
Wherein the storage unit stores program code that is executable by the processing unit 610 such that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention as described in the section "example methods" above in this specification.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)621 and/or a cache memory unit 622, and may further include a read only memory unit (ROM) 623.
The storage unit 620 may also include a program/utility 624 having a set (at least one) of program modules 625, such program modules 625 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 800 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-mentioned method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 7, a program product 700 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A human-machine dialog method, the method being performed by a framework having configurable parameter extraction functionality, the method comprising:
after a dialogue platform and a target user start to dialogue, executing a text information obtaining step, wherein the text information obtaining step comprises obtaining dialogue text information of the target user, which is provided by the dialogue platform, and the dialogue text information is generated by the dialogue platform according to dialogue data of the target user and is provided to the frame with the configurable parameter extraction function;
extracting parameters of the dialog text information based on a pre-configured parameter extraction function;
sending the extracted parameters of the dialog text information to the dialog platform so that the dialog platform can continue to dialog with the target user according to the parameters;
and executing the step of acquiring the text information until the conversation between the conversation platform and the target user is stopped.
2. The method of claim 1, wherein the framework-configurable parameter extraction functions comprise one or more of:
parameter extraction functions of different parameter items, respectively corresponding to a plurality of preset extraction functions;
a regular-expression-based parameter extraction function corresponding to at least one parameter item, wherein the regular expression is implemented based on a preset configuration format;
an external-link-based parameter extraction function for introducing an external parameter extraction function, wherein the external parameter extraction function corresponds to a parameter extraction function of at least one parameter item;
and an additional vocabulary function for extracting other parameters semantically similar to a basic extraction result corresponding to at least one extraction function, wherein the basic extraction result is the parameter whose identical expression form occurs most often among the parameters extracted using the corresponding extraction function.
3. The method of claim 1 or 2, wherein the framework further comprises a test validation function for validating the reliability of the regular expression based parameter extraction function and/or the external link based parameter extraction function to ensure that the regular expression based parameter extraction function and/or the external link based parameter extraction function are available.
4. The method of claim 1 or 2, wherein the types of parameters that the framework can extract include a string type or a boolean type, and wherein the configurable parameter extraction function further comprises: and a supplementary vocabulary function corresponding to at least one of the character string type parameter item or the boolean type parameter item for extracting a parameter corresponding to the parameter item and belonging to the supplementary vocabulary.
5. The method of claim 4, wherein the types of parameters that the framework may extract include a Boolean type, and wherein the configurable parameter extraction function further comprises:
an intent recognition function corresponding to at least one Boolean-type parameter item for extracting parameters that are semantically similar to the parameter item.
6. The method of claim 1 or 2, wherein the parameter is a second parameter, and the framework is further configurable with a parameter comparison function;
before executing the step of obtaining text information, the method further comprises obtaining a first parameter corresponding to a target user and provided by a dialogue platform, wherein the first parameter is extracted by the dialogue platform based on personal information submitted by the target user;
after extracting the parameters of the dialog text information based on a preconfigured parameter extraction function, the method further comprises comparing the first parameters and the second parameters of the same type based on a preconfigured parameter comparison function so as to verify the reliability of the second parameters of the target user and obtain verification results corresponding to the second parameters;
the step of sending the extracted parameters of the dialog text information to the dialog platform so that the dialog platform can continue to have a dialog with the target user according to the parameters comprises the step of sending the extracted second parameters of the dialog text information and the corresponding verification result to the dialog platform so that the dialog platform can continue to have a dialog with the target user according to the second parameters and the verification result.
7. A human-machine dialog method, the method being performed by a dialog platform, the method comprising:
when a conversation is carried out with a target user, obtaining conversation data of the target user, and generating conversation text information based on the conversation data;
providing the dialog text information to a frame with a configurable parameter extraction function, so that the frame with the configurable parameter extraction function extracts the parameters of the dialog text information based on a pre-configured parameter extraction function, and receives the parameters of the dialog text information extracted by the frame;
and continuing to carry out the dialogue with the target user according to the parameters, and continuing to execute the steps of obtaining the dialogue data of the target user and generating dialogue text information based on the dialogue data until the dialogue with the target user stops.
8. A human-machine dialog device, the device comprising a framework with configurable parameter extraction functionality, the device comprising:
the system comprises an acquisition module, a parameter extraction module and a parameter extraction module, wherein the acquisition module is configured to execute a text information acquisition step after a conversation platform starts a conversation with a target user, the text information acquisition step comprises the step of acquiring the conversation text information of the target user, which is provided by the conversation platform, and the conversation text information is generated by the conversation platform according to the conversation data of the target user and is provided to the frame with the configurable parameter extraction function;
the parameter extraction module is configured to extract parameters of the dialog text information based on a preconfigured parameter extraction function;
the sending module is configured to send the extracted parameters of the dialog text information to the dialog platform so that the dialog platform can continue to have a dialog with the target user according to the parameters;
an execution module configured to execute the step of obtaining text information until the dialog between the dialog platform and the target user is stopped.
9. A computer-readable program medium, characterized in that it stores computer program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of any of claims 1 to 7.
CN202010350529.1A 2020-04-28 2020-04-28 Man-machine conversation method, device, medium and electronic equipment Pending CN111552779A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010350529.1A CN111552779A (en) 2020-04-28 2020-04-28 Man-machine conversation method, device, medium and electronic equipment
PCT/CN2020/103847 WO2021217915A1 (en) 2020-04-28 2020-07-23 Human-machine dialog method and apparatus, and computer device and storage medium

Publications (1)

Publication Number Publication Date
CN111552779A true CN111552779A (en) 2020-08-18

Family

ID=71998254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350529.1A Pending CN111552779A (en) 2020-04-28 2020-04-28 Man-machine conversation method, device, medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN111552779A (en)
WO (1) WO2021217915A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113064987A (en) * 2021-04-30 2021-07-02 中国工商银行股份有限公司 Data processing method, apparatus, electronic device, medium, and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1357850A (en) * 2000-11-03 2002-07-10 奥森图雷公司 Secret and safe financial trade system and method
CN107463301A (en) * 2017-06-28 2017-12-12 北京百度网讯科技有限公司 Conversational system construction method, device, equipment and computer-readable recording medium based on artificial intelligence
CN109002510A (en) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 A kind of dialog process method, apparatus, equipment and medium
CN109726239A (en) * 2018-12-25 2019-05-07 厦门市美亚柏科信息股份有限公司 The method, apparatus and readable storage medium storing program for executing that a kind of pair of forensic data is analyzed
WO2019133694A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for intelligent initiation of a man-machine dialogue based on multi-modal sensory inputs
CN110347817A (en) * 2019-07-15 2019-10-18 网易(杭州)网络有限公司 Intelligent response method and device, storage medium, electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014086101A1 (en) * 2012-12-07 2014-06-12 Wan Jihua Method for translating natural language into computer language, semantic analyzer and human-computer dialogue system
CN105068661B (en) * 2015-09-07 2018-09-07 百度在线网络技术(北京)有限公司 Man-machine interaction method based on artificial intelligence and system
US10574597B2 (en) * 2017-09-18 2020-02-25 Microsoft Technology Licensing, Llc Conversational log replay with voice and debugging information
CN110555095B (en) * 2018-05-31 2024-04-16 北京京东尚科信息技术有限公司 Man-machine conversation method and device

Also Published As

Publication number Publication date
WO2021217915A1 (en) 2021-11-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination