CN117077790A - Reply content processing method and device, electronic equipment and storage medium - Google Patents

Reply content processing method and device, electronic equipment and storage medium

Info

Publication number
CN117077790A
CN117077790A (application CN202311120151.6A)
Authority
CN
China
Prior art keywords
text
node
directed acyclic graph
code sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311120151.6A
Other languages
Chinese (zh)
Inventor
Name not published at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd
Priority to CN202311120151.6A
Publication of CN117077790A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/041 Abduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3346 Query execution using probabilistic model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/353 Clustering; Classification into predefined classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Algebra (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The present disclosure relates to the technical field of information processing, and in particular to a reply content processing method and apparatus, an electronic device, and a storage medium. The reply content processing method includes: acquiring a text to be recognized; generating, according to the text to be recognized, a code sequence corresponding to the text to be recognized; and generating and storing a directed acyclic graph corresponding to the code sequence. The reply content processing method can record the code sequence based on the directed acyclic graph, which facilitates tracing of the reply content.

Description

Reply content processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of information processing, and in particular to a reply content processing method and apparatus, an electronic device, and a storage medium.
Background
A task-oriented dialogue system can generate appropriate reply content to complete a task posed by a user. A task-oriented dialogue system typically comprises several modules, such as a natural language understanding (NLU) module, a dialogue management (DM) module, and a natural language generation (NLG) module; reply content is generated by calling these modules in sequence. Task-oriented dialogue systems are widely used in human-computer interaction scenarios. Therefore, how to better construct a task-oriented dialogue system so as to better generate reply content is a technical problem that developers need to solve.
Disclosure of Invention
The present disclosure provides a technical solution for processing reply content.
According to an aspect of the present disclosure, there is provided a reply content processing method, including: acquiring a text to be recognized, where the text to be recognized is obtained by converting speech to be recognized or is received by an electronic device, and is used to represent a current reply request; generating, according to the text to be recognized, a code sequence corresponding to the text to be recognized, where the code sequence is a sequence that represents the current reply request by means of code elements, and the code elements include one or more of categories, functions, and operators; and generating and storing a directed acyclic graph corresponding to the code sequence, where the directed acyclic graph is used to represent reply content of the current reply request corresponding to the text to be recognized.
In a possible implementation, the generating a directed acyclic graph corresponding to the code sequence includes: generating, according to each category and each function in the code sequence, a node corresponding to each category and a node corresponding to each function; and connecting the corresponding nodes with directed edges according to the logical processing order between categories and functions, between functions, and between categories in the code sequence, to obtain the directed acyclic graph corresponding to the code sequence, where the edges between nodes in the directed acyclic graph correspond to the input parameters passed between the nodes.
In a possible implementation, the nodes include a root node, leaf nodes, and ordinary nodes, where the root node is the node corresponding to the first function in the logical processing order, the leaf nodes are the nodes corresponding to the last functions in the logical processing order, and the ordinary nodes are the nodes other than the root node and the leaf nodes.
In a possible implementation, the generating a directed acyclic graph corresponding to the code sequence further includes: calling the function corresponding to each node, from the root node to the leaf nodes; taking the calling result corresponding to a leaf node as the result subgraph corresponding to that leaf node, where the result subgraph is used to store the generation process of the attribute values of the category corresponding to the leaf node; and saving the result subgraph into the directed acyclic graph.
In a possible implementation, between a first node and a second node that are connected in the directed acyclic graph, the first node transparently transmits, to the second node, a target attribute value corresponding to a target category, where the target attribute value is an attribute value, among the at least one attribute value of the target category, whose value is not changed by the first node, and the first node points to the second node.
In a possible implementation, the processing method further includes: in the case of performing a context reply content query for the text to be recognized, querying the directed acyclic graph corresponding to each text before or after the text to be recognized, and taking the queried directed acyclic graph as the context reply content corresponding to the text to be recognized.
In a possible implementation, the processing method further includes: in the case that the text to be recognized is to update a target node in a target directed acyclic graph, acquiring the target directed acyclic graph and performing either of the following operations: updating the target node of the target directed acyclic graph; or copying the target directed acyclic graph, updating the target node of the copied target directed acyclic graph, and taking the updated copy as the directed acyclic graph corresponding to the text to be recognized.
In a possible implementation, the processing method further includes: deleting directed acyclic graphs according to a preset rule in the case that the total number of generated directed acyclic graphs is larger than a preset total number, or the storage space occupied by the generated directed acyclic graphs is larger than a preset space.
In a possible implementation, the text to be recognized includes at least one phrase, and each phrase includes at least one piece of entity information; the code sequence corresponding to the text to be recognized includes a function corresponding to each of the at least one phrase, and the function includes the category corresponding to each piece of entity information in the phrase, where the same entity information in different phrases corresponds to the same category, and the functions corresponding to the phrases of different texts to be recognized are the same or different.
In a possible implementation, the processing method further includes: for each function in the code sequence, determining an instance corresponding to at least one category in the function through the application programming interface corresponding to the function or through the corresponding entity information in the text to be recognized, where an instance is a category for which at least one attribute value has been set; and executing the functions having instances in the code sequence in sequence, to generate the reply content of the current reply request.
In one possible implementation, the categories are used to represent categories of entity information in text, the functions are used to represent operations for the categories, and the operators are used to represent constraints for the categories.
In a possible implementation, the generating, according to the text to be recognized, a code sequence corresponding to the text to be recognized includes: obtaining the code sequence corresponding to the text to be recognized according to the text to be recognized and a trained generation model, where the trained generation model is obtained by training with training texts and the training code sequences corresponding to the training texts, or by training with training texts and the training translation texts corresponding to the training texts, and a training translation text is used to represent a content description of the training code sequence corresponding to the training text.
According to an aspect of the present disclosure, there is provided a reply content processing apparatus, including: a text acquisition module configured to acquire a text to be recognized, where the text to be recognized is obtained by converting speech to be recognized or is received by an electronic device, and is used to represent a current reply request; a code sequence generation module configured to generate, according to the text to be recognized, a code sequence corresponding to the text to be recognized, where the code sequence is a sequence that represents the current reply request by means of code elements, and the code elements include one or more of categories, functions, and operators; and a graph generation module configured to generate and store a directed acyclic graph corresponding to the code sequence, where the directed acyclic graph is used to represent reply content of the current reply request corresponding to the text to be recognized.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a text to be recognized can be acquired, a code sequence corresponding to the text to be recognized can then be generated according to the text to be recognized, and finally a directed acyclic graph corresponding to the code sequence can be generated and stored. The reply content processing method provided by the embodiments of the present disclosure can record the code sequence based on the directed acyclic graph, which facilitates tracing of the reply content.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a method for processing reply content according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of a processing apparatus for replying to content according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an electronic device provided in accordance with an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Referring to fig. 1, fig. 1 shows a flowchart of a method for processing reply content according to an embodiment of the disclosure, and in conjunction with fig. 1, the method includes:
step S100, obtaining a text to be recognized. The text to be recognized is obtained through voice conversion to be recognized or electronic equipment, and is used for representing a current reply request. In one example, text to be recognized may be manually entered by a user. In another example, step S100 may include: and acquiring the voice to be recognized. And then converting the voice to be recognized into a text to be recognized. The embodiment of the disclosure can support application scenes of text input and voice input. The text to be recognized can be obtained through the input of a user in a text box in a display interface. The voice to be recognized can be obtained through a voice collecting device of the electronic device in the related art, the voice to be recognized can be converted into the text to be recognized through a recognition model and a recognition algorithm in the related art, and the embodiments of the disclosure are not repeated here.
Step S200: generate, according to the text to be recognized, a code sequence corresponding to the text to be recognized. The code sequence is a sequence that represents the current reply request by means of code elements, and the code elements include one or more of categories (or classes), functions, and operators. In a possible implementation, the text to be recognized includes at least one phrase, and each phrase includes at least one piece of entity information. The code sequence corresponding to the text to be recognized includes a function corresponding to each of the at least one phrase, and the function includes the category corresponding to each piece of entity information in the phrase; the same entity information in different phrases corresponds to the same category, and the functions corresponding to the phrases of different texts to be recognized are the same or different. For example, phrase 1 may be "create a meeting" and phrase 2 may be "modify the meeting"; the categories corresponding to "meeting" may be the same. In addition, different texts to be recognized may reuse the same function, so that a long tail of text information can be represented by a small number of code elements. In one example, a category is used to represent the category of a piece of entity information in the text (for the definition of entity information, reference may be made to the related art), a function is used to represent an operation on categories, and an operator is used to represent a constraint on a category; that is, the operator gives the category specific content (e.g., attribute values) so that it becomes an instance. Illustratively, the categories may be a Person category, a DateTime category, an Event category, and so on, corresponding to the types of entity information that may appear in the text. For example, the attributes of the Person category may include surname, given name, contact phone, mailbox, etc.; a specific Person instance can be obtained by assigning a value to each attribute through operators, for example surname: Li, given name: Er, contact phone: 11111, mailbox: 11.com, which corresponds to a piece of entity information in the text. Functions, in turn, represent the different operations performed on the categories, for example making a call to a Person instance (e.g., accessing the Person instance "Li Er" to obtain its contact phone 11111 and then calling the call-related API of the electronic device, where API stands for Application Programming Interface, to dial 11111), obtaining information about a Person instance, obtaining the year of a DateTime instance, and so on. Functions may be nested to form complex processing logic, as determined by the developer in practice. It should be understood that the categories, functions, and operators are preset by the developer according to different application scenarios, and their specific configuration can be determined by the developer. Taking personal calendar management as an example, in this application scenario the categories may include: an Event category, representing a schedule item, whose attributes may include start time, end time, duration, place, meeting topic, etc.; and a Person category, representing the participants of a schedule item, whose attributes may include name, mailbox, phone, etc.
A DateTime category is used to represent the start time and end time of a schedule item, and its attributes may include year, month, day, hour, minute, etc. (the start time and end time in the Event category may be represented by the DateTime category; in other words, categories may be nested to achieve multi-level data representation). A Duration category is used to represent the duration of a schedule item, and its attributes may include a duration measured in seconds (the duration in the Event category may be represented by the Duration category), and so on. A Location category is used to represent the place of a schedule item, and its attributes may include the location, a location description, etc. (the place in the Event category may be represented by the Location category). In this application scenario, the functions may include: a CreateEvent function, for creating a schedule item; a DeleteEvent function, for deleting a schedule item; an UpdateEvent function, for updating an already created schedule item; and a RemindEvent function, for setting a reminder for a schedule item. The operators may be, for example, comparisons of specific parameter values of an instance, type judgments of attributes in an instance, and so on. It should be appreciated that a function can process categories through operators to realize the function's purpose. Illustratively, the CreateEvent function may include an Event category (this is only an illustrative representation; the number of categories that each function can call is determined by the developer in practice), and the attributes of the Event category are then assigned (e.g., through operator assignment) according to the constraints in the text to be recognized (for example, a Constraint[category] function may restrict the attribute values of the attributes in a category, or specify which category is to be processed), so as to obtain a specific Event instance, which is the schedule item created by the CreateEvent function. The code sequence can thus be obtained by combining functions, categories, and operators. For example, if the text to be recognized is "hold an example meeting at eight o'clock this morning", then in the generated code sequence the attribute value of the attribute corresponding to the start time of the Event instance is constrained to eight o'clock this morning, and the attribute value of the attribute corresponding to the meeting topic is constrained to "example meeting". It should be understood that not every attribute of every category needs a specific value; if the text to be recognized contains no restriction on an attribute, the attribute value of that attribute may be left empty. In other words, the user does not need to describe every attribute when entering the text to be recognized: "hold an example meeting at eight o'clock this morning" does not need to be expressed as "hold a two-hour example meeting from eight to ten o'clock this morning on the eighth floor". This conforms to the user's natural language habits, i.e., in an actual scenario the user does not need to care how each instance is specifically configured when entering the text to be recognized, and although the two expressions above carry different amounts of information, both can be expressed as code sequences that execute normally.
As another example, if the text to be recognized is "the weather is not hot today", its corresponding code sequence may include the following functions: an IsHot function, which compares an input temperature value with a temperature threshold to judge whether the input temperature counts as cold or hot, where the temperature threshold can be set by the developer according to the actual situation; the temperature value input to the IsHot function is obtained through a WeatherQueryApi (weather query API) function, which calls an external API, takes a place instance and a time instance as input, and outputs a temperature instance (i.e., the temperature value above); the place instance to be input to the WeatherQueryApi function is obtained through an AtPlace (get place) function, and the time instance is obtained through a Today function; the AtPlace function obtains the place instance above, which may be, for example, the current location of the electronic device; and the Today function obtains the time instance above, which may be, for example, the current time of the electronic device. The place instance and the time instance called in these functions are the instances corresponding to the place category and the time category, and the assignment of the values in these instances is represented by operators.
In a possible implementation, step S200 may include: obtaining the code sequence corresponding to the text to be recognized according to the text to be recognized and a trained generation model, where the trained generation model is obtained by training with training texts and the training code sequences corresponding to the training texts. In this case, step S200 may include: inputting the text to be recognized into the trained generation model to obtain the code sequence corresponding to the text to be recognized. The training code sequence corresponding to each training text is annotated by the developer, and by training the generation model on this basis, a generation model capable of converting a text to be recognized into a code sequence can be obtained. It should be understood that the specific model structure, parameter settings, and so on of the generation model may be determined according to the actual needs of the developer, and the embodiments of the present disclosure are not limited in this respect. Illustratively, the loss function of the generation model may also be set by the developer; for example, a loss value may be determined according to the difference between the training code sequence and the predicted code sequence generated from the training text, and the generation model may take reducing the loss value as the training target and continuously adjust its model parameters to obtain the trained generation model. The training texts may include historical chat messages from a number of real scenarios, so that the generation model can generate corresponding code sequences for texts to be recognized in those scenarios.
In a possible implementation, the trained generation model is obtained by training with training texts and the training translation texts corresponding to the training texts. A training translation text is used to represent a content description of the training code sequence corresponding to the training text. For example, the training text is a natural language oriented to humans, the training code sequence is a modular language oriented to machines, and the training translation text is a modular natural language interposed between the two, oriented to both humans and machines, which establishes the association between the training text and the code sequence. For example, the modular natural language may be obtained by translating the functions in a code sequence into natural language expressions: the training translation text corresponding to the CreateEvent function above is "create schedule", and the training translation text corresponding to the DeleteEvent function is "delete schedule". The training texts corresponding to the training translation text "create schedule" may be "hold a meeting", "set up an appointment", and the like. Step S200 may then include: inputting the text to be recognized into the trained generation model to obtain the translation text corresponding to the text to be recognized; and then, according to a preset correspondence between code sequences and translation texts, determining the code sequence corresponding to the translation text as the code sequence corresponding to the text to be recognized. For example, the correspondence may be set by the developer according to the actual situation, and the corresponding code sequence may be determined from the translation text; the embodiments of the present disclosure are not limited in this respect. In a real scenario, the naming of functions and categories is usually chosen by the developer and is somewhat subjective, often containing English abbreviations with weak semantic expressiveness. In this case, if the generation model directly predicts the code sequence, training is more difficult and accuracy cannot be guaranteed. Therefore, in the embodiments of the present disclosure, a correspondence between training translation texts and training code sequences is established (this correspondence can be understood as the correspondence between translation texts and code sequences, and it can be changed as the situation changes; for example, after a function is renamed, the correspondence can be re-established by changing the original function name in the correspondence, without retraining the generation model itself), and the generation model is trained with predicting the translation text as the training target. This not only reduces training difficulty but also improves the accuracy of the finally generated code sequence.
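A minimal sketch of this two-stage design follows, under the assumption that a trained generation model is available; here the model is stubbed out by a trivial keyword matcher so the example runs, and the translation texts and mapping entries are illustrative only.

    # Preset correspondence between translation texts and code sequences.
    # Renaming a function only requires editing this table, not retraining the model.
    TRANSLATION_TO_CODE = {
        "create schedule": "CreateEvent(Constraint[Event](...))",
        "delete schedule": "DeleteEvent(Constraint[Event](...))",
    }

    def generation_model(text: str) -> str:
        # Stand-in for the trained generation model: text to be recognized -> translation text.
        if "delete" in text or "cancel" in text:
            return "delete schedule"
        return "create schedule"

    def text_to_code_sequence(text_to_be_recognized: str) -> str:
        translation_text = generation_model(text_to_be_recognized)   # stage 1: model prediction
        return TRANSLATION_TO_CODE[translation_text]                  # stage 2: table lookup

    print(text_to_code_sequence("hold a meeting at eight o'clock this morning"))
    # -> CreateEvent(Constraint[Event](...))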
With continued reference to Fig. 1, in step S300 a directed acyclic graph corresponding to the code sequence is generated and stored. The directed acyclic graph is used to represent the reply content of the current reply request corresponding to the text to be recognized.
In a possible implementation, the code sequence includes categories and functions. Step S300 may include: generating, according to each category and each function in the code sequence, a node corresponding to each category and a node corresponding to each function; and then, according to the logical processing order between categories and functions, between functions, and between categories in the code sequence, connecting the corresponding nodes with directed edges to obtain the directed acyclic graph corresponding to the code sequence. The edges between nodes in the directed acyclic graph correspond to the input parameters passed between the nodes. The logical processing order may be determined, for example, by the nesting relationships between functions, between categories, and between functions and categories in the code sequence. For example: the Yield function (a function for outputting the result) outputs reply content based on the cold-or-hot result obtained from the IsHot function; the IsHot function obtains a weather table parameter from the WeatherQueryApi function and compares the temperature value in that table with a temperature threshold to obtain the cold-or-hot result; the WeatherQueryApi function obtains a place parameter from the AtPlace function and a time parameter from a Constraint[DateTime] function, and queries the weather table parameter through the place indicated by the place parameter, the time indicated by the time parameter, and a weather-related API, the weather table parameter representing information such as temperature, humidity, and air pollution level of that place at that time; the Constraint[DateTime] function generates a time instance of the time category by means of the Today function, and if the text to be recognized is "the weather is not hot today", the time instance is "today"; the AtPlace function generates a place instance of the place category by means of the Here function, and if the text to be recognized is "the weather is not hot today", the place instance is the current location of the electronic device, for example Shanghai. In this example, each function and each category corresponds to one node, and the directed acyclic graph corresponding to the code sequence can be obtained by connecting them with directed edges according to the logical processing order; the weather table parameter, the place parameter, and the time parameter passed between the functions are the input parameters mentioned above, which drive the functions to produce their results in order.
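The following sketch builds such a graph for the weather example above, using a plain adjacency list. Consistent with the description that a first node points to the second node it feeds, each edge runs from a node to the node that consumes its output; the edge labels (the parameters passed along each edge) and the data layout are illustrative assumptions.

    # Directed acyclic graph for the code sequence of "the weather is not hot today".
    edges = [
        ("Here",                 "AtPlace",              "current place"),
        ("AtPlace",              "WeatherQueryApi",      "place parameter"),
        ("Today",                "Constraint[DateTime]", "current time"),
        ("Constraint[DateTime]", "WeatherQueryApi",      "time parameter"),
        ("WeatherQueryApi",      "IsHot",                "weather table parameter"),
        ("IsHot",                "Yield",                "cold-or-hot result"),
    ]

    # Build and store the graph as an adjacency list keyed by source node.
    graph = {}
    for source, target, parameter in edges:
        graph.setdefault(source, []).append((target, parameter))
        graph.setdefault(target, [])   # make sure sink nodes are present too

    for node, outgoing in graph.items():
        for target, parameter in outgoing:
            print(f"{node} -> {target}  [{parameter}]")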
In a possible implementation, the nodes include a root node, leaf nodes, and ordinary nodes. The root node is the node corresponding to the first function in the logical processing order, the leaf nodes are the nodes corresponding to the last functions in the logical processing order, and the ordinary nodes are the nodes other than the root node and the leaf nodes. Continuing the above example, the node corresponding to the Yield function is the one called first in the logical processing order, so the node corresponding to the Yield function is the root node in this example. The nodes corresponding to the Today function and the Here function are called last in the logical processing order, so they are the leaf nodes. The nodes corresponding to the IsHot function, the WeatherQueryApi function, the AtPlace function, and the Constraint[DateTime] function are neither the first nor the last to be called, i.e., they are the ordinary nodes in this example. In this case, step S300 may include: calling the function corresponding to each node, from the root node to the leaf nodes, and taking the calling result corresponding to a leaf node as the result subgraph corresponding to that leaf node, where the result subgraph is used to store the generation process of the attribute values of the category corresponding to the leaf node; and finally saving the result subgraph into the directed acyclic graph. In the above example, the leaf nodes corresponding to the Today function and the Here function need to establish, respectively, the time instance "today" according to the time category and the place instance "Shanghai" according to the place category. The procedure for each instance is as follows: create the category, obtain the specific values of the attributes corresponding to the category, and assign those values to the category to obtain the instance corresponding to the category. According to the embodiments of the present disclosure, the generation process of the attribute values of the category corresponding to a leaf node is saved into the directed acyclic graph as a result subgraph, which enriches the data stored in the directed acyclic graph and makes the functions executed according to the directed acyclic graph traceable. In a possible implementation, between a first node and a second node that are connected in the directed acyclic graph, the first node transparently transmits, to the second node, a target attribute value corresponding to a target category. The target attribute value is an attribute value, among the at least one attribute value of the target category, whose value is not changed by the first node, and the first node points to the second node. For example, the node corresponding to the Yield function may be regarded as the second node and the node corresponding to the IsHot function as the first node: the IsHot function obtains the specific "hot" or "not hot" result and transmits it to the Yield function to generate the reply content; for example, the Yield function may generate an answer such as "it is not hot today" or "it is hot today", which reads more like natural language.
It should be understood that in this example the IsHot function may also serve as a second node, in which case the corresponding first node is the node of the WeatherQueryApi function: the WeatherQueryApi function obtains the weather table parameter and transmits it to the node corresponding to the IsHot function, and the IsHot function compares the temperature value in the weather table parameter with the temperature threshold to obtain the cold-or-hot result. In other words, depending on the direction between nodes, one node may act both as a first node and as a second node; the embodiments of the present disclosure are not limited in this respect.
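Continuing the graph sketch above, root, leaf, and ordinary nodes can be read off from the in- and out-degrees of the nodes; interpreting "first/last in the logical processing order" this way, and the layout of the result subgraphs, are assumptions made only for illustration.

    edges = [("Here", "AtPlace"), ("AtPlace", "WeatherQueryApi"),
             ("Today", "Constraint[DateTime]"), ("Constraint[DateTime]", "WeatherQueryApi"),
             ("WeatherQueryApi", "IsHot"), ("IsHot", "Yield")]

    sources = {s for s, _ in edges}
    targets = {t for _, t in edges}
    nodes = sources | targets

    root_nodes = nodes - sources          # no outgoing edge: the outermost function, called first (Yield)
    leaf_nodes = nodes - targets          # no incoming edge: called last (Today, Here)
    ordinary_nodes = nodes - root_nodes - leaf_nodes

    print(sorted(root_nodes))             # ['Yield']
    print(sorted(leaf_nodes))             # ['Here', 'Today']
    print(sorted(ordinary_nodes))         # ['AtPlace', 'Constraint[DateTime]', 'IsHot', 'WeatherQueryApi']

    # A result subgraph for a leaf node records how the attribute values of its category were generated.
    result_subgraphs = {
        "Today": {"category": "DateTime", "attribute_values": {"value": "today"},
                  "generated_from": "current time of the electronic device"},
        "Here":  {"category": "Location", "attribute_values": {"place": "Shanghai"},
                  "generated_from": "current location of the electronic device"},
    }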
In a possible implementation, the processing method further includes: in the case of performing a context reply content query for the text to be recognized, querying the directed acyclic graph corresponding to each text before or after the text to be recognized, and taking the queried directed acyclic graphs as the context reply content corresponding to the text to be recognized. According to the embodiments of the present disclosure, reply content can be stored in the form of directed acyclic graphs, and since each text corresponds to a directed acyclic graph, the reply content corresponding to each text is traceable. The context reply content query may be invoked by a task-oriented dialogue system in the related art, which is not described in detail here; for example, the context reply content may be used to assist in understanding the text to be recognized, to modify the context reply content, and so on.
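A small sketch of such a per-turn store follows, assuming the graphs are simply kept in dialogue order; the data layout and window size are illustrative.

    # One directed acyclic graph per text, stored in dialogue order.
    dialogue_graphs = []   # each element: {"text": ..., "graph": ...}

    def record_turn(text, graph):
        dialogue_graphs.append({"text": text, "graph": graph})

    def context_reply_content(turn_index, window=2):
        # Return the graphs of the texts before and after the given turn as its context reply content.
        before = dialogue_graphs[max(0, turn_index - window):turn_index]
        after = dialogue_graphs[turn_index + 1:turn_index + 1 + window]
        return before + after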
In a possible implementation, the processing method further includes: in the case that the text to be recognized is to update a target node in a target directed acyclic graph, acquiring the target directed acyclic graph and performing either of the following operations: updating the target node of the target directed acyclic graph; or copying the target directed acyclic graph, updating the target node of the copied target directed acyclic graph, and taking the updated copy as the directed acyclic graph corresponding to the text to be recognized. For example, in the case where it is determined that the text to be recognized requests a change to the target directed acyclic graph (e.g., determined by a trained machine learning model or by detecting keywords), the target node of the target directed acyclic graph may be updated in place, which is simpler and faster and does not increase the total number of directed acyclic graphs. Alternatively, the target directed acyclic graph may be copied, the target node of the copied graph updated, and the updated copy taken as the directed acyclic graph corresponding to the text to be recognized; in this way every modification can be traced back. For example: the first text to be recognized is "go to Shanghai for a meeting today", the second text to be recognized is "no, go to Suzhou instead", and the third text to be recognized is "no, not Suzhou, keep the original place". In this example the meeting place is modified several times. If the modification were applied directly to the directed acyclic graph corresponding to the first text to be recognized, then the graph corresponding to the second text would merely be the first graph modified in place; the "original place" would be discarded the moment the directed acyclic graph of the second text is generated, so when the third text refers back to it, the information has been lost. By copying the graph and modifying the copy, no information is lost, which helps improve the traceability of the reply content.
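A sketch of the copy-then-update option follows, assuming graphs are kept as plain dictionaries keyed by node name; the keys and values are illustrative, not the claimed data structure.

    import copy

    # Directed acyclic graph for "go to Shanghai for a meeting today" (attribute values only, simplified).
    graph_v1 = {
        "CreateEvent": {},
        "Location": {"place": "Shanghai"},
        "DateTime": {"day": "today"},
    }

    def update_with_copy(target_graph, target_node, new_values):
        # Copy the target directed acyclic graph, update the target node of the copy, and return the copy.
        updated = copy.deepcopy(target_graph)
        updated[target_node].update(new_values)
        return updated

    # "no, go to Suzhou instead" -> a new graph; graph_v1 still keeps the original place.
    graph_v2 = update_with_copy(graph_v1, "Location", {"place": "Suzhou"})

    # "no, not Suzhou, keep the original place" -> the original place can still be read from graph_v1.
    graph_v3 = update_with_copy(graph_v2, "Location", {"place": graph_v1["Location"]["place"]})
    print(graph_v1["Location"], graph_v2["Location"], graph_v3["Location"])
    # {'place': 'Shanghai'} {'place': 'Suzhou'} {'place': 'Shanghai'}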
In a possible implementation, before step S100 the processing method may include: acquiring an initial recognition text, where the initial recognition text represents an initial reply request. The initial recognition text is obtained by converting speech to be recognized or is received by the electronic device, and is used to represent the initial reply request. In one example, the initial recognition text may be manually entered by the user. In another example, the processing method may include: acquiring initial recognition speech, and then converting the initial recognition speech into the initial recognition text. The embodiments of the present disclosure can thus support both text-input and speech-input application scenarios. The initial recognition text may be obtained through user input in a text box of a display interface. The initial recognition speech may be collected by a speech collection device of the electronic device in the related art and converted into the initial recognition text through a recognition model or recognition algorithm in the related art, which is not described in detail here. A first directed acyclic graph corresponding to the initial recognition text is then acquired. The first directed acyclic graph is used to represent the reply content of the reply request corresponding to the initial recognition text, and is generated from the code sequence corresponding to the initial recognition text.
The directed acyclic graph is used to represent the reply content, for a reply request, corresponding to a text. An association category is used to represent a category that is associated with an initial recognition text, where the initial recognition text is a text preceding the text to be recognized. Illustratively, the directed acyclic graph is formed by connecting a plurality of nodes with directed edges according to the logical processing order, each node corresponding to a function or a category. The embodiments of the present disclosure do not limit the specific manner of generating the association category. In one example, the association category is a category that exists in the code sequence corresponding to the initial recognition text, and the processing method further includes either of the following: taking the text in the text to be recognized that matches a preset field as the association category; or inputting the text to be recognized into an association category detection model and taking the output field as the association category. For example, when specific text appears, it may be taken as the association category through a rule matching algorithm, for example text that refers back to earlier content such as "that day" or "that place" (i.e., the referencing operation described later); or the text appearing after specific text may be taken as the association category, for example for text that modifies earlier content such as "change it to", the association category is what follows (e.g., "10 o'clock" in "change it to 10 o'clock" is the association category); or the association category in the text to be recognized may be output by a model (the process of outputting the association category by a model is described later).
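A toy version of the rule-matching route is sketched below, with an assumed keyword table and pattern; a real system could instead use the association category detection model mentioned above.

    import re

    # Preset fields that refer back to earlier content, mapped to the category they stand for (assumed).
    REFERENCE_FIELDS = {"that day": "DateTime", "that place": "Location"}

    # Pattern for modification phrases: the text after "change it to" is the association category.
    MODIFY_PATTERN = re.compile(r"change it to\s+(.+)")

    def detect_association_category(text_to_be_recognized):
        for field_text, category in REFERENCE_FIELDS.items():
            if field_text in text_to_be_recognized:
                return field_text, category        # referencing: category with no attribute value yet
        match = MODIFY_PATTERN.search(text_to_be_recognized)
        if match:
            return match.group(1), "DateTime"      # modifying: value given in the text (category inference simplified)
        return None

    print(detect_association_category("how is the weather that day"))   # ('that day', 'DateTime')
    print(detect_association_category("change it to 10 o'clock"))       # ("10 o'clock", 'DateTime')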
In a possible implementation, step S300 may further include: determining a second directed acyclic graph corresponding to the text to be recognized according to the association category between the text to be recognized and the initial recognition text, and the first directed acyclic graph, where the second directed acyclic graph is used to represent the reply content of the current reply request.
In a possible implementation, the association category is a category for which no attribute value has been set, and determining the second directed acyclic graph may include: matching the association category against each node in the first directed acyclic graph. In one example, matching the association category against each node in the first directed acyclic graph includes: comparing the association category with each node through a preset operator, and, in the case that the association category and a target node belong to the same category, taking that target node as a successfully matched node. The preset operator may be, for example, "=" or the like. When any node is successfully matched and it is determined that the operation corresponding to the text to be recognized is a referencing operation, the attribute value corresponding to the successfully matched node is taken as the attribute value corresponding to the association category, where the referencing operation indicates that the text to be recognized refers to the initial recognition text. For example, if the text to be recognized entered by the user is "how is the weather that day" and the initial recognition text is "have a meeting on June 2", the association category in the text to be recognized is "that day". Taken on its own, "that day" is only a time category rather than a time instance, i.e., it has no specific attribute value; in other words, the electronic device cannot obtain the specific date of "that day" (that is, the association category has no attribute value set, and the operation corresponding to the text to be recognized is a referencing operation). In this case, a node query for a time category may be performed on the first directed acyclic graph, and if a node corresponding to a time category is found, the attribute value corresponding to that node is taken as the attribute value of "that day". For example, the directed acyclic graph corresponding to "have a meeting on June 2" may contain a node corresponding to "June 2"; in this example "June 2" is the node corresponding to the time category, so "June 2" can be taken as "that day". Illustratively, a successful match means that the association category is the same as, or has a correspondence with, the category corresponding to a node; for example, matching may be performed through the input parameters between nodes and/or through the categories corresponding to the nodes. As another example, if the association category is a person category, the nodes in the first directed acyclic graph are searched until a node corresponding to a person category is found, at which point the match is considered successful; the found person category is taken as the category that successfully matches the association category, and the attribute value corresponding to the found person category (which may be the attribute value of the person instance corresponding to that category) is assigned to the association category so as to instantiate it and complete the referencing operation of the text to be recognized. In another example, in the event of a match failure, a prompt may be generated to inform the user that the corresponding attribute values of some or all attributes of the association category need to be entered. Finally, the second directed acyclic graph is obtained according to the attribute value corresponding to the association category and the code sequence corresponding to the text to be recognized.
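The referencing operation can be sketched as a category lookup over the nodes of the first directed acyclic graph; the graph layout mirrors the earlier sketches and is assumed, not prescribed.

    # First directed acyclic graph, for "have a meeting on June 2" (nodes keyed by name, simplified).
    first_graph = {
        "CreateEvent": {"category": "Event",    "attribute_values": {}},
        "June 2":      {"category": "DateTime", "attribute_values": {"month": 6, "day": 2}},
    }

    def resolve_reference(association_category, graph):
        # Match an association category (no attribute value set) against the graph's nodes.
        for node_name, node in graph.items():
            if node["category"] == association_category:   # same category -> successful match
                return node["attribute_values"]             # reuse the matched node's attribute values
        return None                                         # match failure: prompt the user for the values

    # "how is the weather that day": "that day" is a DateTime category with no value of its own.
    print(resolve_reference("DateTime", first_graph))       # {'month': 6, 'day': 2}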
In a possible implementation, the association category is a category for which an attribute value has been set, and obtaining the second directed acyclic graph according to at least one initial recognition text includes: matching the association category against each node in the at least one directed acyclic graph corresponding to the at least one initial recognition text; in the case that the association category is successfully matched with any node and it is determined that the operation corresponding to the text to be recognized is a modifying operation, copying the target directed acyclic graph, among the at least one directed acyclic graph, to which the successfully matched target node belongs, where the modifying operation indicates that the text to be recognized modifies the initial recognition text; and setting the attribute value corresponding to the target node in the copied target directed acyclic graph to the attribute value corresponding to the association category, and taking the result as the second directed acyclic graph. For example: the text to be recognized is "let's go to place B instead" (in this example, "place B" is the association category, a place category for which the attribute value has been set), the initial recognition text is "go to place A today", and the initial recognition text corresponds to directed acyclic graph A (i.e., the target directed acyclic graph above). Directed acyclic graph A is copied to obtain directed acyclic graph B, and the attribute value of the node corresponding to "place A" in directed acyclic graph B (i.e., the attribute value corresponding to the target node) is updated to "place B", giving the directed acyclic graph for "let's go to place B instead". In the embodiments of the present disclosure, when the operation corresponding to the text to be recognized is a modifying operation, the copied directed acyclic graph can be modified without changing the original target directed acyclic graph, so the modifying operation does not cause information loss. For example, the information about place A is retained, and when the user enters a new text to be recognized, for example "actually, let's keep the previous place", place A can still be queried and a directed acyclic graph can be generated based on the new text to be recognized. For the matching process, reference may be made to the description above, which is not repeated here.
In a possible implementation, determining the operation corresponding to the text to be recognized includes: inputting the text to be recognized and the initial recognition text into a trained classification model to obtain an operation identifier corresponding to the text to be recognized, where the trained classification model is obtained by training with training texts, the texts preceding the training texts, and the operation identifiers corresponding to the training texts, and the operation identifier is any one of: a referencing operation, a modifying operation, no referencing operation, no modifying operation, or neither a referencing nor a modifying operation; and then determining the operation corresponding to the text to be recognized according to the operation identifier corresponding to the text to be recognized. The operation identifiers may, for example, be annotated by the developer. The embodiments of the present disclosure do not limit the model structure or training method of the classification model, and the developer can set them according to the actual situation. For example, the training text and the texts preceding it may be input into the classification model to obtain a predicted operation identifier, and the model may be trained with reducing the difference between the predicted operation identifier and the annotated operation identifier as the training target, so as to obtain the trained classification model. According to the embodiments of the present disclosure, the operation corresponding to the text to be recognized can be determined through the trained classification model, thereby improving the accuracy of determining that operation.
In a possible implementation, the processing method further includes: deleting directed acyclic graphs according to a preset rule in the case that the total number of generated directed acyclic graphs is larger than a preset total number, or the storage space occupied by the generated directed acyclic graphs is larger than a preset space. Illustratively, the preset rule may include: sorting the directed acyclic graphs by generation time and preferentially deleting the ones generated earliest; sorting the directed acyclic graphs by frequency of use and preferentially deleting the least frequently used ones; and so on. The specific values of the preset total number and the preset space size can also be determined by the developer according to actual requirements.
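A compact sketch of such a bounded store follows, assuming deletion by earliest generation time; the threshold value and record layout are arbitrary illustrations.

    import time

    PRESET_TOTAL = 100                    # preset total number of stored graphs (illustrative)
    graph_store = []                      # each element: {"created": ..., "uses": ..., "graph": ...}

    def store_graph(graph):
        graph_store.append({"created": time.time(), "uses": 0, "graph": graph})
        if len(graph_store) > PRESET_TOTAL:
            # Preset rule: delete the directed acyclic graph with the earliest generation time first.
            graph_store.sort(key=lambda entry: entry["created"])
            del graph_store[0]
            # An alternative rule would sort by entry["uses"] and drop the least frequently used graph.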
In a possible implementation, step S300 may include: for each function in the code sequence, determining the instance corresponding to at least one category in the function through the application programming interface corresponding to the function or through the corresponding entity information in the text to be recognized, where an instance is a category for which at least one attribute value has been set; and executing the functions having instances in the code sequence in sequence to generate the reply content. A function with an instance can direct the electronic device to accurately perform the corresponding operation, so that the function can realize the functionality predetermined by the developer. For example, for the text "meeting at 10 o'clock today", one function is a function for creating a meeting, which includes a function for querying the time of "today" (a time-query application programming interface may be called to generate the instance corresponding to "today", whose category is a time category and whose attribute values may include year, month, day, hour, etc.). Taking "today" as October 13, 2020 as an example, the meeting-creation function can then create a meeting at 10 o'clock on October 13, 2020.
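To show how functions with instances can be executed in sequence to produce reply content, here is a runnable sketch for the weather example; the temperature threshold, the stubbed weather lookup, and the reply wording are assumptions rather than the disclosed implementation.

    from datetime import date

    HOT_THRESHOLD_CELSIUS = 30          # temperature threshold set by the developer (assumed value)

    def here():
        return "Shanghai"               # stands in for the current location of the electronic device

    def today():
        return date.today()             # current time of the electronic device

    def weather_query_api(place, day):
        # Stands in for the external weather API; returns a weather table parameter.
        return {"temperature": 24, "humidity": 0.6, "air_pollution": "low"}

    def is_hot(weather_table):
        return weather_table["temperature"] > HOT_THRESHOLD_CELSIUS

    def yield_reply(hot):
        return "It is hot today." if hot else "It is not hot today."

    # The nested call mirrors the code sequence; evaluating it walks the directed acyclic graph
    # and produces the reply content of the current reply request.
    reply_content = yield_reply(is_hot(weather_query_api(here(), today())))
    print(reply_content)                # -> It is not hot today.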
It will be appreciated that the above method embodiments of the present disclosure may be combined with one another to form combined embodiments without departing from their principles and logic; for brevity, such combinations are not described in detail in the present disclosure. Those skilled in the art will also appreciate that, in the methods of the above specific embodiments, the specific order of execution of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure further provides an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the reply content processing methods provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding descriptions in the method section, which are not repeated here.
Referring to Fig. 2, Fig. 2 shows a block diagram of a reply content processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 2, the processing apparatus 100 includes: a text acquisition module 110, configured to acquire a text to be recognized, where the text to be recognized is obtained by converting speech to be recognized or is received by an electronic device, and is used to represent a current reply request; a code sequence generation module 120, configured to generate, according to the text to be recognized, a code sequence corresponding to the text to be recognized, where the code sequence is a sequence that represents the current reply request by means of code elements, and the code elements include one or more of categories, functions, and operators; and a graph generation module 130, configured to generate and store a directed acyclic graph corresponding to the code sequence, where the directed acyclic graph is used to represent the reply content of the current reply request corresponding to the text to be recognized.
In a possible implementation manner, generating the directed acyclic graph corresponding to the code sequence includes: generating, according to each category and each function in the code sequence, a node corresponding to each category and a node corresponding to each function; and directionally connecting the corresponding nodes according to the logic processing order between categories and functions, between functions, and between categories in the code sequence, so as to obtain the directed acyclic graph corresponding to the code sequence, where the connection lines between nodes in the directed acyclic graph correspond to the input parameters transferred between the nodes.
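The following sketch, under assumed node and edge structures, shows one way such a graph could be assembled from a code sequence, with each directed connection carrying the input parameter passed between nodes; the Dag and Node names and the example code sequence are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "category" or "function"

@dataclass
class Dag:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (source, target, input_parameter)

    def add_node(self, name: str, kind: str) -> Node:
        node = Node(name, kind)
        self.nodes.append(node)
        return node

    def connect(self, source: Node, target: Node, input_parameter: str) -> None:
        # Directed connection; the connection line corresponds to the parameter passed on.
        self.edges.append((source, target, input_parameter))

# Example: the code sequence CreateMeeting(QueryTime(time)) in its processing order.
dag = Dag()
time_node = dag.add_node("time", "category")
query_node = dag.add_node("QueryTime", "function")
create_node = dag.add_node("CreateMeeting", "function")
dag.connect(time_node, query_node, input_parameter="time")
dag.connect(query_node, create_node, input_parameter="resolved_time")
```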
In one possible implementation, the nodes include a root node, leaf nodes and common nodes; the root node is the node corresponding to the first function in the logic processing order; a leaf node is a node corresponding to the last function in the logic processing order; and the common nodes are the nodes other than the root node and the leaf nodes.
In a possible implementation manner, generating the directed acyclic graph corresponding to the code sequence further includes: calling the function corresponding to each node from the root node to the leaf node; taking the call result corresponding to the leaf node as a result subgraph corresponding to the leaf node, where the result subgraph is used for storing the generation process of the attribute values of the category corresponding to the leaf node; and saving the result subgraph to the directed acyclic graph.
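As an illustrative sketch only, the function below walks a single root-to-leaf path, calls the function bound to each node, and stores the trace of attribute-value generation as a result subgraph on the graph itself; the dict-based graph layout and the call_from_root name are assumptions, not the disclosed data structures.

```python
def call_from_root(dag: dict, root: str, leaf: str, functions: dict, initial_value):
    trace = []                                   # records how each attribute value was produced
    value = initial_value
    node = root
    while node is not None:
        value = functions[node](value)           # call the function bound to this node
        trace.append((node, value))
        node = dag.get("next", {}).get(node)     # follow the single path toward the leaf
    # Keep the generation process of the leaf's attribute values as a result subgraph.
    dag.setdefault("result_subgraphs", {})[leaf] = trace
    return value

# Usage with a two-node path QueryTime -> CreateMeeting.
dag = {"next": {"QueryTime": "CreateMeeting", "CreateMeeting": None}}
functions = {
    "QueryTime": lambda hour: f"2020-10-13 {hour}",
    "CreateMeeting": lambda when: f"meeting created for {when}",
}
result = call_from_root(dag, "QueryTime", "CreateMeeting", functions, "10:00")
```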
In a possible implementation manner, between a first node and a second node that are connected in the directed acyclic graph, the first node passes a target attribute value corresponding to a target category through to the second node, where the target attribute value is an attribute value, among the at least one attribute value of the target category, whose value is not changed by the first node, and the first node points to the second node.
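A minimal sketch of this pass-through behavior, assuming attribute values are kept in plain dictionaries; the pass_through name is illustrative.

```python
def pass_through(first_output: dict, target_attributes: dict) -> dict:
    """first_output: attribute values changed by the first node.
    target_attributes: all attribute values of the target category."""
    forwarded = dict(target_attributes)   # unchanged values are passed through as-is
    forwarded.update(first_output)        # changed values override the originals
    return forwarded

# The 'hour' value is changed by the first node; year/month/day pass through unchanged.
second_node_input = pass_through(
    {"hour": 10},
    {"year": 2020, "month": 10, "day": 13, "hour": 0},
)
```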
In one possible embodiment, the processing device is further configured to: when a context reply content query is performed for the text to be recognized, query the directed acyclic graph corresponding to each text before or after the text to be recognized, and take the queried directed acyclic graphs as the context reply content corresponding to the text to be recognized.
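For illustration, the sketch below looks up the stored graphs of texts adjacent to the text to be recognized and returns them as context reply content; dialogue_texts, dag_store and the window parameter are assumed in-memory structures, not part of the disclosure.

```python
def query_context(dialogue_texts: list, dag_store: dict, text: str, window: int = 1):
    # Position of the current text in the dialogue; raises ValueError if not present.
    idx = dialogue_texts.index(text)
    before = dialogue_texts[max(0, idx - window):idx]
    after = dialogue_texts[idx + 1:idx + 1 + window]
    # The directed acyclic graphs of the surrounding texts serve as the context reply content.
    return [dag_store[t] for t in before + after if t in dag_store]
```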
In one possible embodiment, the processing device is further configured to: when the text to be recognized is intended to update a target node in a target directed acyclic graph, acquire the target directed acyclic graph and perform either of the following operations: updating the target node of the target directed acyclic graph; or copying the target directed acyclic graph, updating the target node of the copied target directed acyclic graph, and taking the updated target directed acyclic graph as the directed acyclic graph corresponding to the text to be recognized.
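The two update strategies could be sketched as follows, assuming the graph is held as a plain mapping from node names to node data; update_in_place and update_on_copy are illustrative names.

```python
import copy

def update_in_place(target_dag: dict, target_node: str, new_value) -> dict:
    target_dag[target_node] = new_value        # update the target node directly
    return target_dag

def update_on_copy(target_dag: dict, target_node: str, new_value) -> dict:
    copied = copy.deepcopy(target_dag)         # keep the original graph intact
    copied[target_node] = new_value
    return copied                              # graph used for the new text to be recognized
```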
In one possible embodiment, the processing device is further configured to: and deleting the directed acyclic graph according to a preset rule under the condition that the total number of the generated directed acyclic graphs is larger than a preset total number or the size of the storage space occupied by the generated directed acyclic graph is larger than the preset space.
In a possible implementation manner, the text to be recognized includes at least one phrase, and each phrase includes at least one piece of entity information; the code sequence corresponding to the text to be recognized includes a function corresponding to each of the at least one phrase, and the function includes the category corresponding to each piece of entity information in the phrase; the same entity information in different phrases corresponds to the same category, and the functions corresponding to the phrases of different texts to be recognized may be the same or different.
In one possible embodiment, the processing device is further configured to: for each function in the code sequence, determine an instance corresponding to at least one category in the function through an application programming interface corresponding to the function or through the corresponding entity information in the text to be recognized, where an instance is a category whose at least one attribute value has been set; and sequentially execute the functions in the code sequence whose instances have been determined, to generate the reply content of the current reply request.
In one possible implementation, the categories are used to represent categories of entity information in text, the functions are used to represent operations for the categories, and the operators are used to represent constraints for the categories.
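As a hedged illustration of these three kinds of code elements, the sketch below represents them as simple data classes and composes a code sequence for "meeting at 10 o'clock today"; the names Category, Function and Operator and the nesting convention are assumptions, not the disclosed encoding.

```python
from dataclasses import dataclass

@dataclass
class Category:
    name: str             # category of entity information, e.g. "time"

@dataclass
class Function:
    name: str             # an operation on categories, e.g. "CreateMeeting"
    arguments: tuple      # the categories or nested functions it operates on

@dataclass
class Operator:
    name: str             # a constraint on a category, e.g. "after"
    operands: tuple

# "Meeting at 10 o'clock today" as a code sequence of nested code elements.
code_sequence = Function("CreateMeeting", (Function("QueryTime", (Category("time"),)),))
# A constraint such as "after 10 o'clock" could be expressed with an operator.
constraint = Operator("after", (Category("time"), "10:00"))
```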
In a possible implementation manner, generating, according to the text to be recognized, the code sequence corresponding to the text to be recognized includes: obtaining the code sequence corresponding to the text to be recognized according to the text to be recognized and a trained generation model, where the trained generation model is obtained by training with training texts and training code sequences corresponding to the training texts, or by training with training texts and training translation texts corresponding to the training texts, and a training translation text is used for representing a content description of the training code sequence corresponding to the training text.
The method of the present disclosure has a specific technical association with the internal structure of a computer system and can solve technical problems of how to improve hardware operation efficiency or execution effects (including reducing the amount of data to be stored, reducing the amount of data to be transmitted, increasing the hardware processing speed, and the like), thereby obtaining technical effects of improving the internal performance of the computer system in conformity with the laws of nature.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal device, a server or other form of device.
Referring to fig. 3, fig. 3 illustrates a block diagram of an electronic device 1900 provided in accordance with an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server or terminal device. Referring to FIG. 3, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Microsoft Windows Server™, Apple's graphical-user-interface-based Mac OS X™, the multi-user multi-process operating system Unix™, the free and open-source Unix-like operating system Linux™, the open-source Unix-like operating system FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing descriptions of the various embodiments emphasize the differences between the embodiments; for the parts that are the same or similar, reference may be made between the embodiments, and such parts are not repeated here for brevity.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
If the technical solution of the present application involves personal information, a product applying the technical solution of the present application clearly informs users of the personal information processing rules and obtains the individual's voluntary consent before processing the personal information. If the technical solution of the present application involves sensitive personal information, a product applying the technical solution of the present application obtains the individual's separate consent before processing the sensitive personal information, and at the same time satisfies the requirement of "explicit consent". For example, a clear and conspicuous sign may be placed at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, when the personal information processing rules are communicated by means of conspicuous signs or notices, personal authorization may be obtained through a pop-up message or by asking the individual to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of the personal information processing, the processing manner, and the types of personal information to be processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

1. A method of processing reply content, the method comprising:
acquiring a text to be recognized; wherein the text to be recognized is obtained by converting speech to be recognized or acquired through an electronic device, and is used for representing a current reply request;
generating a code sequence corresponding to the text to be recognized according to the text to be recognized; the code sequence is a sequence for representing the current reply request through code elements, wherein the code elements comprise one or more of categories, functions and operators;
generating and storing a directed acyclic graph corresponding to the code sequence; the directed acyclic graph is used for representing reply content of a current reply request corresponding to the text to be identified.
2. The processing method of claim 1, wherein the generating the directed acyclic graph corresponding to the code sequence comprises:
generating a node corresponding to each category and a node corresponding to each function according to each category and each function in the code sequence;
and directionally connecting the corresponding nodes according to the logic processing order between categories and functions, between functions, and between categories in the code sequence, to obtain the directed acyclic graph corresponding to the code sequence; wherein the connection lines between nodes in the directed acyclic graph correspond to input parameters transferred between the nodes.
3. The processing method of claim 2, wherein the node comprises: root node, leaf node, common node; the root node is a node corresponding to a first function in the logic processing sequence; the leaf nodes are nodes corresponding to the last function in the logic processing sequence; the common nodes are other nodes except root nodes and leaf nodes.
4. The processing method of claim 3, wherein generating the directed acyclic graph corresponding to the code sequence further comprises:
calling a function corresponding to each node from the root node to the leaf node;
taking the calling result corresponding to the leaf node as a result subgraph corresponding to the leaf node; the result subgraph is used for storing the generation process of the attribute values of the categories corresponding to the leaf nodes;
and saving the result subgraph to the directed acyclic graph.
5. The processing method of claim 1, wherein, between a first node and a second node that are connected in the directed acyclic graph, the first node passes a target attribute value corresponding to a target category through to the second node; the target attribute value is an attribute value, among at least one attribute value of the target category, whose value is not changed by the first node; and the first node points to the second node.
6. The processing method according to claim 1, characterized in that the processing method further comprises: and under the condition of carrying out context reply content query on the text to be identified, querying a directed acyclic graph corresponding to each text before or after the text to be identified, and taking the directed acyclic graph as the context reply content corresponding to the text to be identified.
7. The processing method according to claim 1, characterized in that the processing method further comprises: and under the condition that the text to be identified is to update a target node in a target directed acyclic graph, acquiring the target directed acyclic graph, and executing any one of the following operations: updating the target node of the target directed acyclic graph; or copying the target directed acyclic graph, updating target nodes of the copied target directed acyclic graph, and taking the updated target directed acyclic graph as the directed acyclic graph corresponding to the text to be identified.
8. The processing method according to claim 1, characterized in that the processing method further comprises: and deleting the directed acyclic graph according to a preset rule under the condition that the total number of the generated directed acyclic graphs is larger than a preset total number or the size of the storage space occupied by the generated directed acyclic graph is larger than the preset space.
9. The processing method of claim 1, wherein the text to be recognized includes at least one phrase, and each phrase includes at least one piece of entity information; the code sequence corresponding to the text to be recognized includes: a function corresponding to each of the at least one phrase, the function including the category corresponding to each piece of entity information in the phrase; wherein the same entity information in different phrases corresponds to the same category, and the functions corresponding to the phrases of different texts to be recognized are the same or different.
10. The processing method according to claim 1, characterized in that the processing method further comprises:
for each function in the code sequence, determining an instance corresponding to at least one category in the function through an application programming interface corresponding to each function or corresponding entity information in the text to be identified; wherein the instance is a category in which at least one attribute value is set;
and sequentially executing the functions in the code sequence whose instances have been determined, to generate the reply content of the current reply request.
11. The processing method of claim 1, wherein the category is to represent a category of entity information in text, the function is to represent an operation for the category, and the operator is to represent a constraint for the category.
12. The processing method according to any one of claims 1 to 11, wherein the generating, according to the text to be recognized, a code sequence corresponding to the text to be recognized includes: obtaining a code sequence corresponding to the text to be recognized according to the text to be recognized and the trained generation model; the trained generation model is obtained through training a training text and a training code sequence corresponding to the training text, or is obtained through training a training text and a training translation text corresponding to the training text; the training translation text is used for representing the content description of the training code sequence corresponding to the training text.
13. A processing apparatus for replying to content, the processing apparatus comprising:
the text acquisition module is configured to acquire a text to be recognized; wherein the text to be recognized is obtained by converting speech to be recognized or acquired through an electronic device, and is used for representing a current reply request;
the code sequence generation module is used for generating a code sequence corresponding to the text to be identified according to the text to be identified; the code sequence is a sequence for representing the current reply request through code elements, wherein the code elements comprise one or more of categories, functions and operators;
the diagram generating module is used for generating and storing a directed acyclic diagram corresponding to the code sequence; the directed acyclic graph is used for representing reply content of a current reply request corresponding to the text to be identified.
14. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of processing reply content of any of claims 1 to 12.
15. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of processing reply content according to any one of claims 1 to 12.
CN202311120151.6A 2023-08-31 2023-08-31 Reply content processing method and device, electronic equipment and storage medium Pending CN117077790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311120151.6A CN117077790A (en) 2023-08-31 2023-08-31 Reply content processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311120151.6A CN117077790A (en) 2023-08-31 2023-08-31 Reply content processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117077790A true CN117077790A (en) 2023-11-17

Family

ID=88704028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311120151.6A Pending CN117077790A (en) 2023-08-31 2023-08-31 Reply content processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117077790A (en)

Similar Documents

Publication Publication Date Title
US10958598B2 (en) Method and apparatus for generating candidate reply message
US11394667B2 (en) Chatbot skills systems and methods
US11347783B2 (en) Implementing a software action based on machine interpretation of a language input
CN111026842B (en) Natural language processing method, natural language processing device and intelligent question-answering system
US20190103111A1 (en) Natural Language Processing Systems and Methods
CN109840089A (en) The system and method for carrying out visual analysis and programming for the session proxy to electronic equipment
KR20180008247A (en) Platform for providing task based on deep learning
CN110807566A (en) Artificial intelligence model evaluation method, device, equipment and storage medium
CN110162675B (en) Method and device for generating answer sentence, computer readable medium and electronic device
US20220051662A1 (en) Systems and methods for extraction of user intent from speech or text
WO2023142451A1 (en) Workflow generation methods and apparatuses, and electronic device
CN116737910B (en) Intelligent dialogue processing method, device, equipment and storage medium
CN112650842A (en) Human-computer interaction based customer service robot intention recognition method and related equipment
CN114328980A (en) Knowledge graph construction method and device combining RPA and AI, terminal and storage medium
CN112582073B (en) Medical information acquisition method, device, electronic equipment and medium
WO2021063089A1 (en) Rule matching method, rule matching apparatus, storage medium and electronic device
JP2023540266A (en) Concept prediction for creating new intents and automatically assigning examples in dialogue systems
CN110928995B (en) Interactive information processing method, device, equipment and storage medium
CN112784024A (en) Man-machine conversation method, device, equipment and storage medium
CN114531334A (en) Intention processing method and device, electronic equipment and readable storage medium
CN117077790A (en) Reply content processing method and device, electronic equipment and storage medium
CN117196035A (en) Reply content processing method and device, electronic equipment and storage medium
CN117171318A (en) Reply content generation method and device, electronic equipment and storage medium
CN115543428A (en) Simulated data generation method and device based on strategy template
US11941414B2 (en) Unstructured extensions to rpa

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination