WO2021135603A1 - Intention recognition method, server and storage medium - Google Patents


Info

Publication number
WO2021135603A1
Authority
WO
WIPO (PCT)
Prior art keywords
original sentence
sentence information
named entity
shared named
shared
Application number
PCT/CN2020/125213
Other languages
English (en)
Chinese (zh)
Inventor
杨瑞东
张晴
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021135603A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an intention recognition method, server and storage medium.
  • With the rapid development of artificial intelligence technology, man-machine dialogue technology is being applied more and more widely in daily life.
  • The most important part of man-machine dialogue technology is the recognition of user intent, that is, recognizing the intent expressed by the voice data input by the user.
  • Existing intent recognition methods usually first convert the voice data input by the user into corresponding original sentence information, and then input the original sentence information into the trained intent recognition model to obtain the user's intent category.
  • However, the intent category determined directly by the intent recognition model is not necessarily the real intention that the user wants to express. It can be seen that when an existing intention recognition method processes a first round of human-machine dialogue whose original sentence information contains only a shared named entity, the error rate of intention recognition is high and the accuracy of intention recognition is low.
  • In view of this, the embodiments of the present application provide an intention recognition method, server, and storage medium, which can reduce the error rate of intention recognition and improve the accuracy of intention recognition.
  • In a first aspect, an embodiment of the present application provides an intention recognition method, including:
  • obtaining the original sentence information of the user;
  • inputting the original sentence information into a preset shared named entity analysis engine to obtain an analysis result output by the shared named entity analysis engine;
  • if the analysis result indicates that the original sentence information contains only a shared named entity, detecting whether the target dialogue round corresponding to the original sentence information is the first round of dialogue; and
  • if the target dialogue round is the first round of dialogue, outputting the intent category corresponding to the shared named entity category to which the shared named entity belongs, and determining the target intent category selected by the user among the intent categories.
  • In a possible implementation, the inputting of the original sentence information into a preset shared named entity analysis engine to obtain an analysis result output by the shared named entity analysis engine includes:
  • performing a named entity recognition operation on the original sentence information to identify the named entities contained in the original sentence information;
  • identifying the shared named entities among the named entities according to a preset shared named entity category list, and determining the shared named entity category to which each shared named entity belongs;
  • determining the start position and end position of each shared named entity in the original sentence information; and
  • analyzing, according to the start position and end position of each of the shared named entities, whether the original sentence information includes only the shared named entity, and obtaining the analysis result.
  • In a possible implementation, if the end position of one of the candidate shared named entities is the end position of the original sentence information, it is determined that the original sentence information includes only the shared named entity.
  • In a possible implementation, the method further includes: executing in a loop the step of detecting whether there is a shared named entity whose start position is the position after the end position of any one of the candidate shared named entities, until all the shared named entities in the original sentence information have been traversed.
  • In a possible implementation, the shared named entity whose start position is the first position of the original sentence information is determined as a first target shared named entity, and the value of the flag bit corresponding to the end position of the first target shared named entity is updated to a second preset value;
  • if the value of the flag bit corresponding to the position before the start position of a second target shared named entity is the second preset value, the value of the flag bit corresponding to the end position of the second target shared named entity is updated to the second preset value;
  • after traversing all the shared named entities in the original sentence information, if it is detected that the value of the flag bit corresponding to the end position of the original sentence information is still the first preset value, it is determined that the original sentence information does not include only the shared named entity.
  • In a possible implementation, the method further includes:
  • if the target dialogue round is not the first round of dialogue, acquiring historical original sentence information of the user in a historical dialogue round before the target dialogue round; and
  • determining, according to the historical original sentence information, the target intention category corresponding to the original sentence information.
  • In a second aspect, an embodiment of the present application provides a server, including:
  • a first obtaining unit, configured to obtain the original sentence information of the user;
  • a second obtaining unit, configured to input the original sentence information into a preset shared named entity analysis engine to obtain the analysis result output by the shared named entity analysis engine;
  • a first detecting unit, configured to detect whether the target dialogue round corresponding to the original sentence information is the first round of dialogue if the analysis result indicates that the original sentence information contains only a shared named entity; and
  • a first determining unit, configured to, if the target dialogue round is the first round of dialogue, output the intent category corresponding to the shared named entity category to which the shared named entity belongs, and determine the target intent category selected by the user among the intent categories.
  • In another aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the intention recognition method described in the first aspect above.
  • In another aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the intention recognition method described in the first aspect.
  • In another aspect, the embodiments of the present application provide a computer program product which, when run on a server, causes the server to execute the intention recognition method described in the first aspect above.
  • In the embodiments of the present application, the original sentence information is not directly input into a traditional intent recognition model to determine the intent category expressed by the user. Instead, the original sentence information is input into a preset shared named entity analysis engine, which analyzes whether the original sentence information contains only a shared named entity. If the original sentence information contains only a shared named entity and the target dialogue round corresponding to the original sentence information is the first round of dialogue, the intent categories corresponding to the shared named entity category are output, and the user selects the expressed target intent category from among the intent categories. Because the target intent category is obtained through further confirmation by the user, the error rate of intent recognition can be reduced and the accuracy of intent recognition can be improved.
  • FIG. 1 is a schematic structural diagram of a human-machine dialogue system to which an intention recognition method provided by an embodiment of the present application is applicable;
  • FIG. 2 is a schematic flowchart of an intention recognition method provided by an embodiment of the present application.
  • FIG. 3 is a specific schematic flowchart of S22 in an intention recognition method provided by an embodiment of the present application.
  • FIG. 4 is a specific schematic flowchart of S224 in an intention recognition method provided by an embodiment of the present application.
  • FIG. 5 is a specific schematic flowchart of S224 in an intention recognition method provided by another embodiment of the present application.
  • FIG. 6 is a schematic flowchart of an intention recognition method provided by another embodiment of the present application.
  • FIG. 7 is a structural block diagram of a server provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a server provided by another embodiment of the present application.
  • Depending on the context, the term "if" can be construed as "when", "once", "in response to determining", or "in response to detecting".
  • Similarly, depending on the context, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • FIG. 1 is a schematic architecture diagram of a human-machine dialogue system to which an intention recognition method provided by an embodiment of the present application is applicable.
  • the man-machine dialogue system 100 provided in this embodiment includes a man-machine dialogue terminal 110 and a man-machine dialogue server 120.
  • The human-machine dialogue terminal 110 includes, but is not limited to, mobile phones, tablet computers, smart TVs, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), and personal digital assistants (PDA).
  • The embodiment of the present application does not impose any restriction on the specific type of the human-machine dialogue terminal 110.
  • The man-machine dialogue terminal 110 in the man-machine dialogue system 100 can establish a wireless or wired communication connection with the man-machine dialogue server 120, thereby realizing wireless or wired communication between the man-machine dialogue terminal 110 and the man-machine dialogue server 120.
  • The man-machine dialogue terminal 110 may collect voice data from the user through its voice collection module.
  • The man-machine dialogue terminal 110 may convert the voice data from the user into corresponding original sentence information and then send the original sentence information to the man-machine dialogue server 120 through wireless or wired communication; alternatively, the man-machine dialogue terminal 110 may directly send the voice data from the user to the man-machine dialogue server 120 through wireless or wired communication, and the man-machine dialogue server 120 converts the voice data from the user into corresponding original sentence information.
  • The human-machine dialogue server 120 recognizes the target intention category of the original sentence information and feeds back the target intention category to the human-machine dialogue terminal 110 through wireless or wired communication.
  • FIG. 2 is a schematic flowchart of an intention recognition method provided by an embodiment of the present application.
  • The execution subject of the process is a server, and the server may specifically be a man-machine dialogue server in a man-machine dialogue system.
  • The intention recognition method provided by this embodiment includes S21 to S24, which are described in detail as follows:
  • The user's original sentence information refers to the text information obtained by converting the voice data from the user in the process of man-machine dialogue.
  • The man-machine dialogue terminal in the man-machine dialogue system can collect the user's voice data through its voice collection module.
  • The human-machine dialogue terminal can perform voice-to-text processing on the collected user's voice data to obtain the original sentence information corresponding to the user's voice data, and send the original sentence information corresponding to the user's voice data to the man-machine dialogue server in the man-machine dialogue system; the man-machine dialogue server then receives the user's original sentence information sent by the man-machine dialogue terminal.
  • Alternatively, the human-machine dialogue terminal can directly send the collected voice data of the user to the human-machine dialogue server, and the human-machine dialogue server performs voice-to-text processing on the user's voice data to obtain the original sentence information corresponding to the user's voice data.
  • In a specific implementation, the human-machine dialogue terminal or the man-machine dialogue server may convert the voice data from the user into corresponding original sentence information based on automatic speech recognition (ASR) technology.
  • The preset shared named entity analysis engine is pre-configured with an analysis algorithm for analyzing whether sentence information contains only shared named entities; that is, the preset shared named entity analysis engine can analyze whether the sentence information contains only shared named entities.
  • A named entity refers to an object identified by a name; it can be an object represented by any noun. Named entities can be divided into different categories such as person names, place names, organization names, and song names.
  • Each named entity category usually includes multiple named entities of the same category; for example, the named entity category "place names" can include multiple named entities belonging to place names, such as Beijing, Shanghai, and Guangzhou.
  • Shared named entities refer to named entities that can be included in, and shared by, at least two types of intents. Exemplarily, because a taxi intent and a navigation intent usually both need to know the origin and/or destination, and origins and destinations belong to the "place name" category of named entities, named entities of the "place name" category are usually included in both the taxi intent and the navigation intent; therefore, named entities of the "place name" category are shared named entities.
  • In this embodiment, the man-machine dialogue server inputs the user's original sentence information into the preset shared named entity analysis engine to analyze, through the shared named entity analysis engine, whether the original sentence information contains only shared named entities, and then obtains the analysis result output by the shared named entity analysis engine.
  • The shared named entity analysis engine can analyze whether the original sentence information contains only shared named entities through S221 to S224 as shown in FIG. 3, as detailed below:
  • Before the man-machine dialogue server analyzes, through the shared named entity analysis engine, whether the original sentence information contains only the shared named entity, it needs to first identify the named entities contained in the original sentence information.
  • Specifically, the human-machine dialogue server can perform a named entity recognition (NER) operation on the original sentence information based on a preset named entity recognition tool.
  • The preset named entity recognition tool can identify all the named entities contained in the original sentence information and obtain the information of each named entity. It is understandable that the original sentence information may contain one named entity or at least two named entities, which is determined according to the actual situation and is not limited here.
  • the information of the named entity may include, but is not limited to, the category of the named entity to which the named entity belongs and the start position and end position of the named entity in the original sentence information.
  • The start position refers to the position of the first character of the named entity in the original sentence information, and the end position refers to the position of the last character of the named entity in the original sentence information; both positions are expressed relative to the original sentence information.
  • The position of a character can be identified by the order of the character in the original sentence information. Exemplarily, assuming that the original sentence information is "Take a taxi to Beijing Botanical Garden" (in Chinese, "打车去北京植物园", eight characters), the position of each character from left to right in the original sentence information can be identified by 0, 1, 2, 3, 4, 5, 6, and 7, respectively. If the preset named entity recognition tool is used to perform named entity recognition on this original sentence information, it can recognize that the original sentence information contains the named entities "Beijing", "Botanical Garden", and "Beijing Botanical Garden".
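  • The character-position convention above can be illustrated with a short Python sketch. The entity list below is an illustrative assumption (not the output of any particular named entity recognition tool): each recognized named entity carries its category together with 0-indexed, inclusive start and end character positions in the sentence.

```python
# The original sentence information "打车去北京植物园"
# ("Take a taxi to Beijing Botanical Garden"): eight characters, positions 0-7.
sentence = "打车去北京植物园"

# Hypothetical NER output: category plus start/end character positions
# (0-indexed, inclusive) of each recognized named entity.
entities = [
    {"text": "北京", "category": "place name", "start": 3, "end": 4},
    {"text": "植物园", "category": "place name", "start": 5, "end": 7},
    {"text": "北京植物园", "category": "place name", "start": 3, "end": 7},
]

# The span recovered from the positions must match the entity text.
for e in entities:
    assert sentence[e["start"]:e["end"] + 1] == e["text"]
```

Note that the end position of "北京植物园" (7) coincides with the end position of the sentence, which is exactly the condition that the analysis in S224 tests for.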
  • S222 Identify the shared named entity in the named entity according to the preset list of shared named entity categories, and determine the shared named entity category to which the shared named entity belongs.
  • In this embodiment, the human-machine dialogue server can identify the shared named entities among the named entities according to a preset shared named entity category list. The preset shared named entity category list is used to store the pre-configured shared named entity categories and the intent categories corresponding to each shared named entity category. In a specific application, the preset shared named entity category list may be obtained according to a preset named entity category configuration file.
  • In a specific application, the human-machine dialogue system can be configured with corresponding intent categories according to the functions that the human-machine dialogue system can realize, where different functions correspond to different intent categories. Exemplarily, assuming that the human-machine dialogue system can realize functions such as navigation or taxi-hailing, the user may express a taxi-hailing or navigation intention when communicating with the human-machine dialogue system; therefore, a taxi intent, a navigation intent, and so on can be configured for the human-machine dialogue system.
  • The man-machine dialogue server can store the named entity categories configured for each intent category in a preset named entity category configuration file; that is, the named entity category configuration file is used to store the named entity categories configured for each pre-configured intent category. For an example, please refer to Table 1.
  • Table 1 shows part of the content stored in the named entity category configuration file, where named entity category 2 is configured in both intent A and intent B; therefore, named entity category 2 is a shared named entity category.
  • After the man-machine dialogue server obtains the pre-configured named entity category configuration file, it can perform shared named entity detection on the configuration file, that is, detect whether at least one named entity category in the configuration file is configured in at least two intent categories. If it is detected that at least one named entity category is configured in at least two intent categories, it is determined that the at least one named entity category is a shared named entity category. Exemplarily, named entity category 2 in Table 1 is configured in both intent A and intent B; therefore, named entity category 2 in Table 1 is a shared named entity category.
  • The human-machine dialogue server can associate each detected shared named entity category with its corresponding at least two intent categories and store them in a preset shared named entity category list; that is, the shared named entity category list is used to store each shared named entity category and the intent categories corresponding to it. Exemplarily, please refer to Table 2.
  • Table 2 shows part of the content stored in the shared named entity category list, where the intent categories corresponding to shared named entity category 2 include intent A and intent B.
  • The man-machine dialogue server can store the preset shared named entity category list in its memory.
  • Specifically, when the man-machine dialogue server recognizes the shared named entities among the named entities contained in the original sentence information, it can obtain the preset shared named entity category list from its memory, identify the shared named entities among the named entities according to the shared named entity categories contained in the list, and determine the shared named entity category to which each shared named entity belongs. Specifically, if a first named entity contained in the original sentence information belongs to a first shared named entity category in the shared named entity category list, the first named entity is identified as a shared named entity, and the shared named entity category to which it belongs is determined to be the first shared named entity category.
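  • The construction of the shared named entity category list from the configuration file can be sketched in Python as follows. The intent names and category names mirror Table 1 and Table 2; the dictionary layout is an assumption about one possible representation, not the patent's implementation:

```python
# Hypothetical named entity category configuration file contents (cf. Table 1):
# each intent category maps to the named entity categories configured for it.
intent_config = {
    "intent A": ["named entity category 1", "named entity category 2"],
    "intent B": ["named entity category 2", "named entity category 3"],
}

# Invert the mapping: collect, for every named entity category, the intent
# categories in which it is configured.
category_to_intents = {}
for intent, categories in intent_config.items():
    for category in categories:
        category_to_intents.setdefault(category, []).append(intent)

# A category configured in at least two intent categories is a shared named
# entity category; keep it together with its corresponding intents (cf. Table 2).
shared_category_list = {
    category: intents
    for category, intents in category_to_intents.items()
    if len(intents) >= 2
}

print(shared_category_list)
# {'named entity category 2': ['intent A', 'intent B']}
```

The resulting dictionary plays the role of the preset shared named entity category list: looking up an entity's category in it both identifies the entity as shared and yields the intent categories to offer the user.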
  • S223 Determine the start position and the end position of the shared named entity in the original sentence information.
  • S224 According to the start position and the end position of each of the shared named entities, analyze whether the original sentence information only includes the shared named entity, and obtain the analysis result.
  • After the man-machine dialogue server determines the start position and end position of each shared named entity contained in the original sentence information, it can detect, based on these start and end positions, whether the original sentence information contains only shared named entities.
  • S224 may be specifically implemented through S2241 to S2244 as shown in FIG. 4, which are described in detail as follows:
  • When the man-machine dialogue server detects, based on the start position and end position of each shared named entity, whether the original sentence information contains only shared named entities, it can first detect whether, among the shared named entities contained in the original sentence information, there is a shared named entity whose start position is the first position of the original sentence information.
  • The first position of the original sentence information refers to the position of the first character in the original sentence information, and the end position of the original sentence information refers to the position of the last character in the original sentence information.
  • Exemplarily, the first position of the original sentence information "Take a taxi to Beijing Botanical Garden" is the position of its first character (in the Chinese sentence "打车去北京植物园", the character "打"), that is, the identification of the first position is 0; the end position is the position of the last character ("园"), that is, the identification of the end position is 7.
  • If the man-machine dialogue server detects that, among the shared named entities contained in the original sentence information, there are shared named entities whose start position is the first position of the original sentence information, it determines all such shared named entities as candidate shared named entities and detects whether the end position of each candidate shared named entity is the end position of the original sentence information.
  • Exemplarily, assuming that the original sentence information is "Beijing Botanical Garden", the start positions of the shared named entities "Beijing" and "Beijing Botanical Garden" contained in the original sentence information are both the first position of the original sentence information, so the shared named entities "Beijing" and "Beijing Botanical Garden" are both identified as candidate shared named entities. Then, the man-machine dialogue server separately detects whether the end positions of "Beijing" and "Beijing Botanical Garden" in the original sentence information are the end position of the original sentence information; the end position of "Beijing" is not the end position of the original sentence information, while the end position of "Beijing Botanical Garden" is.
  • If the man-machine dialogue server detects that the end position of a candidate shared named entity is the end position of the original sentence information, S2242 is executed; if it detects that the end positions of all candidate shared named entities are not the end position of the original sentence information, S2243 to S2244 are executed. S2242 and S2243 to S2244 are parallel branches: when the man-machine dialogue server executes S2242, it does not execute S2243 to S2244, and when it executes S2243 to S2244, it does not execute S2242.
  • When the man-machine dialogue server detects that the end position of a candidate shared named entity is the end position of the original sentence information, then because the start position of that candidate shared named entity is the first position of the original sentence information, all the characters in the original sentence information constitute the candidate shared named entity; this means the original sentence information contains only the shared named entity and no other information. At this time, the man-machine dialogue server determines that the original sentence information contains only shared named entities.
  • When the human-machine dialogue server detects that the end positions of all candidate shared named entities are not the end position of the original sentence information, it indicates that no candidate shared named entity runs from the first position of the original sentence information to its end position.
  • At this time, the man-machine dialogue server detects whether the original sentence information contains a shared named entity located after, and adjacent to, a candidate shared named entity; that is, it detects whether there is a shared named entity whose start position is the position after the end position of any candidate shared named entity.
  • If the man-machine dialogue server detects that there is at least one shared named entity whose start position is the position after the end position of any candidate shared named entity, it determines the at least one shared named entity as a new candidate shared named entity.
  • Then, the human-machine dialogue server detects whether the end position of each new candidate shared named entity is the end position of the original sentence information.
  • If the human-machine dialogue server detects that the end position of at least one of the new candidate shared named entities is the end position of the original sentence information, it means that the original sentence information is composed only of the new candidate shared named entity and the candidate shared named entity adjacent to and located before it, which means that the original sentence information contains only shared named entities.
  • Exemplarily, assuming that the original sentence information is "Botanical Garden Zoo", the start position of the shared named entity "Zoo" is the position after the end position of the candidate shared named entity "Botanical Garden", so the shared named entity "Zoo" is determined as a new candidate shared named entity. Since the end position of the new candidate shared named entity "Zoo" is the end position of the original sentence information, it is determined that the original sentence information "Botanical Garden Zoo" contains only shared named entities.
  • If the human-machine dialogue server detects that the end positions of all new candidate shared named entities are not the end position of the original sentence information, it continues in a loop: it detects whether there is a shared named entity whose start position is the position after the end position of any candidate shared named entity; if so, it determines that shared named entity as a new candidate shared named entity and detects whether the end position of each new candidate shared named entity is the end position of the original sentence information. This repeats until all shared named entities in the original sentence information have been traversed. If, after traversing all shared named entities in the original sentence information, no candidate shared named entity ends at the end position of the original sentence information, the man-machine dialogue server executes S2244.
  • After the man-machine dialogue server has traversed all the shared named entities in the original sentence information, if it detects that no candidate shared named entity's end position is the end position of the original sentence information, it means that the original sentence information contains other information in addition to the shared named entities. At this time, the man-machine dialogue server determines that the original sentence information does not contain only shared named entities.
  • Exemplarily, assuming that the original sentence information is "How to get to Beijing Botanical Garden", the shared named entity "Beijing" can be determined as a candidate shared named entity according to S2241, and the shared named entity "Botanical Garden" can be determined as a new candidate shared named entity according to S2243. However, the end position of the new candidate shared named entity "Botanical Garden" is not the end position of the original sentence information. Since all the shared named entities in the original sentence information have been traversed at this point and no candidate shared named entity's end position is the end position of the original sentence information, it is determined that the original sentence information "How to get to Beijing Botanical Garden" does not contain only shared named entities.
• If the human-machine dialogue server detects that the original sentence information contains no shared named entity whose starting position is the position following the end position of any candidate shared named entity, this indicates that no shared named entity in the original sentence information is adjacent to any candidate shared named entity, which means that there is other information between at least two shared named entities in the original sentence information. In this case, the man-machine dialogue server determines that the original sentence information does not only contain shared named entities. Exemplarily, suppose the original sentence information is "how to get from the botanical garden to the zoo": the position following the end position of the candidate shared named entity "botanical garden" is the position of "to", whereas the starting position of the shared named entity "zoo" is a later position, so the two shared named entities are not adjacent.
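The adjacency traversal of S2241 to S2244 described above can be sketched in Python. This is a hypothetical illustration: the function name, the half-open `(start, end)` character-offset representation of entity spans, and the helper variables are assumptions for the sketch, not part of the patent.

```python
def contains_only_shared_entities(sentence, entity_spans):
    """True if the sentence is covered by a chain of adjacent shared named entities.

    entity_spans: iterable of (start, end) half-open character offsets.
    """
    n = len(sentence)
    # S2241: candidate entities are those starting at the first position.
    frontier = {end for start, end in entity_spans if start == 0}
    while frontier:
        # S2242: a candidate ending at the sentence end means "only shared entities".
        if n in frontier:
            return True
        # S2243: extend each candidate with an entity starting where it ended.
        extended = {end for start, end in entity_spans
                    if start in frontier and end > start}
        if extended <= frontier:
            break  # no further adjacent entity: traversal is finished
        frontier = extended
    # S2244: no chain of adjacent entities reaches the end of the sentence.
    return False
```

For instance, with spans covering the whole sentence end to end the function returns `True`, while a gap between two spans (the "to" in "how to get from the botanical garden to the zoo") makes it return `False`.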
  • S224 can also be specifically implemented through S2245 to S2240 as shown in FIG. 5, which is described in detail as follows:
  • S2245 Define a flag bit array with the same length as the length of the original sentence information, and set the value of each flag bit in the flag bit array to a first preset value.
• When the human-machine dialogue server detects whether the original sentence information contains only shared named entities, it can first define a flag bit array with the same length as the original sentence information, where each flag bit in the array corresponds to the position of one character in the original sentence information.
  • a flag bit array with a length of 5 can be defined.
• The first flag bit in the flag bit array corresponds to the position of the first character ("Bei") in the original sentence information "Beijing Botanical Garden", the second flag bit corresponds to the position of the second character ("Jing"), and so on.
  • the man-machine dialogue server can first set the value of each flag bit in the flag bit array to the first preset value.
• The first preset value may be either of the two Boolean logic values, that is, 0 or 1. It should be noted that this embodiment also involves a second preset value. The second preset value is likewise a Boolean logic value, but it is different from the first preset value: when the first preset value is 0, the second preset value is 1, and when the first preset value is 1, the second preset value is 0.
• In addition, the human-machine dialogue server detects whether the original sentence information contains a shared named entity whose starting position is the first position of the original sentence information. If the man-machine dialogue server detects that the original sentence information contains at least one such shared named entity, S2246 to S2240 are executed.
• S2246 Determine the shared named entity whose starting position is the first position of the original sentence information as the first target shared named entity, and update the value of the flag bit corresponding to the end position of the first target shared named entity to the second preset value.
• When the man-machine dialogue server detects that the original sentence information contains at least one shared named entity whose starting position is the first position of the original sentence information, it determines every shared named entity whose starting position is the first position of the original sentence information as a first target shared named entity, and updates the values of the flag bits corresponding to the end positions of all the first target shared named entities to the second preset value.
• Exemplarily, the shared named entities "Beijing" and "Beijing Botanical Garden" are determined as first target shared named entities; the value of the flag bit corresponding to the end position of "Beijing" (that is, the position of the character "Jing") is updated to the second preset value, and the value of the flag bit corresponding to the end position of "Beijing Botanical Garden" (that is, the position of its last character) is updated to the second preset value.
• S2247 Determine the shared named entity whose starting position is not the first position of the original sentence information as the second target shared named entity, and detect whether the value of the flag bit corresponding to the position preceding the starting position of each second target shared named entity is the second preset value.
• In addition, the human-machine dialogue server determines each shared named entity whose starting position is not the first position of the original sentence information as a second target shared named entity. Exemplarily, the shared named entity "Botanical Garden" is determined as a second target shared named entity.
• After determining the second target shared named entities, the man-machine dialogue server detects whether the value of the flag bit corresponding to the position preceding the starting position of each second target shared named entity is the second preset value. If the human-machine dialogue server detects that this value is the second preset value, it indicates that the position preceding the starting position of that second target shared named entity is the end position of a first target shared named entity, which means the second target shared named entity is adjacent to a first target shared named entity in the original sentence information. In this case, the value of the flag bit corresponding to the end position of the second target shared named entity is updated to the second preset value.
• After the man-machine dialogue server has traversed all the second target shared named entities, it detects whether the updated value of the flag bit corresponding to the end position of the original sentence information is the second preset value. If the updated value is the second preset value, S2249 is executed; if the updated value is the first preset value, S2240 is executed.
• If the human-machine dialogue server detects that the value of the flag bit corresponding to the position preceding the starting position of a second target shared named entity is the first preset value, this indicates that the second target shared named entity is not adjacent to any first target shared named entity in the original sentence information. In this case, the man-machine dialogue server does not update the value of the flag bit corresponding to the end position of that second target shared named entity.
• After the human-machine dialogue server has traversed all the shared named entities, if it detects that the updated value of the flag bit corresponding to the end position of the original sentence information is the second preset value, this indicates that the original sentence information, from its first position to its end position, is composed of shared named entities that are adjacent end to end; that is, the original sentence information contains no information other than shared named entities. In this case, the man-machine dialogue server determines that the original sentence information includes only shared named entities.
• After the human-machine dialogue server has traversed all the shared named entities, if it detects that the updated value of the flag bit corresponding to the end position of the original sentence information is the first preset value, this indicates that the original sentence information, from its first position to its end position, is not composed of shared named entities adjacent end to end; that is, in addition to shared named entities, the original sentence information also contains other information. In this case, the man-machine dialogue server determines that the original sentence information does not include only shared named entities.
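The flag-bit variant described above can be sketched as follows. A Python list of booleans stands in for the flag bit array (`False` as the first preset value, `True` as the second); the function name and span representation are illustrative assumptions, not part of the patent.

```python
def contains_only_shared_entities_flags(sentence, entity_spans):
    """Flag-bit check: flags[i] is True when a chain of adjacent shared
    named entities ends at character position i + 1.

    entity_spans: iterable of (start, end) half-open character offsets.
    """
    n = len(sentence)
    flags = [False] * n            # S2245: every flag at the first preset value
    # S2246: first-target entities start at the first position of the sentence.
    for start, end in entity_spans:
        if start == 0:
            flags[end - 1] = True  # second preset value at the entity's end
    # S2247/S2248: second-target entities (processed left to right) extend a
    # chain only when the flag just before their starting position is set.
    for start, end in sorted(span for span in entity_spans if span[0] != 0):
        if flags[start - 1]:
            flags[end - 1] = True
    # S2249/S2240: the flag at the sentence's end position decides the result.
    return flags[n - 1]
```

Processing the second-target entities in order of starting position ensures that a chain already confirmed on the left can propagate rightward in a single pass.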
  • S224 may further include the following steps:
• When the human-machine dialogue server detects that none of the shared named entities contained in the original sentence information has a starting position that is the first position of the original sentence information, this indicates that the first character of the original sentence information is not included in any shared named entity, which means the original sentence information contains other information besides the shared named entities. In this case, the man-machine dialogue server determines that the original sentence information does not only include shared named entities.
• When each human-machine dialogue terminal converts the voice data from the user during each round of human-machine dialogue into corresponding original sentence information, it also records the dialogue round corresponding to that original sentence information. The dialogue rounds include the first round of dialogue and non-first rounds of dialogue; that is, all rounds of dialogue other than the first round are non-first rounds.
• When the analysis result output by the shared named entity analysis engine indicates that the original sentence information contains only shared named entities, the man-machine dialogue server further detects whether the target dialogue round corresponding to the original sentence information is the first round of dialogue. If the human-machine dialogue server detects that the target dialogue round corresponding to the original sentence information is the first round of dialogue, S24 is performed.
• S24 If the target dialogue round is the first round of dialogue, output the intent categories corresponding to the shared named entity categories to which the shared named entities belong, and determine the target intent category selected by the user among these intent categories.
• When the man-machine dialogue server detects that the original sentence information contains only shared named entities, and the target dialogue round corresponding to the original sentence information is the first round of dialogue, it can obtain, from the shared named entity category list, the intent categories corresponding to the shared named entity category to which each shared named entity contained in the original sentence information belongs.
• After the human-machine dialogue server obtains the intent categories corresponding to the shared named entity categories to which the shared named entities contained in the original sentence information belong, it outputs these intent categories so that the user can select, from among them, the target intent category the user wants to express.
• Specifically, the man-machine dialogue server may send the intent categories corresponding to the shared named entity categories to which the shared named entities contained in the original sentence information belong to the man-machine dialogue terminal, and the man-machine dialogue terminal may generate and output them for the user to choose from.
• The man-machine dialogue terminal then sends the target intent category selected by the user among these intent categories to the man-machine dialogue server, so that the man-machine dialogue server obtains the target intent category selected by the user.
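The category-to-intent lookup behind S24 can be sketched as a simple mapping. The mapping below is an illustrative stand-in for the patent's shared named entity category list; the category and intent names are invented for the example.

```python
# Hypothetical shared named entity category list: category -> intent categories.
SHARED_CATEGORY_TO_INTENTS = {
    "place": ["navigation", "taxi_hailing", "weather_query"],
    "song":  ["music_playback"],
}

def candidate_intents(entity_categories):
    """Collect, without duplicates and in a stable order, the intent
    categories for the shared named entity categories that were recognized."""
    intents = []
    for category in entity_categories:
        for intent in SHARED_CATEGORY_TO_INTENTS.get(category, []):
            if intent not in intents:
                intents.append(intent)
    return intents
```

For a first-round utterance like "Beijing Botanical Garden" (category "place"), the terminal would present the resulting intent categories and send the user's choice back to the server as the target intent category.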
• After the human-machine dialogue server determines the target intent category, it can further obtain the slot information corresponding to the original sentence information, and then determine a clear user instruction according to the target intent category, the original sentence information, and the slot information corresponding to the original sentence information.
• The slot information refers to the type of necessary information, under the target intent category, to which the shared named entity contained in the original sentence information belongs. Exemplarily, suppose the target intent category is "hailing a taxi"; the intent category "hailing a taxi" usually needs to include two types of necessary information: "departure" and "destination".
• Based on the target intent category and the slot information, the man-machine dialogue server determines a clear user instruction, which may be "Take a taxi to Beijing Botanical Garden".
• In some embodiments, the human-machine dialogue terminal in the human-machine dialogue system can obtain, by asking the user, the type of necessary information to which the shared named entity contained in the original sentence information belongs under the target intent category, thereby obtaining the slot information corresponding to the original sentence information, and the man-machine dialogue terminal can send this slot information to the man-machine dialogue server.
• In other embodiments, when the man-machine dialogue terminal in the man-machine dialogue system collects the voice data corresponding to the original sentence information, it also obtains the geographic location information of its current location and sends this geographic location information to the man-machine dialogue server. The man-machine dialogue server can then determine the slot information corresponding to the original sentence information based on the geographic location information of the terminal's current location and the geographic location information corresponding to the shared named entity contained in the original sentence information.
• Specifically, if the geographic location information of the current location of the human-machine dialogue terminal matches the geographic location information corresponding to the shared named entity contained in the original sentence information, the slot information corresponding to the original sentence information is determined as the departure place; if the two do not match, the slot information corresponding to the original sentence information is determined as the destination. Here, "matching" specifically means that the position deviation between the current geographic location of the human-machine dialogue terminal and the geographic location corresponding to the shared named entity contained in the original sentence information is within a preset range, and "not matching" means that this position deviation is not within the preset range.
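The matching rule above (position deviation within a preset range) can be sketched with a great-circle distance. The haversine formula, the 1 km threshold, and the coordinate handling are assumptions for illustration only; the patent does not fix a specific deviation metric or range.

```python
import math

PRESET_RANGE_KM = 1.0  # hypothetical preset range for the position deviation

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def slot_for_entity(terminal_pos, entity_pos):
    """Label the shared named entity's slot as departure or destination."""
    if haversine_km(terminal_pos, entity_pos) <= PRESET_RANGE_KM:
        return "departure"    # the user is already at (or near) the named place
    return "destination"      # the named place is elsewhere: treat it as the goal
```

With this rule, a user standing near the place they named would have it filled into the "departure" slot, and otherwise into the "destination" slot.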
• It can be seen that, after obtaining the user's original sentence information, this intention recognition method does not directly input it into a traditional intention recognition model to determine the intent category expressed by the user. Instead, when the original sentence information contains only shared named entities and the target dialogue round corresponding to the original sentence information is the first round of dialogue, the method outputs the intent categories corresponding to the shared named entity categories to which the shared named entities belong, so that the user can select the expressed target intent category from among them. Since the target intent category is obtained through further confirmation by the user, the error rate of intention recognition can be reduced and the accuracy of intention recognition can be improved.
  • FIG. 6 is a schematic flowchart of an intention recognition method according to another embodiment of the present application.
  • an intention recognition method provided in this embodiment may further include S25 to S26 after S23, which is described in detail as follows:
  • S26 Determine the target intention category corresponding to the original sentence information according to the historical original sentence information.
• When the man-machine dialogue server detects that the original sentence information contains only shared named entities but the target dialogue round corresponding to the original sentence information is not the first round of dialogue, it obtains the user's historical original sentence information in the historical dialogue rounds before the target dialogue round, and determines the target intent category expressed by the original sentence information based on this historical original sentence information.
• After the human-machine dialogue server determines the target intent category, it can further obtain the slot information corresponding to the original sentence information, and then determine a clear user instruction according to the target intent category, the original sentence information, and the slot information corresponding to the original sentence information. It should be noted that, for the specific way in which the man-machine dialogue server determines the clear user instruction according to the target intent category, the original sentence information, and the slot information corresponding to the original sentence information, reference may be made to the relevant description in S24, which is not repeated here.
• Exemplarily, suppose the original sentence information in the first round of human-machine dialogue is "I want to take a taxi" and the original sentence information in the second round of human-machine dialogue is "Beijing Botanical Garden". If the slot information of the original sentence information "Beijing Botanical Garden", obtained by asking the user, is "destination", the clear user instruction is "Take a taxi to Beijing Botanical Garden".
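The history lookup of S25 to S26 can be sketched as a walk backward through stored rounds. The data structures and names below are illustrative assumptions; the patent does not specify how dialogue history is stored.

```python
def intent_from_history(history):
    """Resolve a non-first round's intent from earlier rounds.

    history: list of (original_sentence, intent_or_None) tuples, oldest first.
    Returns the intent of the most recent round that expressed one, else None.
    """
    for _sentence, intent in reversed(history):
        if intent is not None:
            return intent  # reuse the most recent recognizable intent
    return None            # no usable history: fall back to the intent model
```

With the example above, history `[("I want to take a taxi", "taxi_hailing")]` lets the second-round sentence "Beijing Botanical Garden" resolve to "taxi_hailing"; combined with the slot information "destination", the server can form the instruction "Take a taxi to Beijing Botanical Garden".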
• It can be seen that, in this intention recognition method, when the original sentence information contains only shared named entities but the target dialogue round corresponding to the original sentence information is not the first round of dialogue, the historical original sentence information in the historical dialogue rounds before the target dialogue round may contain the necessary information that expresses the user's intention, such as the target intent category expressed by the original sentence information. The target intent category expressed by the original sentence information is therefore determined directly from the historical original sentence information of the historical dialogue rounds before the target round, without needing to determine the user's target intent category through a user intention recognition model, thereby improving the efficiency of user intention recognition.
• In other embodiments, when the human-machine dialogue server detects that the original sentence information does not only include shared named entities, it can perform the following steps:
• If the original sentence information does not only include shared named entities, the original sentence information is input into a preset intention recognition model to obtain the target intent category expressed by the original sentence information.
• Specifically, the man-machine dialogue server can directly input the original sentence information into the preset intention recognition model to obtain the target intent category expressed by the original sentence information.
• It should be noted that the user intent recognition model in this embodiment may be an intent recognition model based on neural networks, an intent recognition model based on statistics, or an intent recognition model of another type, which may be set according to actual requirements.
• After the user intention recognition model receives the feature vector corresponding to the original sentence information at its input terminal, it can output the target intent category expressed by the original sentence information.
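The patent leaves the model's internals open (neural-network-based, statistical, or other). A minimal keyword-scoring stand-in, purely illustrative and not the patent's model, shows the interface: sentence-derived input in, a single target intent category out.

```python
# Hypothetical keyword table standing in for a trained intent recognition model.
INTENT_KEYWORDS = {
    "taxi_hailing": ["taxi", "ride"],
    "navigation":   ["how to get", "route"],
}

def recognize_intent(sentence):
    """Return the intent category whose keywords best match the sentence."""
    text = sentence.lower()
    scores = {intent: sum(kw in text for kw in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

A real deployment would replace this table with a model trained on labeled utterances; the surrounding server logic (feed in the sentence, receive one target intent category) stays the same.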
• After that, the human-machine dialogue server can obtain the slot information corresponding to the target intent category based on the necessary information that the target intent category needs to include, and determine a clear user instruction based on the target intent category and the slot information corresponding to it.
• It can be seen that this intent recognition method uses the user intent recognition model to determine the target intent category expressed by the original sentence information when the original sentence information does not only include shared named entities, thereby improving the accuracy of user intent recognition.
  • FIG. 7 shows a structural block diagram of a server provided by an embodiment of the present application.
  • the server may specifically be a human-machine dialogue server in a human-machine dialogue system.
• The units included in the server are used to execute the steps in the foregoing embodiments.
• The server 70 includes a first acquiring unit 71, a second acquiring unit 72, a first detecting unit 73, and a first determining unit 74, wherein:
  • the first obtaining unit 71 is used to obtain the user's original sentence information.
  • the second obtaining unit 72 is configured to input the original sentence information into a preset shared named entity analysis engine to obtain an analysis result output by the shared named entity analysis engine.
  • the first detection unit 73 is configured to detect whether the target dialogue round corresponding to the original sentence information is the first round of dialogue if the analysis result indicates that the original sentence information only includes a shared named entity.
• The first determining unit 74 is configured to, if the target dialogue round is the first round of dialogue, output the intent categories corresponding to the shared named entity categories to which the shared named entities belong, and determine the target intent category selected by the user among the intent categories.
• The second acquisition unit 72 specifically includes a named entity recognition unit, a shared named entity recognition unit, a position determining unit, and an analysis unit, wherein:
  • the named entity recognition unit is used to recognize the named entity contained in the original sentence information.
• The shared named entity identification unit is used to identify the shared named entity among the named entities according to a preset shared named entity category list, and to determine the shared named entity category to which the shared named entity belongs.
  • the position determining unit is used to determine the start position and the end position of the shared named entity in the original sentence information.
  • the analysis unit is used to analyze whether the original sentence information contains only the shared named entity according to the start position and the end position of each of the shared named entities, and obtain the analysis result.
• The analysis unit specifically includes a second determining unit and a first determining unit, wherein:
  • the second determining unit is configured to determine the shared named entity whose starting position is the first position of the original sentence information as a candidate shared named entity.
  • the first determining unit is configured to determine that the original sentence information only includes the shared named entity if the end position of one of the candidate shared named entities is the end position of the original sentence information.
• The analysis unit may specifically further include a third determining unit and a second determining unit, wherein:
• The third determining unit is configured to, if the end positions of all the candidate shared named entities are not the end position of the original sentence information, cyclically determine the shared named entity whose starting position follows the end position of any one of the candidate shared named entities as a new candidate shared named entity.
• The second determining unit is used to determine, after all the shared named entities have been traversed, that the original sentence information does not contain only shared named entities if the end positions of all the new candidate shared named entities are not the end position of the original sentence information.
• The analysis unit specifically includes a first definition unit, a first update unit, a first detection unit, a second update unit, a third determining unit, and a fourth determining unit, wherein:
  • the first definition unit is used to define a flag bit array with the same length as the length of the original sentence information, and set the value of each flag bit in the flag bit array to a first preset value.
• The first update unit is configured to determine the shared named entity whose starting position is the first position of the original sentence information as the first target shared named entity, and to update the value of the flag bit corresponding to the end position of the first target shared named entity to the second preset value.
• The first detection unit is configured to determine the shared named entity whose starting position is not the first position of the original sentence information as the second target shared named entity, and to detect whether the value of the flag bit corresponding to the position preceding the starting position of each second target shared named entity is the second preset value.
• The second update unit is configured to, if the value of the flag bit corresponding to the position preceding the starting position of the second target shared named entity is the second preset value, update the value of the flag bit corresponding to the end position of the second target shared named entity to the second preset value.
• The third determining unit is configured to determine that the original sentence information contains only shared named entities if, after traversing all the shared named entities, the value of the flag bit corresponding to the end position of the original sentence information is the second preset value.
• The fourth determining unit is configured to determine that the original sentence information does not only contain shared named entities if, after traversing all the shared named entities, it is detected that the value of the flag bit corresponding to the end position of the original sentence information is the first preset value.
  • the analysis unit further includes a fifth determination unit.
• The fifth determining unit is used to determine that the original sentence information does not only include shared named entities if none of the shared named entities has a starting position that is the first position of the original sentence information.
• The server 70 further includes a third acquiring unit and a fourth determining unit, wherein:
• The third obtaining unit is configured to obtain the user's historical original sentence information in the historical dialogue rounds before the target dialogue round if the target dialogue round is not the first round of dialogue.
  • the fourth determining unit is configured to determine the target intention category corresponding to the original sentence information according to the historical original sentence information.
• It can be seen that the server provided by the embodiments of the present application does not directly input the original sentence information into a traditional intention recognition model to determine the intent category expressed by the user. Instead, the original sentence information is input into a preset shared named entity analysis engine, which analyzes whether the original sentence information contains only shared named entities. When the original sentence information contains only shared named entities and the target dialogue round corresponding to the original sentence information is the first round of dialogue, the server outputs the intent categories corresponding to the shared named entity categories to which the shared named entities belong, so that the user can select the expressed target intent category from among them. Since the target intent category is obtained through further confirmation by the user, the error rate of intention recognition can be reduced, and the accuracy of intention recognition can be improved.
  • FIG. 8 is a schematic structural diagram of a server provided by another embodiment of the present application.
• As shown in FIG. 8, the server 800 of this embodiment includes: at least one processor 80 (only one is shown in FIG. 8), a memory 81, and a computer program 82 that is stored in the memory 81 and can run on the at least one processor 80. When the processor 80 executes the computer program 82, the steps in any of the above-mentioned intention recognition method embodiments are implemented.
  • the server 800 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the server may include, but is not limited to, a processor 80 and a memory 81.
• FIG. 8 is only an example of the server 800 and does not constitute a limitation on the server 800. The server may include more or fewer components than shown, a combination of certain components, or different components; for example, it may also include input and output devices, network access devices, and so on.
• The so-called processor 80 may be a central processing unit (Central Processing Unit, CPU); the processor 80 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
• In some embodiments, the memory 81 may be an internal storage unit of the server 800, such as a hard disk or memory of the server 800. In other embodiments, the memory 81 may also be an external storage device of the server 800, for example, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card equipped on the server 800. Further, the memory 81 may also include both an internal storage unit of the server 800 and an external storage device.
  • the memory 81 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, such as the program code of the computer program. The memory 81 can also be used to temporarily store data that has been output or will be output.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the above-mentioned intention recognition method can be realized.
  • the embodiments of the present application also provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal, upon executing it, implements the steps in the above-mentioned intention recognition method.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the server, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a floppy disk, or a CD-ROM.
  • in certain cases, the computer-readable medium cannot be an electrical carrier signal or a telecommunication signal.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

Provided are an intention recognition method, a server, and a storage medium, the intention recognition method comprising the following steps: acquiring original sentence information of a user (S21); inputting the original sentence information into a preset shared named entity parsing engine to obtain a parsing result output by the shared named entity parsing engine (S22); if the parsing result indicates that the original sentence information contains only shared named entities, detecting whether a target dialogue round corresponding to the original sentence information is the first dialogue round (S23); and if the target dialogue round is the first dialogue round, outputting the intention categories corresponding to the shared named entity categories to which the shared named entities belong, and determining a target intention category selected by the user from among the intention categories (S24). The intention recognition method can reduce the error rate of intention recognition and improve its accuracy.
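The four steps of the abstract (S21-S24) can be sketched as a small control flow. This is a hypothetical illustration only: the application does not disclose source code, and every name below (`ParseResult`, `recognize_intention`, the toy parsing engine) is an assumption of this sketch, not part of the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class EntityCategory:
    name: str        # shared named entity category, e.g. "movie" or "song"
    intention: str   # intention category mapped from that entity category

@dataclass
class ParseResult:
    only_shared_entities: bool        # sentence contains shared named entities only
    categories: List[EntityCategory]

def recognize_intention(parse: Callable[[str], ParseResult],
                        sentence: str,                  # S21: user's original sentence
                        dialogue_round: int,
                        choose: Callable[[List[str]], str]) -> Optional[str]:
    result = parse(sentence)                            # S22: shared named entity parsing
    if result.only_shared_entities and dialogue_round == 1:   # S23: first-round check
        # S24: output the candidate intention categories and take the user's pick
        return choose([c.intention for c in result.categories])
    return None                                         # other cases handled elsewhere

# Toy parsing engine: "Titanic" alone is ambiguous between a movie and a song title.
def toy_parse(sentence: str) -> ParseResult:
    if sentence.strip() == "Titanic":
        return ParseResult(True, [EntityCategory("movie", "play video"),
                                  EntityCategory("song", "play music")])
    return ParseResult(False, [])
```

For an ambiguous first-round utterance such as "Titanic", `recognize_intention(toy_parse, "Titanic", 1, lambda cs: cs[0])` returns `"play video"`, i.e. whichever candidate intention category the user selects.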
PCT/CN2020/125213 2019-12-31 2020-10-30 Intention recognition method, server and storage medium WO2021135603A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911417222.2A CN111177358B (zh) 2019-12-31 2019-12-31 Intention recognition method, server and storage medium
CN201911417222.2 2019-12-31

Publications (1)

Publication Number Publication Date
WO2021135603A1 true WO2021135603A1 (fr) 2021-07-08

Family

ID=70623964

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125213 WO2021135603A1 (fr) 2019-12-31 2020-10-30 Intention recognition method, server and storage medium

Country Status (2)

Country Link
CN (1) CN111177358B (fr)
WO (1) WO2021135603A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177358B (zh) * 2019-12-31 2023-05-12 Huawei Technologies Co., Ltd. Intention recognition method, server and storage medium
CN111767372B (zh) * 2020-06-30 2023-08-01 Beijing Baidu Netcom Science and Technology Co., Ltd. Voice query parsing method, parsing model training method, apparatus and device
CN113609266A (zh) * 2021-07-09 2021-11-05 Alibaba Singapore Holding Private Limited Resource processing method and apparatus
CN117275471A (zh) * 2022-06-13 2023-12-22 Huawei Technologies Co., Ltd. Method for processing voice data and terminal device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063035A (zh) * 2018-07-16 2018-12-21 Harbin Institute of Technology Human-machine multi-round dialogue method for the travel field
US20190206407A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for personalizing dialogue based on user's appearances
CN110597958A (zh) * 2019-09-12 2019-12-20 AISpeech Co., Ltd. (Suzhou) Text classification model training and use method and apparatus
CN111177358A (zh) * 2019-12-31 2020-05-19 Huawei Technologies Co., Ltd. Intention recognition method, server and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377186B (zh) * 2012-04-26 2016-03-16 Fujitsu Limited Web service integration apparatus, method and device based on named entity recognition
US9424233B2 (en) * 2012-07-20 2016-08-23 Veveo, Inc. Method of and system for inferring user intent in search input in a conversational interaction system
US9971763B2 (en) * 2014-04-08 2018-05-15 Microsoft Technology Licensing, Llc Named entity recognition
KR20160027640 (ko) * 2014-09-02 2016-03-10 Samsung Electronics Co., Ltd. Electronic device and named entity recognition method in an electronic device
CN107193978A (zh) * 2017-05-26 2017-09-22 Wuhan Teddy Intelligence Technology Co., Ltd. Multi-round automatic chat dialogue method and system based on deep learning
CN109388795B (zh) * 2017-08-07 2022-11-08 Yutou Technology (Hangzhou) Co., Ltd. Named entity recognition method, language recognition method and system
CN108427722A (zh) * 2018-02-09 2018-08-21 WeLab Information Technology (Shenzhen) Co., Ltd. Intelligent interaction method, electronic apparatus and storage medium
CN110502738A (zh) * 2018-05-18 2019-11-26 Alibaba Group Holding Limited Chinese named entity recognition method, apparatus, device and query system
CN110619050B (zh) * 2018-06-20 2023-05-09 Huawei Technologies Co., Ltd. Intention recognition method and device
CN109461039A (zh) * 2018-08-28 2019-03-12 Xiamen Kuaishangtong Information Technology Co., Ltd. Text processing method and intelligent customer service method
CN109616108B (zh) * 2018-11-29 2022-05-31 Mobvoi Innovation Technology Co., Ltd. Multi-round dialogue interaction processing method and apparatus, electronic device and storage medium
CN110111787B (zh) * 2019-04-30 2021-07-09 Huawei Technologies Co., Ltd. Semantic parsing method and server
CN110287283B (zh) * 2019-05-22 2023-08-01 Ping An Property & Casualty Insurance Company of China, Ltd. Intention model training method, intention recognition method, apparatus, device and medium
CN110276075A (zh) * 2019-06-21 2019-09-24 Tencent Technology (Shenzhen) Co., Ltd. Model training method, named entity recognition method, apparatus, device and medium
CN110502740B (zh) * 2019-07-03 2022-05-17 Ping An Technology (Shenzhen) Co., Ltd. Question entity recognition and linking method, apparatus, computer device and storage medium
CN110309514B (zh) * 2019-07-09 2023-07-11 Beijing Kingsoft Digital Entertainment Technology Co., Ltd. Semantic recognition method and apparatus
CN110516247B (zh) * 2019-08-27 2021-11-16 Hubei ECARX Technology Co., Ltd. Named entity recognition method based on neural network and computer storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190206407A1 (en) * 2017-12-29 2019-07-04 DMAI, Inc. System and method for personalizing dialogue based on user's appearances
CN109063035A (zh) * 2018-07-16 2018-12-21 Harbin Institute of Technology Human-machine multi-round dialogue method for the travel field
CN110597958A (zh) * 2019-09-12 2019-12-20 AISpeech Co., Ltd. (Suzhou) Text classification model training and use method and apparatus
CN111177358A (zh) * 2019-12-31 2020-05-19 Huawei Technologies Co., Ltd. Intention recognition method, server and storage medium

Also Published As

Publication number Publication date
CN111177358B (zh) 2023-05-12
CN111177358A (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2021135603A1 (fr) Intention recognition method, server and storage medium
TWI729472B (zh) 特徵詞的確定方法、裝置和伺服器
CN110069608B (zh) 一种语音交互的方法、装置、设备和计算机存储介质
WO2020077841A1 (fr) Procédé de service d'abonné basé sur la reconnaissance d'empreinte vocale, dispositif, dispositif informatique et support de stockage
EP2869298A1 (fr) Appareil et procédé d'identification d'informations
US10565986B2 (en) Extracting domain-specific actions and entities in natural language commands
CN109360572B (zh) 通话分离方法、装置、计算机设备及存储介质
CN108959247B (zh) 一种数据处理方法、服务器及计算机可读介质
TW202119288A (zh) 圖像分類模型訓練方法、影像處理方法、資料分類模型訓練方法、資料處理方法、電腦設備、儲存媒介
WO2018107953A1 (fr) Terminal intelligent et son procédé de tri d'application automatique
CN108682421B (zh) 一种语音识别方法、终端设备及计算机可读存储介质
WO2021218087A1 (fr) Procédé et appareil de reconnaissance d'intention basés sur l'intelligence artificielle et dispositif informatique
WO2020103447A1 (fr) Procédé et appareil de stockage de type à liaison pour les informations vidéo, dispositif informatique et support d'enregistrement
WO2022042297A1 (fr) Procédé et appareil de regroupement de textes, dispositif électronique et support de stockage
CN109584881B (zh) 基于语音处理的号码识别方法、装置及终端设备
CN112988753A (zh) 一种数据搜索方法和装置
CN111354354B (zh) 一种基于语义识别的训练方法、训练装置及终端设备
WO2023272616A1 (fr) Procédé et système de compréhension de texte, dispositif terminal et support de stockage
WO2022178933A1 (fr) Procédé et appareil de détection de sentiment vocal basé sur un contexte, dispositif et support de stockage
CN111949793B (zh) 用户意图识别方法、装置及终端设备
WO2021072864A1 (fr) Procédé et appareil d'acquisition de similarité de textes, et dispositif électronique et support de stockage lisible par ordinateur
US20230017449A1 (en) Method and apparatus for processing natural language text, device and storage medium
US20230186613A1 (en) Sample Classification Method and Apparatus, Electronic Device and Storage Medium
CN112802495A (zh) 一种机器人语音测试方法、装置、存储介质及终端设备
CN112069267A (zh) 一种数据处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20910621

Country of ref document: EP

Kind code of ref document: A1