CN110263346B - Semantic analysis method based on small sample learning, electronic equipment and storage medium - Google Patents

Semantic analysis method based on small sample learning, electronic equipment and storage medium

Info

Publication number
CN110263346B
CN110263346B (application CN201910569780.4A)
Authority
CN
China
Prior art keywords
information
user
relationship
dialogue
question
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910569780.4A
Other languages
Chinese (zh)
Other versions
CN110263346A (en)
Inventor
龚泽熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuo Erzhi Lian Wuhan Research Institute Co Ltd
Original Assignee
Zhuo Erzhi Lian Wuhan Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuo Erzhi Lian Wuhan Research Institute Co Ltd filed Critical Zhuo Erzhi Lian Wuhan Research Institute Co Ltd
Priority to CN201910569780.4A priority Critical patent/CN110263346B/en
Publication of CN110263346A publication Critical patent/CN110263346A/en
Application granted granted Critical
Publication of CN110263346B publication Critical patent/CN110263346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a semantic analysis method based on small sample learning, an electronic device and a storage medium. The method comprises the following steps: acquiring dialogue information; establishing a dialogue model and analyzing the dialogue information according to the dialogue model to obtain intention information; judging whether the intention information comprises a preset keyword; and, when the intention information comprises the preset keyword, determining a question posed by the user from the intention information, searching a question-answer database for response information corresponding to the question, and playing the response information by voice. By establishing the dialogue model and analyzing the dialogue information according to it to obtain the intention information, the invention improves the accuracy of analyzing a speaker's intention.

Description

Semantic analysis method based on small sample learning, electronic equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a semantic analysis method based on small sample learning, electronic equipment and a storage medium.
Background
In existing approaches that analyze an interlocutor's intention through small-sample (few-shot) dialogue learning, the class representation is obtained simply by summing or averaging the sample vectors of the dialogues. Because natural language varies with each speaker's habits, dialogues expressing the same intention differ greatly from speaker to speaker, so the accuracy with which such methods classify a sample vector is low.
Disclosure of Invention
In view of the above, it is necessary to provide a semantic analysis method based on small sample learning, an electronic device and a computer-readable storage medium that improve the accuracy of analyzing an interlocutor's intention.
A first aspect of the present application provides a semantic analysis method based on small sample learning, where the method includes:
acquiring dialogue information;
constructing a dialogue model based on an Encoder-Induction-Relation three-level framework, training the dialogue model by using a small sample learning method, and analyzing the dialogue information according to the dialogue model to obtain intention information;
judging whether the intention information comprises preset keywords or not; and
when the intention information comprises the preset keyword, determining a question posed by a user from the intention information, and searching a question-answer database for response information corresponding to the question, wherein analyzing the dialogue information according to the dialogue model to obtain the intention information comprises:
the Encoder module in the Encoder-Induction-Relation three-level framework constructs a word vector matrix for each sentence from the acquired dialogue information, encodes the word vector matrix to obtain sentence-level semantics, and obtains the sample vectors in the support set of the dialogue model from the sentence-level semantics;
the Induction module in the Encoder-Induction-Relation three-level framework constructs a mapping from the sample vectors to class vectors to obtain the class vector of each category; and
the Relation module in the Encoder-Induction-Relation three-level framework models the interaction between each sample vector and class vector pair, and scores each pair with a fully connected layer so as to obtain the intention information.
Preferably, the Encoder module in the Encoder-Induction-Relation three-level framework is modeled with a bidirectional long short-term memory (BiLSTM) network, the Induction module in the Encoder-Induction-Relation three-level framework is modeled with a dynamic routing algorithm, and the Relation module in the Encoder-Induction-Relation three-level framework is modeled with a neural tensor network.
Preferably, the scoring the interaction between the sample vector and class vector pair using a fully connected layer to obtain the intention information comprises:
driving the score of a matched sample vector and class vector pair towards 1; and
driving the score of an unmatched sample vector and class vector pair towards 0.
Preferably, the method further comprises:
setting the preset keywords through a setting interface.
Preferably, the setting the preset keyword through the setting interface includes:
when it is detected that the user presses an edit button of the setting interface, receiving a preset keyword edited by the user and storing the preset keyword; and
deleting the preset keyword selected by the user when it is detected that the user presses a delete button of the setting interface.
Preferably, the method further comprises:
scoring the user according to the intention information and the questions posed by the user, and outputting a scoring result.
Preferably, the scoring the user according to the intention information and the questions posed by the user and outputting the scoring result comprises:
analyzing the preset scoring points contained in the intention information and the questions posed by the user; and
calculating the scoring result according to the formula Score = Σ_{i=1}^{N} wi × Pi, wherein Pi is the i-th preset scoring point, wi is the weight of the corresponding preset scoring point, and N is the number of the preset scoring points.
Preferably, the acquiring of the dialogue information comprises:
receiving a conversation start instruction generated when the user presses or touches a physical button on the terminal device or a virtual button displayed on a user interface, and acquiring the dialogue information according to the conversation start instruction.
A second aspect of the present application provides an electronic device comprising a processor configured to implement the method for semantic analysis based on small sample learning when executing a computer program stored in a memory.
A third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the small sample learning-based semantic analysis method.
By establishing the dialogue model and analyzing the dialogue information according to it to obtain the intention information, the invention improves the accuracy of analyzing a speaker's intention. When the intention information comprises the preset keyword, the invention determines the question posed by the user from the intention information, searches the question-answer database for the response information corresponding to that question, and plays the response information by voice, so that a company can better understand a job seeker's needs and judge whether the job seeker meets the requirements of the post.
Drawings
FIG. 1 is a flowchart illustrating a semantic analysis method based on small sample learning according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a setting interface according to an embodiment of the present invention.
Fig. 3 is a block diagram of a semantic analysis device based on small sample learning according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, and the described embodiments are merely some, but not all embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the semantic analysis method based on small sample learning is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be a desktop computer, a notebook computer, a tablet computer, a cloud server, or other computing device. The device can be in man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
Example 1
FIG. 1 is a flowchart illustrating a semantic analysis method based on small sample learning according to an embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different requirements.
Referring to fig. 1, the semantic analysis method based on small sample learning specifically includes the following steps:
step S11, dialogue information is acquired.
In this embodiment, a conversation start instruction generated when the user presses or touches a physical button on the terminal device or a virtual button displayed on a user interface is received, and the dialogue information is acquired according to the conversation start instruction. Specifically, after the conversation start instruction is received, the dialogue information is captured by a microphone device according to that instruction.
Step S12, a dialogue model is established, and the dialogue information is analyzed according to the dialogue model to obtain intention information.
In this embodiment, establishing the dialogue model comprises: constructing the dialogue model based on an Encoder-Induction-Relation three-level framework and training the dialogue model by using a small sample learning method. In this embodiment, the Encoder module in the Encoder-Induction-Relation three-level framework is modeled with a bidirectional long short-term memory (BiLSTM) network. In other embodiments, the Encoder module may instead be modeled with a convolutional neural network or a Transformer structure. In this embodiment, the Induction module in the Encoder-Induction-Relation three-level framework is modeled with a dynamic routing algorithm, and the Relation module is modeled with a neural tensor network. In this embodiment, when the dialogue model is trained with the small sample learning method, C classes are randomly drawn from the training set and K samples are drawn for each class, so that C × K samples in total form a meta-task, which is input as the support set of the dialogue model; a further batch of samples is then drawn from the C classes as the prediction (query) objects of the dialogue model. During training, each round of sampling yields a different meta-task, i.e. a different combination of categories; this mechanism makes the dialogue model learn what the meta-tasks have in common, such as how to extract important features and compare sample similarity, while ignoring the parts that are specific to the domain of each meta-task. In this embodiment, when the dialogue model is trained with the small sample learning method, the representation of a category is computed from the small number of samples in that category, and the final classification result is then obtained with a metric-based method.
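To make the meta-task construction concrete, the following Python sketch shows one way such C-way, K-shot episodes could be drawn from a labelled utterance corpus. The corpus layout, the function name sample_meta_task and the default values of C and K are illustrative assumptions and are not prescribed by this embodiment.

```python
import random
from collections import defaultdict

def sample_meta_task(dataset, num_classes=5, num_support=5, num_query=10):
    """Draw one meta-task (episode): C classes, K support samples per class,
    plus a batch of query samples drawn from the same C classes."""
    by_class = defaultdict(list)
    for sentence, label in dataset:            # dataset: list of (sentence, intent_label)
        by_class[label].append(sentence)

    classes = random.sample(list(by_class), num_classes)        # randomly draw C classes
    support, query = [], []
    for cls in classes:
        sentences = random.sample(by_class[cls], num_support + num_query)
        support += [(s, cls) for s in sentences[:num_support]]  # C x K support set
        query += [(s, cls) for s in sentences[num_support:]]    # prediction (query) objects
    return support, query

# Example usage with a toy corpus of (utterance, intent) pairs:
# support, query = sample_meta_task(corpus, num_classes=5, num_support=5)
```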
In this embodiment, the terminal device further stores a dialogue-intention relationship table in which the correspondence between a plurality of pieces of dialogue information and a plurality of pieces of intention information is stored. The method further comprises: matching the acquired dialogue information against the dialogue information in the dialogue-intention relationship table to determine the intention information corresponding to the acquired dialogue information, and taking the determined intention information as the intention information of the acquired dialogue information. In this embodiment, the method further comprises: providing the determined intention information to the user for confirmation, and outputting the intention information after receiving a confirmation instruction input by the user. In this embodiment, when it is determined that no intention information in the dialogue-intention relationship table matches the acquired dialogue information, the acquired dialogue information is analyzed according to the dialogue model to obtain the intention information, and the acquired dialogue information and the analyzed intention information are associated and stored in the dialogue-intention relationship table so as to learn the user's intention information.
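A minimal illustration of the dialogue-intention relationship table and its fall-back to the dialogue model is sketched below; the dictionary contents and the dialogue_model.analyze call are hypothetical placeholders used only to show the look-up-then-learn flow.

```python
intent_table = {
    "what is the salary for this post": "ask_salary",
    "what are the requirements of the post": "ask_requirements",
}

def resolve_intent(dialogue_info, dialogue_model):
    """Look the utterance up in the dialogue-intention table; if no entry
    matches, analyze it with the dialogue model and remember the result."""
    intent = intent_table.get(dialogue_info)
    if intent is None:
        intent = dialogue_model.analyze(dialogue_info)   # hypothetical model API
        intent_table[dialogue_info] = intent             # learn the user's intention
    return intent
```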
In this embodiment, the analyzing the dialogue information according to the dialogue model to obtain intention information includes:
a) the Encoder module constructs a word vector matrix for each sentence from the acquired dialogue information, encodes the word vector matrix to obtain sentence-level semantics, and obtains the sample vectors in the support set of the dialogue model from the sentence-level semantics (see the sketch following this list);
b) the Induction module constructs a mapping from the sample vectors to class vectors to obtain the class vector of each category; and
c) the Relation module models the interaction between each sample vector and class vector pair, and scores each pair with a fully connected layer so as to obtain the intention information.
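The Encoder step a) can be pictured with the short sketch below. PyTorch, the layer sizes and the mean-pooling of the BiLSTM states are assumptions made for illustration, not details fixed by this embodiment.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encode a sentence's word-vector matrix into one sample vector."""
    def __init__(self, emb_dim=128, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, word_vectors):           # (batch, seq_len, emb_dim)
        states, _ = self.bilstm(word_vectors)  # (batch, seq_len, 2*hidden)
        return states.mean(dim=1)              # sentence-level semantics -> sample vector
```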
In this embodiment, in the process of mapping the sample vectors to the class vectors, the Induction module regards the sample vectors in the support set as input capsules and, after one layer of dynamic routing transformation, outputs a capsule as the semantic feature representation of each class. Specifically, all samples in the support set first undergo one matrix transformation that maps the sample-level semantic space to the class-level semantic space, and information irrelevant to the class is then filtered out by dynamic routing. In this embodiment, modeling the sample-to-class mapping with dynamic routing effectively filters out interference information that is irrelevant to classification and yields the characteristics of each class.
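Continuing the PyTorch assumption, the induction step can be sketched as a single layer of dynamic routing over the support set of one class; the squash nonlinearity, the shared transformation matrix and the number of routing iterations are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(v, eps=1e-8):
    """Capsule-style nonlinearity that keeps the vector direction."""
    norm_sq = (v * v).sum(dim=-1, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

class Induction(nn.Module):
    """Map the K support sample vectors of one class to a single class vector."""
    def __init__(self, dim, iters=3):
        super().__init__()
        self.transform = nn.Linear(dim, dim, bias=False)  # sample-level -> class-level space
        self.iters = iters

    def forward(self, sample_vectors):             # (K, dim) support vectors of one class
        e_hat = self.transform(sample_vectors)     # shared matrix transformation
        logits = torch.zeros(e_hat.size(0))        # routing logits, one per sample
        for _ in range(self.iters):
            d = F.softmax(logits, dim=0)                         # coupling coefficients
            class_vec = squash((d.unsqueeze(1) * e_hat).sum(0))  # candidate class vector
            logits = logits + (e_hat * class_vec).sum(dim=1)     # agreement update
        return class_vec                           # (dim,)
```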
In this embodiment, scoring the interaction between each sample vector and class vector pair with a fully connected layer to obtain the intention information comprises: training the Relation module with a least-squares (mean squared error) loss, so that the score of a matched sample vector and class vector pair is driven towards 1 and the score of an unmatched pair is driven towards 0.
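A matching sketch of the Relation module is given below, again under the PyTorch assumption; the number of tensor slices and the way the fully connected layer and sigmoid are attached are illustrative, and the commented loss shows the least-squares targets of 1 for matched pairs and 0 for unmatched pairs.

```python
import torch
import torch.nn as nn

class Relation(nn.Module):
    """Score the interaction between a sample vector and a class vector with a
    neural tensor layer followed by a fully connected layer."""
    def __init__(self, dim, slices=16):
        super().__init__()
        self.tensor = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)
        self.fc = nn.Linear(slices, 1)

    def forward(self, sample_vec, class_vec):          # (dim,), (dim,)
        # bilinear interaction for each tensor slice: s^T M_k c
        interaction = torch.einsum('d,kde,e->k', sample_vec, self.tensor, class_vec)
        score = torch.sigmoid(self.fc(torch.relu(interaction)))
        return score                                   # scalar in (0, 1)

# Least-squares training: matched pairs are pushed towards 1, unmatched towards 0.
# loss = nn.MSELoss()(score, torch.tensor([1.0] if matched else [0.0]))
```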
Step S13, it is judged whether the intention information comprises a preset keyword.
In this embodiment, the preset keyword may be set in advance as needed. In this embodiment, the preset keyword may be set through a setting interface 20. Referring to fig. 2, a schematic diagram of the setting interface 20 according to an embodiment of the invention is shown. The setting interface 20 includes, but is not limited to, an edit button 21 and a delete button 22. Setting the preset keyword through the setting interface 20 comprises: when it is detected that the user presses the edit button 21, receiving a preset keyword edited by the user and storing it; and when it is detected that the user presses the delete button 22, deleting the preset keyword selected by the user. In this embodiment, the preset keywords may be content related to the field of human resources, such as the requirements of a post and its salary and benefits.
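A minimal sketch of the keyword handling is shown below; storing the keywords in an in-memory set and testing them with a substring check are assumptions made purely for illustration.

```python
preset_keywords = set()

def edit_keyword(keyword):           # edit button 21: add and store a keyword
    preset_keywords.add(keyword)

def delete_keyword(keyword):         # delete button 22: remove the selected keyword
    preset_keywords.discard(keyword)

def contains_preset_keyword(intention_info):
    """Step S13: judge whether the intention information contains a preset keyword."""
    return any(kw in intention_info for kw in preset_keywords)

# edit_keyword("salary"); edit_keyword("post requirements")
# contains_preset_keyword("question about the salary of this post")  # -> True
```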
Step S14, when the intention information comprises the preset keyword, a question posed by the user is determined from the intention information, and response information corresponding to the question is searched for in a question-answer database.
In this embodiment, the question-answer database is stored in the electronic device or on a cloud server. The question-answer database stores a correspondence table of questions and response information, and searching the question-answer database for the response information corresponding to the question comprises: determining, according to the question, the response information that matches the question from the correspondence table of questions and response information, and playing the response information through a voice player. In this embodiment, when the intention information comprises preset keywords such as the requirements of a post or its salary and benefits, the question the user poses about those requirements or benefits is determined from the intention information, the corresponding response information is found in the question-answer database and played by voice; the company can thus better understand the job seeker's needs and judge whether the job seeker meets the requirements of the post.
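Step S14 can be pictured as a simple look-up against the question-response correspondence table, as in the sketch below; the table entries, the matching rule and the play_voice helper are hypothetical.

```python
qa_database = {
    "what are the requirements of the post": "The post requires two years of experience.",
    "what is the salary": "The monthly salary ranges from 8k to 12k.",
}

def answer_question(question):
    """Find the response matching the question in the question-answer database."""
    for stored_question, response in qa_database.items():
        if stored_question in question or question in stored_question:
            return response
    return None

def play_voice(text):                # placeholder for the voice player
    print(f"[voice] {text}")

# response = answer_question("what is the salary for this post")
# if response: play_voice(response)
```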
In this embodiment, the method further comprises: scoring the user according to the intention information and the questions posed by the user, and outputting a scoring result.
In this embodiment, scoring the user according to the intention information and the questions posed by the user and outputting the scoring result comprises:
analyzing the preset scoring points contained in the intention information and the questions posed by the user; and
calculating the scoring result according to the formula Score = Σ_{i=1}^{N} wi × Pi, wherein Pi is the i-th preset scoring point, wi is the weight of the corresponding preset scoring point, and N is the number of the preset scoring points.
In this embodiment, the preset scoring points and their corresponding weights may be set by the user as required.
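Assuming the weighted-sum form of the scoring formula given above, the calculation can be sketched as follows; the scoring points and weights in the usage comment are hypothetical.

```python
def compute_score(points, weights):
    """Weighted-sum score over the N preset scoring points:
    Score = sum over i of w_i * P_i, where P_i is the value of the i-th
    preset scoring point (e.g. 1 if it appears in the dialogue, else 0)."""
    assert len(points) == len(weights)
    return sum(w * p for p, w in zip(points, weights))

# Example: three preset scoring points with weights 0.5, 0.3 and 0.2;
# the first two were found in the intention information, the third was not.
# compute_score([1, 1, 0], [0.5, 0.3, 0.2])  # -> 0.8
```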
In this embodiment, after step S14, the method further comprises: playing the response information by voice.
By establishing the dialogue model and analyzing the dialogue information according to it to obtain the intention information, the invention improves the accuracy of analyzing a speaker's intention. When the intention information comprises the preset keyword, the invention determines the question posed by the user from the intention information, searches the question-answer database for the response information corresponding to that question, and plays the response information by voice, so that a company can better understand a job seeker's needs and judge whether the job seeker meets the requirements of the post.
Example 2
Fig. 3 is a block diagram of a semantic analysis device 40 based on small sample learning according to an embodiment of the present invention.
In some embodiments, the semantic analysis device 40 based on small sample learning runs in an electronic device. The semantic analysis device 40 based on small sample learning may include a plurality of functional modules composed of program code segments. The program code of each program segment in the semantic analysis device 40 based on small sample learning may be stored in a memory and executed by at least one processor.
In this embodiment, the semantic analysis device 40 based on small sample learning may be divided into a plurality of functional modules according to the functions it performs. Referring to fig. 3, the semantic analysis device 40 based on small sample learning may include an obtaining module 401, a construction and analysis module 402, a determining module 403, a response module 404 and a scoring module 405. The modules referred to herein are a series of computer program segments that are stored in a memory, can be executed by at least one processor and perform a fixed function. The functions of the modules are described in detail in the following description.
The obtaining module 401 obtains the session information.
In this embodiment, the obtaining module 401 receives a conversation start instruction generated when the user presses or touches a physical button on the terminal device or a virtual button displayed on a user interface, and acquires the dialogue information according to the conversation start instruction. Specifically, after the conversation start instruction is received, the dialogue information is captured by a microphone device according to that instruction.
The construction and analysis module 402 establishes a dialogue model and analyzes the dialogue information according to the dialogue model to obtain intention information.
In this embodiment, establishing the dialogue model by the construction and analysis module 402 comprises: constructing the dialogue model based on an Encoder-Induction-Relation three-level framework and training the dialogue model by using a small sample learning method. In this embodiment, the Encoder module in the Encoder-Induction-Relation three-level framework is modeled with a bidirectional long short-term memory (BiLSTM) network. In other embodiments, the Encoder module may instead be modeled with a convolutional neural network or a Transformer structure. In this embodiment, the Induction module in the Encoder-Induction-Relation three-level framework is modeled with a dynamic routing algorithm, and the Relation module is modeled with a neural tensor network. In this embodiment, when the dialogue model is trained with the small sample learning method, the construction and analysis module 402 randomly draws C classes from the training set with K samples per class, so that C × K samples in total form a meta-task, which is input as the support set of the dialogue model; a further batch of samples is then drawn from the C classes as the prediction (query) objects of the dialogue model. During training, each round of sampling yields a different meta-task, i.e. a different combination of categories; this mechanism makes the dialogue model learn what the meta-tasks have in common, such as how to extract important features and compare sample similarity, while ignoring the parts that are specific to the domain of each meta-task. In this embodiment, when the dialogue model is trained with the small sample learning method, the representation of a category is computed from the small number of samples in that category, and the final classification result is then obtained with a metric-based method.
In this embodiment, the terminal device further stores a dialogue-intention relationship table in which the correspondence between a plurality of pieces of dialogue information and a plurality of pieces of intention information is stored. The construction and analysis module 402 is further configured to match the acquired dialogue information against the dialogue information in the dialogue-intention relationship table to determine the intention information corresponding to the acquired dialogue information, and to take the determined intention information as the intention information of the acquired dialogue information. In this embodiment, the construction and analysis module 402 is further configured to provide the determined intention information to the user for confirmation and to output the intention information after receiving a confirmation instruction input by the user. In this embodiment, the construction and analysis module 402 is further configured to, when it is determined that no intention information in the dialogue-intention relationship table matches the acquired dialogue information, analyze the acquired dialogue information according to the dialogue model to obtain the intention information, associate the acquired dialogue information with the analyzed intention information, and store the result in the dialogue-intention relationship table so as to learn the user's intention information.
In this embodiment, analyzing the dialogue information according to the dialogue model by the construction and analysis module 402 to obtain the intention information comprises:
a) the Encoder module constructs a word vector matrix for each sentence from the acquired dialogue information, encodes the word vector matrix to obtain sentence-level semantics, and obtains the sample vectors in the support set of the dialogue model from the sentence-level semantics;
b) the Induction module constructs a mapping from the sample vectors to class vectors to obtain the class vector of each category; and
c) the Relation module models the interaction between each sample vector and class vector pair, and scores each pair with a fully connected layer so as to obtain the intention information.
In this embodiment, in the process of mapping the sample vectors to the class vectors, the Induction module regards the sample vectors in the support set as input capsules and, after one layer of dynamic routing transformation, outputs a capsule as the semantic feature representation of each class. Specifically, all samples in the support set first undergo one matrix transformation that maps the sample-level semantic space to the class-level semantic space, and information irrelevant to the class is then filtered out by dynamic routing. In this embodiment, modeling the sample-to-class mapping with dynamic routing effectively filters out interference information that is irrelevant to classification and yields the characteristics of each class.
In this embodiment, scoring the interaction between each sample vector and class vector pair with a fully connected layer to obtain the intention information comprises: training the Relation module with a least-squares (mean squared error) loss, so that the score of a matched sample vector and class vector pair is driven towards 1 and the score of an unmatched pair is driven towards 0.
The determining module 403 determines whether the intention information includes a preset keyword.
In this embodiment, the preset keyword may be set in advance as needed. In this embodiment, the determining module 403 may set the preset keyword through a setting interface 20. Referring to fig. 2, a schematic diagram of the setting interface 20 according to an embodiment of the invention is shown. The setting interface 20 includes, but is not limited to, an edit button 21 and a delete button 22. Setting the preset keyword through the setting interface 20 comprises: when it is detected that the user presses the edit button 21, receiving a preset keyword edited by the user and storing it; and when it is detected that the user presses the delete button 22, deleting the preset keyword selected by the user. In this embodiment, the preset keywords may be content related to the field of human resources, such as the requirements of a post and its salary and benefits.
The response module 404 is configured to determine a question posed by the user from the intention information when the intention information includes the preset keyword, to search a question-answer database for the response information corresponding to the question, and to play the response information by voice.
In this embodiment, the question-answer database is stored in the electronic device or on a cloud server. The question-answer database stores a correspondence table of questions and response information, and searching the question-answer database for the response information corresponding to the question comprises: determining, according to the question, the response information that matches the question from the correspondence table of questions and response information, and playing the response information through a voice player. In this embodiment, when the intention information comprises preset keywords such as the requirements of a post or its salary and benefits, the question the user poses about those requirements or benefits is determined from the intention information, the corresponding response information is found in the question-answer database and played by voice; the company can thus better understand the job seeker's needs and judge whether the job seeker meets the requirements of the post.
In this embodiment, the scoring module 405 scores the user according to the intention information and the question posed by the user, and outputs a scoring result.
In this embodiment, the scoring module 405 scoring the user according to the intention information and the questions posed by the user and outputting the scoring result comprises:
analyzing the preset scoring points contained in the intention information and the questions posed by the user; and
calculating the scoring result according to the formula Score = Σ_{i=1}^{N} wi × Pi, wherein Pi is the i-th preset scoring point, wi is the weight of the corresponding preset scoring point, and N is the number of the preset scoring points.
In this embodiment, the preset scoring points and their corresponding weights may be set by the user as required.
By establishing the dialogue model and analyzing the dialogue information according to it to obtain the intention information, the invention improves the accuracy of analyzing a speaker's intention. When the intention information comprises the preset keyword, the invention determines the question posed by the user from the intention information, searches the question-answer database for the response information corresponding to that question, and plays the response information by voice, so that a company can better understand a job seeker's needs and judge whether the job seeker meets the requirements of the post.
Example 3
Fig. 4 is a schematic diagram of an electronic device 6 according to an embodiment of the invention.
The electronic device 6 comprises a memory 61, a processor 62 and a computer program 63 stored in the memory 61 and executable on the processor 62. The processor 62 implements the steps in the above embodiment of the semantic analysis method based on small sample learning, such as the steps S11 to S14 shown in fig. 1, when executing the computer program 63. Alternatively, the processor 62 implements the functions of the modules/units in the above-described small sample learning-based semantic analysis device embodiment, such as the modules 401 to 405 in fig. 3, when executing the computer program 63.
Illustratively, the computer program 63 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 62 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of the computer program 63 in the electronic device 6. For example, the computer program 63 may be divided into the obtaining module 401, the construction and analysis module 402, the determining module 403, the response module 404 and the scoring module 405 in fig. 3; the specific functions of each module are described in embodiment 2.
In this embodiment, the electronic device 6 may be a computing device such as a desktop computer, a notebook, a palm computer, and a cloud terminal device. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 6, and does not constitute a limitation of the electronic device 6, and may include more or less components than those shown, or some components may be combined, or different components, for example, the electronic device 6 may further include an input-output device, a network access device, a bus, etc.
The processor 62 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor 62 may be any conventional processor. The processor 62 is the control center of the electronic device 6 and connects the various parts of the electronic device 6 through various interfaces and lines.
The memory 61 may be used for storing the computer program 63 and/or the modules/units, and the processor 62 may implement various functions of the electronic device 6 by running or executing the computer programs and/or modules/units stored in the memory 61 and calling data stored in the memory 61. The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device 6 (such as audio data or a phonebook), and the like. In addition, the memory 61 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The integrated modules/units of the electronic device 6, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, may implement the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
In the embodiments provided in the present invention, it should be understood that the disclosed electronic device and method can be implemented in other ways. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the modules is only one logical functional division, and there may be other divisions when the actual implementation is performed.
In addition, each functional module in each embodiment of the present invention may be integrated into the same processing module, or each module may exist alone physically, or two or more modules may be integrated into the same module. The integrated module can be realized in a hardware mode, and can also be realized in a mode of hardware and a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is to be understood that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. Several modules or electronic devices recited in the electronic device claims may also be implemented by one and the same module or electronic device by means of software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A semantic analysis method based on small sample learning, which is characterized by comprising the following steps:
acquiring dialogue information;
constructing a dialogue model based on an Encoder-Induction-Relation three-level framework, training the dialogue model by using a small sample learning method, and analyzing the dialogue information according to the dialogue model to obtain intention information;
judging whether the intention information comprises preset keywords or not; and
when the intention information comprises the preset keyword, determining a question posed by a user from the intention information, and searching a question-answer database for response information corresponding to the question, wherein analyzing the dialogue information according to the dialogue model to obtain the intention information comprises:
the Encoder module in the Encoder-Induction-Relation three-level framework constructs a word vector matrix for each sentence from the acquired dialogue information, encodes the word vector matrix to obtain sentence-level semantics, and obtains the sample vectors in the support set of the dialogue model from the sentence-level semantics;
the Induction module in the Encoder-Induction-Relation three-level framework constructs a mapping from the sample vectors to class vectors to obtain the class vector of each category; and
the Relation module in the Encoder-Induction-Relation three-level framework models the interaction between each sample vector and class vector pair, and scores each pair with a fully connected layer so as to obtain the intention information.
2. The semantic analysis method according to claim 1, wherein the Encoder module in the Encoder-Induction-Relation three-level framework is modeled with a bidirectional long short-term memory network, the Induction module in the Encoder-Induction-Relation three-level framework is modeled with a dynamic routing algorithm, and the Relation module in the Encoder-Induction-Relation three-level framework is modeled with a neural tensor network.
3. The method for semantic analysis based on small sample learning according to claim 1, wherein the scoring the interaction between the sample vector and class vector pair using a fully connected layer to obtain the intention information comprises:
driving the score of a matched sample vector and class vector pair towards 1; and
driving the score of an unmatched sample vector and class vector pair towards 0.
4. The method for semantic analysis based on small sample learning according to claim 1, further comprising:
setting the preset keywords through a setting interface.
5. The method for semantic analysis based on small sample learning according to claim 4, wherein the setting the preset keywords through a setting interface comprises:
when it is detected that the user presses an edit button of the setting interface, receiving a preset keyword edited by the user and storing the preset keyword; and
deleting the preset keyword selected by the user when it is detected that the user presses a delete button of the setting interface.
6. The method for semantic analysis based on small sample learning according to claim 1, further comprising:
scoring the user according to the intention information and the questions posed by the user, and outputting a scoring result.
7. The method as claimed in claim 6, wherein the scoring the user according to the intention information and the questions posed by the user and outputting the scoring result comprises:
analyzing the preset scoring points contained in the intention information and the questions posed by the user; and
calculating the scoring result according to the formula Score = Σ_{i=1}^{N} wi × Pi, wherein Pi is the i-th preset scoring point, wi is the weight of the corresponding preset scoring point, and N is the number of the preset scoring points.
8. The method for semantic analysis based on small sample learning according to claim 1, wherein the acquiring of the dialogue information comprises:
receiving a conversation start instruction generated when the user presses or touches a physical button on the terminal device or a virtual button displayed on a user interface, and acquiring the dialogue information according to the conversation start instruction.
9. An electronic device, characterized in that: the electronic device comprises a processor for implementing the method for semantic analysis based on small sample learning according to any one of claims 1-8 when executing a computer program stored in a memory.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements a method of semantic analysis based on small sample learning according to any one of claims 1 to 8.
CN201910569780.4A 2019-06-27 2019-06-27 Semantic analysis method based on small sample learning, electronic equipment and storage medium Active CN110263346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910569780.4A CN110263346B (en) 2019-06-27 2019-06-27 Semantic analysis method based on small sample learning, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910569780.4A CN110263346B (en) 2019-06-27 2019-06-27 Semantic analysis method based on small sample learning, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110263346A CN110263346A (en) 2019-09-20
CN110263346B true CN110263346B (en) 2023-01-24

Family

ID=67922531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910569780.4A Active CN110263346B (en) 2019-06-27 2019-06-27 Semantic analysis method based on small sample learning, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110263346B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008267B (en) * 2019-10-29 2024-07-12 平安科技(深圳)有限公司 Intelligent dialogue method and related equipment
CN114036264B (en) * 2021-11-19 2023-06-16 四川大学 Email authorship attribution identification method based on small sample learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363690A (en) * 2018-02-08 2018-08-03 北京十三科技有限公司 Dialog semantics Intention Anticipation method based on neural network and learning training method
CN108427722A (en) * 2018-02-09 2018-08-21 卫盈联信息技术(深圳)有限公司 intelligent interactive method, electronic device and storage medium
CN109271505A (en) * 2018-11-12 2019-01-25 深圳智能思创科技有限公司 A kind of question answering system implementation method based on problem answers pair
CN109446306A (en) * 2018-10-16 2019-03-08 浪潮软件股份有限公司 Task-driven multi-turn dialogue-based intelligent question and answer method


Also Published As

Publication number Publication date
CN110263346A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN108897867B (en) Data processing method, device, server and medium for knowledge question answering
CN106406806B (en) Control method and device for intelligent equipment
JP2021089705A (en) Method and device for evaluating translation quality
CN106875941B (en) Voice semantic recognition method of service robot
CN111090727B (en) Language conversion processing method and device and dialect voice interaction system
CN116737908A (en) Knowledge question-answering method, device, equipment and storage medium
US11749255B2 (en) Voice question and answer method and device, computer readable storage medium and electronic device
CN110297893B (en) Natural language question-answering method, device, computer device and storage medium
WO2021218028A1 (en) Artificial intelligence-based interview content refining method, apparatus and device, and medium
CN111694940A (en) User report generation method and terminal equipment
CN110795913A (en) Text encoding method and device, storage medium and terminal
CN113535925B (en) Voice broadcasting method, device, equipment and storage medium
CN108710653B (en) On-demand method, device and system for reading book
CN110263346B (en) Semantic analysis method based on small sample learning, electronic equipment and storage medium
CN117114475A (en) Comprehensive capability assessment system based on multidimensional talent assessment strategy
CN112966076A (en) Intelligent question and answer generating method and device, computer equipment and storage medium
CN113763925B (en) Speech recognition method, device, computer equipment and storage medium
CN111444321A (en) Question answering method, device, electronic equipment and storage medium
CN114065720A (en) Conference summary generation method and device, storage medium and electronic equipment
CN117172258A (en) Semantic analysis method and device and electronic equipment
CN114218356B (en) Semantic recognition method, device, equipment and storage medium based on artificial intelligence
CN114330285B (en) Corpus processing method and device, electronic equipment and computer readable storage medium
CN111401069A (en) Intention recognition method and intention recognition device for conversation text and terminal
CN116842143A (en) Dialog simulation method and device based on artificial intelligence, electronic equipment and medium
CN115019788A (en) Voice interaction method, system, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant