CN111913563A - Man-machine interaction method and device based on semi-supervised learning
- Publication number
- CN111913563A (application CN201910377324.XA)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/26—Speech to text systems
Abstract
The invention discloses a human-computer interaction method based on semi-supervised learning, which comprises the following steps: obtaining a relation extraction model through semi-supervised learning; acquiring the user's voice information; recognizing the voice information to obtain the target entity and target text specified by the user; obtaining the text content contained in the target text; performing entity relation extraction on the text content through the relation extraction model to obtain the entity pairs and their relations contained in the target text; and, according to the target entity and in combination with the extracted entity pairs and relations, giving corresponding feedback to the user. The invention further discloses a human-computer interaction device based on semi-supervised learning. Because entity relations are extracted on the basis of semi-supervised learning, no massive labeled data is needed, which saves labor and reduces complexity, while effective information can still be extracted accurately and fed back to the user, realizing good human-computer interaction.
Description
Technical Field
The invention relates to the field of human-computer interaction, in particular to a human-computer interaction method and device based on semi-supervised learning.
Background
The explosive growth of the Internet has produced all kinds of text information, in which a large number of person entities and the relationships among them are embedded. In this rapidly developing era, people often want to obtain the effective information they care about quickly.
The traditional way of obtaining semantic relationships, manual reading and comprehension, is limited by the reader's reading ability; moreover, the semantic relationships may be numerous, making it difficult for a reader to obtain the desired information quickly. There is currently no intelligent device on the market that can read a text on behalf of a user, extract the effective information from it, quickly deliver the information the user needs, and thereby interact with the user.
Disclosure of Invention
To solve the above technical problems, the invention provides a human-computer interaction method and device based on semi-supervised learning. Specifically, the technical scheme of the invention is as follows:
In one aspect, the invention discloses a human-computer interaction method based on semi-supervised learning, which comprises the following steps:
acquiring a relation extraction model through semi-supervised learning;
acquiring user voice information;
recognizing the user voice information to obtain a target entity and a target text specified by the user;
acquiring text content contained in the target text according to the target text specified by the user;
performing entity relationship extraction on the text content through the relationship extraction model to obtain entity pairs and relationships thereof contained in the target text;
and according to the target entity, giving corresponding feedback to the user in combination with the entity pairs contained in the text content and their relations.
Preferably, the obtaining of the relationship extraction model through semi-supervised learning comprises:
acquiring a text sample marked with an entity relationship and a text sample not marked with the entity relationship;
training an initial model through the text sample marked with the entity relationship to obtain an initial relationship extraction model;
and training and correcting the initial relation extraction model by using the text sample which is not marked with the entity relation through an iterative method to obtain a final relation extraction model.
Preferably, the human-computer interaction method based on semi-supervised learning further comprises:
and constructing an entity relation map of the target text according to the entity pairs and their relations contained in the text content.
Preferably, the human-computer interaction method based on semi-supervised learning further comprises:
acquiring a target entity pair and a corresponding relation from the entity pair and the relation contained in the text information; the target entity pair comprises the target entity;
constructing a relation map of the target entity according to the target entity pair and the corresponding relation;
and feeding back the relation map of the target entity to the user.
Preferably, the extracting entity relationship of the text content by the relationship extraction model to obtain the entity pair and the relationship thereof included in the target text specifically includes:
searching for an associated sentence related to the target entity in the text content of the target text according to the target entity;
and extracting the relation of all the searched associated sentences related to the target entity through the relation extraction model to obtain the associated entities related to the target entity and the relation corresponding to the associated entities.
In another aspect, the invention also discloses a human-computer interaction device based on semi-supervised learning, which comprises:
the semi-supervised learning module is used for acquiring a relation extraction model through semi-supervised learning;
the voice acquisition module is used for acquiring voice information of a user;
the voice recognition module is used for recognizing the voice information of the user and acquiring a target entity and a target text specified by the user;
the extracted content acquisition module is used for acquiring text content contained in the target text according to the target text specified by the user;
the relation extraction module is used for extracting the entity relation of the text content through the relation extraction model to obtain the entity pair and the relation contained in the target text;
and the feedback module is used for giving corresponding feedback to the user according to the target entity by combining the entity pair contained in the text information and the relation thereof.
Preferably, the semi-supervised learning module comprises:
the sample acquisition submodule is used for acquiring a text sample marked with the entity relationship and a text sample not marked with the entity relationship;
the initial training submodule is used for training an initial model through the text sample marked with the entity relationship to obtain an initial relationship extraction model;
and the correction training sub-module is used for training and correcting the initial relation extraction model by using the text sample which is not marked with the entity relation through an iterative method to obtain a final relation extraction model.
Preferably, the human-computer interaction device based on semi-supervised learning further comprises:
and the text map building module is used for constructing an entity relation map of the target text according to the entity pairs and their relations contained in the text content.
Preferably, the human-computer interaction device based on semi-supervised learning further comprises:
the information selection module is used for acquiring a target entity pair and a corresponding relation from the entity pair and the relation contained in the text information; the target entity pair comprises the target entity;
the target map building module is used for building a relation map of the target entity according to the target entity pair and the corresponding relation;
the feedback module is further configured to feed back the relationship map of the target entity to the user.
Preferably, the extracted content obtaining module is further configured to search, according to the target entity, an associated sentence related to the target entity in the text content of the target text;
the relation extraction module is further configured to perform relation extraction on all found associated statements related to the target entity through the relation extraction model, and obtain an associated entity related to the target entity and a relation corresponding to the associated entity.
The invention at least comprises the following technical effects:
(1) The method breaks through the limitations of traditional manual reading: using the relation extraction model obtained through semi-supervised learning, it can help the user extract effective information from the text being read, feed that information back to the user, help the user quickly obtain the content to be understood, and form good interaction with the user.
(2) In the prior art, supervised learning is commonly used for relation extraction. Although supervised relation extraction systems achieve high accuracy and recall, they depend heavily on a pre-established relation type system and labeled data sets, especially when deep learning methods are used: due to the characteristics of neural networks, a large amount of training data is needed to obtain a good classification model, so the labor investment and cost are high. The accuracy of unsupervised methods, on the other hand, is generally difficult to guarantee. By adopting semi-supervised learning for relation extraction, the invention needs no massive labeled data while still guaranteeing a certain accuracy, which saves labor and reduces complexity.
(3) After obtaining the entity pairs and corresponding relations contained in the target text content, the invention screens them to obtain the entities related to the target entity and their relations with it, then constructs a relation map for the target entity, and finally feeds that relation map back to the user, so that the user can see the relationships between the target entity and the other entities at a glance.
(4) Because the text content may contain a great deal of information, extracting all entity pairs and relations from it can be very difficult. To reduce the extraction complexity, the invention can directly search the text content for the associated sentences related to the target entity and then perform relation extraction with only these associated sentences as the extraction objects, which greatly reduces the extraction difficulty and complexity and improves the extraction efficiency.
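The associated-sentence screening described above can be sketched as follows; the regex-based sentence splitter and the exact-match entity lookup are simplifying assumptions for illustration, not the patent's implementation:

```python
import re

def find_associated_sentences(text, target_entity):
    """Keep only the sentences of `text` that mention the target entity."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if target_entity in s]

text = ("Yu Ming played in Shanghai. The weather was fine that day. "
        "Yu Ming later won the Golden Ball award.")
print(find_associated_sentences(text, "Yu Ming"))
# ['Yu Ming played in Shanghai.', 'Yu Ming later won the Golden Ball award.']
```

Only the two matching sentences are passed on to the relation extraction model, so the irrelevant middle sentence never reaches it.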
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of an embodiment of a human-computer interaction method based on semi-supervised learning according to the present invention;
FIG. 2 is a flowchart of another embodiment of a human-computer interaction method based on semi-supervised learning according to the present invention;
FIG. 3 is a flowchart of another embodiment of a human-computer interaction method based on semi-supervised learning according to the present invention;
FIG. 4 is a diagram of a relationship map of a target entity of the present invention;
FIG. 5 is a flowchart of another embodiment of a human-computer interaction method based on semi-supervised learning according to the present invention;
FIG. 6 is a block diagram of a human-computer interaction device based on semi-supervised learning according to an embodiment of the present invention;
FIG. 7 is a block diagram of a human-computer interaction device based on semi-supervised learning according to another embodiment of the present invention;
FIG. 8 is a block diagram of a human-computer interaction device based on semi-supervised learning according to another embodiment of the present invention;
FIG. 9 is a block diagram of a human-computer interaction device based on semi-supervised learning according to another embodiment of the present invention.
Reference numerals:
10 - semi-supervised learning module; 20 - voice acquisition module; 30 - speech recognition module; 40 - extracted content acquisition module; 50 - relation extraction module; 60 - feedback module; 70 - text map building module; 80 - information selection module; 90 - target map building module; 11 - sample acquisition submodule; 12 - initial training submodule; 13 - correction training submodule.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically depicted, or only one of them is labeled. In this document, "one" means not only "only one" but also a case of "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad), such as mobile phones, laptop computers, smart voice devices, or tablet computers. It should also be understood that in some embodiments the terminal device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a network creation application, a word processing application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a Web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the following description is made with reference to the accompanying drawings. Obviously, the drawings described below are only some examples of the invention, and those skilled in the art can derive other drawings and embodiments from them without inventive effort.
Fig. 1 shows a flowchart of an implementation of the human-computer interaction method based on semi-supervised learning according to the present invention. The method may be applied to terminal devices such as intelligent voice devices or learning machines. In this embodiment, for ease of understanding, an intelligent voice device is taken as the executing subject, but those skilled in the art will understand that the method may also be applied to other terminal devices as long as the corresponding functions can be implemented. The human-computer interaction method based on semi-supervised learning comprises the following steps:
s101, acquiring a relation extraction model through semi-supervised learning;
the existing mainstream entity relation extraction technology is divided into a supervised learning method, a semi-supervised learning method and an unsupervised learning method. In the supervised learning method, a relation extraction task is taken as a classification problem, effective features are designed according to training data so as to learn various classification models, and then a trained classifier (equivalent to the relation extraction model) is used for predicting the relation. The problem with this approach is that a large number of labeled corpora are required.
The semi-supervised learning method mainly adopts Bootstrapping to extract the relation. For the relation to be extracted, the method firstly sets a plurality of seed examples manually, and then extracts a relation template corresponding to the relation and more examples from the data in an iterative manner. The Bootstrapping algorithm, among others, refers to the reconstruction of new samples sufficient to represent the distribution of maternal samples through repeated sampling with limited sample data.
Unsupervised methods assume that entity pairs with the same semantic relationship have similar context information. The semantic relationship of each entity pair can therefore be represented by the context of that pair, and the semantic relationships of all entity pairs can be obtained by clustering those contexts.
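The context-similarity assumption behind the unsupervised approach can be made concrete with a toy measure; Jaccard overlap of context words is only a placeholder for whatever similarity a real clustering step would use, and the entity names are invented:

```python
def context_similarity(ctx_a, ctx_b):
    """Jaccard overlap between the word sets of two contexts."""
    a, b = set(ctx_a.split()), set(ctx_b.split())
    return len(a & b) / len(a | b)

# Each entity pair is represented by the words appearing between the entities.
contexts = {
    ("Alice", "Acme"): "works for",
    ("Bob", "Beta"): "works for",
    ("Carol", "Dave"): "is married to",
}
same = context_similarity(contexts[("Alice", "Acme")], contexts[("Bob", "Beta")])
diff = context_similarity(contexts[("Alice", "Acme")], contexts[("Carol", "Dave")])
print(same, diff)  # 1.0 0.0
```

Pairs whose contexts overlap strongly would land in the same cluster and thus be assigned the same (unnamed) semantic relation.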
In the prior art, supervised learning is commonly used for relation extraction. Although supervised relation extraction systems achieve high accuracy and recall, they depend heavily on a pre-established relation type system and labeled data sets, especially when deep learning methods are used: due to the characteristics of neural networks, a large amount of training data is needed to obtain a good classification model, so the labor investment and cost are high. The accuracy of unsupervised methods, on the other hand, is generally difficult to guarantee. By adopting semi-supervised learning for relation extraction, the invention needs no massive labeled data while still guaranteeing a certain accuracy, which saves labor and reduces complexity.
S102, acquiring user voice information;
Specifically, for example, when a user is studying or reading and wants to quickly obtain information from the article or report currently being read, the user can state the demand to the intelligent voice device by voice, and the device collects the user's voice information through a microphone or another voice collecting device.
S103, recognizing the user voice information to obtain a target entity and a target text specified by the user;
Specifically, for example, the user's voice information is recognized as: "I want to know the personal information of Deng Jiaxian in the lesson 'Deng Jiaxian'." After the voice information is recognized, it can be determined from it that the target entity the user wants to know about is Deng Jiaxian, and that the target text is the lesson "Deng Jiaxian".
Of course, it is also possible that an explicit target text cannot be obtained from the user's voice information. Specifically, step S103 of recognizing the user voice information and obtaining the target entity and target text specified by the user comprises:
S1031, recognizing the user voice information, and obtaining the target entity and the target text specified by the user from the voice information;
S1032, judging whether the target text specified by the user is semantically incomplete;
S1033, when the target text specified by the user is judged to be semantically incomplete, acquiring an image currently indicated by the user as the target text.
Specifically, for example, after the user's voice information is recognized, the target entity specified by the user is a certain person and the specified target text is "the report", but which specific report is meant is unclear. At this time, an image of what the user is currently indicating can be captured by a camera on the intelligent voice device; the indicated image is the image of the report specified by the user, namely the report currently being read, and the captured image of the report serves as the target text. Of course, the report may be a paper report or an electronic one, such as a report the user is currently viewing on a tablet computer.
S104, acquiring text content contained in the target text according to the target text specified by the user;
Specifically, if a definite target text is obtained directly from the user's voice information, the corresponding text content can be obtained from it. For example, if the target text is the lesson "Deng Jiaxian", the specific content of that lesson can be looked up accordingly.
If the target text obtained from the user's voice information is not well defined (for example, the voice information alone only yields "the report", and which report is meant is ambiguous), the target text is semantically incomplete, and it is necessary to capture an image of what the user is currently indicating, recognize that image, and obtain the text information it contains. Specifically, if the obtained target text is a captured image of the report the user is currently reading, image processing must be performed on that image to recognize the text information it contains; the recognized text information is then the text content of the specified target text.
S105, performing entity relation extraction on the text content through the relation extraction model to obtain entity pairs and relations contained in the target text;
Specifically, after the text content of the target text is acquired, entity relation extraction is performed on it through the previously obtained relation extraction model to obtain the entity pairs contained in the target text and the corresponding relation information. For example, the entity pairs and corresponding relations extracted from a sports report are shown in Table 1:
Table 1: Partial entity pairs and corresponding relations extracted from a sports report
S106, according to the target entity, giving corresponding feedback to the user in combination with the entity pairs and relations contained in the text content.
Specifically, according to the extraction result of the relation extraction model on the target text, combined with the target entity the user wants to know about, the answer the user wants can be obtained and a corresponding response given. For example, according to the information extracted from the above report, the user can be given the relevant information about the target entity: Yu Ming is a Chinese football player who has played in Shanghai and has won the Golden Ball award; Yu Ming's teacher is Liu, and Yu Ming's wife is Wang Li, who also comes from Shanghai.
In this embodiment, corresponding feedback is given to the user according to the target entity in combination with the entity pairs and relations contained in the text content. The feedback may directly return the extracted information about the target entity to the user; or organize the information about the target entity into a short piece of text and then return it; or form a relation map of the target entity from the extraction result of the target text content; or select the information related to the target entity from the extraction result, construct a relation map of the target entity, and feed it back to the user. The feedback can take the form of a voice broadcast or be displayed to the user on a display screen.
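A minimal sketch of step S106, assuming the extraction result is available as a list of (head, relation, tail) triples; the triples below echo the example report above and are illustrative only:

```python
def feedback_for(target_entity, triples):
    """Keep the triples involving the target entity and verbalize them."""
    facts = [(h, r, t) for h, r, t in triples if target_entity in (h, t)]
    return " ".join(f"{h} {r} {t}." for h, r, t in facts)

triples = [
    ("Yu Ming", "played in", "Shanghai"),
    ("Yu Ming", "wife", "Wang Li"),
    ("Liu", "teacher of", "Yu Ming"),
    ("Wang Li", "comes from", "Shanghai"),
]
print(feedback_for("Yu Ming", triples))
```

The triple about Wang Li alone is filtered out because the target entity appears in neither position; a real system would verbalize the kept facts more fluently or render them as a relation map instead.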
In another embodiment of the present invention, on the basis of the above embodiment, how to obtain the relationship extraction model through semi-supervised learning is explained in detail, specifically, as shown in fig. 2, the man-machine interaction method based on semi-supervised learning of the embodiment includes:
s201, acquiring a text sample marked with an entity relationship and a text sample not marked with the entity relationship;
Specifically, the relation extraction model is obtained by training in a semi-supervised manner. The scheme mainly adopts the Bootstrapping method of semi-supervised learning, which requires only a small number of text samples labeled with entity relations together with a large number of text samples that need no labeling.
S202, training an initial model through the text sample marked with the entity relationship to obtain an initial relationship extraction model;
specifically, the semi-supervised learning is mainly divided into two stages, wherein the first stage is to carry out supervised training through a small number of labeled samples to obtain an initial relation extraction model.
S203, training and correcting the initial relationship extraction model by using the text sample which is not marked with the entity relationship through an iterative method to obtain a final relationship extraction model;
In the second stage, the initial relation extraction model is used to extract relations from the text samples that are not labeled with entity relationships, yielding new entity pairs and corresponding relations (new relation tuples). The extractions with higher confidence are then screened out and fed back into the initial relation extraction model as newly labeled text samples, further training and correcting it. This stage is repeated until the specified number of iterations is reached, giving the final relation extraction model.
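The two-stage procedure described above can be sketched as follows. The classifier and confidence scoring below are deliberately toy placeholders (the patent does not fix a concrete model); only the loop structure, supervised initialization followed by screening high-confidence pseudo-labels and retraining for a fixed number of iterations, mirrors the description.

```python
def bootstrap(labeled, unlabeled, train, predict, threshold=0.8, iterations=3):
    """labeled: list of (text, relation); unlabeled: list of text.
    train(samples) -> model; predict(model, text) -> (relation, confidence)."""
    model = train(labeled)                       # stage 1: supervised init
    pool = list(unlabeled)
    for _ in range(iterations):                  # stage 2: iterate
        confident, rest = [], []
        for text in pool:
            relation, conf = predict(model, text)
            (confident if conf >= threshold else rest).append((text, relation))
        if not confident:                        # nothing new to learn from
            break
        labeled = labeled + confident            # add pseudo-labeled samples
        pool = [text for text, _ in rest]
        model = train(labeled)                   # retrain and correct
    return model

# Toy stand-ins: the "model" is just the word set seen with one relation.
def toy_train(samples):
    return {w for text, rel in samples if rel == "author_of" for w in text.split()}

def toy_predict(model, text):
    hits = len(model & set(text.split()))
    return ("author_of", 1.0) if hits >= 2 else ("none", 0.0)

model = bootstrap(
    labeled=[("Lu Xun wrote Kong Yiji", "author_of")],
    unlabeled=["Lu Xun wrote Medicine", "the sky is blue"],
    train=toy_train, predict=toy_predict,
)
```

In this run, the sentence about "Medicine" is confidently pseudo-labeled in the first iteration and absorbed into the model, while the unrelated sentence never is.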
S204, acquiring user voice information;
S205, recognizing the user voice information, and acquiring a target entity and a target text specified by the user;
S206, acquiring text content contained in the target text according to the target text specified by the user;
S207, performing entity relation extraction on the text content through the relation extraction model to obtain the entity pairs and relations contained in the target text;
S208, according to the target entity, in combination with the entity pairs and relations contained in the text information, giving corresponding feedback to the user.
In this embodiment, the final relation extraction model is obtained through semi-supervised learning: the initial model is trained with only a small amount of labeled text combined with a large amount of unlabeled text, so the relation extraction model finally used is obtained while saving time and labor.
In another embodiment of the method of the present invention, on the basis of any of the above embodiments, a step of constructing an entity relation map of the target text is added. Specifically, after entity relation extraction is performed on the text content through the relation extraction model to obtain the entity pairs and relations contained in the target text, the method further includes: constructing an entity relation map of the target text according to the entity pairs and relations contained in the text information.
In another embodiment of the method, after the entity pairs and corresponding relations contained in the target text content are obtained, relation screening is performed to obtain the entities related to the target entity and their relations with it; the relation map corresponding to the target entity is then constructed, and finally the relation map of the target entity is fed back to the user, so that the user can see the relations between the target entity and other entities at a glance. Specifically, as shown in fig. 3, the man-machine interaction method based on semi-supervised learning in this embodiment includes:
S301, acquiring a relation extraction model through semi-supervised learning;
S302, acquiring user voice information;
S303, recognizing the user voice information to obtain a target entity and a target text specified by the user;
S304, acquiring text content contained in the target text according to the target text specified by the user;
S305, performing entity relation extraction on the text content through the relation extraction model to obtain the entity pairs and relations contained in the target text;
S306, acquiring a target entity pair and a corresponding relation from the entity pairs and relations contained in the text information; the target entity pair comprises the target entity;
Specifically, many entity pairs and relations may be extracted from the text information, while the user only needs to know about the target entity; therefore, the information related to the target entity can be selected from the extraction result.
S307, constructing a relation map of the target entity according to the target entity pair and the corresponding relation;
Specifically, after the extracted information related to the target entity is acquired, the relation map of the target entity is constructed from it. FIG. 4 shows a schematic diagram of the relation map of a target entity, Youging.
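In the simplest reading, a relation map restricted to the target entity is the star of extracted edges incident to that entity. The sketch below assumes (head, relation, tail) triples as the extraction result; the entities and relation labels are invented for illustration.

```python
def target_relation_map(target_entity, triples):
    """Return {relation: [neighbor entities]} for edges touching the target."""
    graph = {}
    for head, relation, tail in triples:
        if head == target_entity:
            graph.setdefault(relation, []).append(tail)
        elif tail == target_entity:
            graph.setdefault(relation, []).append(head)
    return graph

# Hypothetical triples extracted from a story.
triples = [
    ("Yu Qing", "mother_of", "Xiao Ming"),
    ("Yu Qing", "works_at", "No. 1 Middle School"),
    ("Xiao Ming", "classmate_of", "Xiao Hong"),   # not incident to Yu Qing
]
graph = target_relation_map("Yu Qing", triples)
```

Such a dictionary maps directly onto the kind of star-shaped diagram Fig. 4 presumably shows, with the target entity at the center and one labeled edge per neighbor.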
S308, displaying the relation map of the target entity to the user, and feeding back the relation information of the target entity to the user in a voice mode.
Specifically, the related information of the target entity may be fed back to the user by voice according to the relation map of the target entity, or the relation map itself may be displayed to the user.
In another embodiment of the method of the present invention, on the basis of any of the above embodiments, the step of performing entity relation extraction on the text content through the relation extraction model to obtain the entity pairs and relations contained in the target text is refined. Specifically, as shown in fig. 5, this embodiment includes:
S401, acquiring a relation extraction model through semi-supervised learning;
S402, acquiring user voice information;
S403, recognizing the user voice information, and acquiring a target entity and a target text specified by the user;
S404, acquiring text content contained in the target text according to the target text specified by the user;
S405, searching for associated sentences related to the target entity in the text content of the target text according to the target entity;
Specifically, when the text content is long, for example a full-length novel, it contains a great many entity pairs and relations, which makes extracting them from the text content difficult. To reduce the extraction complexity, this embodiment can directly search the text content for the sentences associated with the target entity and then use those associated sentences as the extraction objects.
S406, performing relation extraction, through the relation extraction model, on all the found associated sentences related to the target entity, to obtain the associated entities related to the target entity and their corresponding relations;
Specifically, the retrieved associated sentences are input into the relation extraction model for relation extraction, and the entities related to the target entity and the corresponding relations are obtained.
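Steps S405 and S406 can be sketched as a simple sentence pre-filter ahead of extraction. The splitting heuristic below is a naive assumption for illustration; a real system might use a proper sentence segmenter and also resolve pronouns so that sentences like "He studied..." are not lost.

```python
import re

def associated_sentences(text, target_entity):
    """Split text into sentences and keep those mentioning the target entity."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s for s in sentences if target_entity in s]

text = ("Deng Jiaxian was a physicist. He studied in the USA. "
        "Deng Jiaxian led the atomic bomb project.")
hits = associated_sentences(text, "Deng Jiaxian")
```

Only the two sentences naming the target entity survive the filter; the pronoun sentence is dropped, which is exactly the trade-off this pre-filter makes in exchange for lower extraction cost.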
S407, constructing a relationship map of the target entity according to the obtained associated entity related to the target entity and the relationship corresponding to the associated entity;
Similarly, after the relevant information of the target entity is obtained, the relation map of the target entity can be constructed from it.
S408, giving corresponding feedback to the user according to the constructed relation map of the target entity.
Based on the same technical concept, the invention also discloses a human-computer interaction device based on semi-supervised learning, which can interact with a user using the human-computer interaction method based on semi-supervised learning described above. Specifically, as shown in fig. 6, the human-computer interaction device based on semi-supervised learning comprises:
a semi-supervised learning module 10, configured to obtain a relationship extraction model through semi-supervised learning;
Specifically, the existing mainstream entity relation extraction techniques are divided into supervised, semi-supervised and unsupervised learning methods. The semi-supervised method mainly adopts Bootstrapping for relation extraction: for the relation to be extracted, several seed instances are first set manually, and the relation templates corresponding to the relation, together with more instances, are then extracted from the data iteratively. Here, the Bootstrapping algorithm refers to reconstructing, by repeatedly sampling limited sample data, new samples sufficient to represent the distribution of the parent sample.
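A toy illustration of the seed-and-template idea: from seed entity pairs, textual templates are induced, and the templates then harvest new pairs from the corpus. This exact-match version is far simpler than any production Bootstrapping system and is only meant to show the two halves of one iteration; all sentences and entities are invented.

```python
def induce_patterns(corpus, seeds):
    """Turn each sentence containing a seed pair into a '{X} ... {Y}' template."""
    patterns = set()
    for sentence in corpus:
        for x, y in seeds:
            if x in sentence and y in sentence:
                patterns.add(sentence.replace(x, "{X}").replace(y, "{Y}"))
    return patterns

def apply_patterns(corpus, patterns, entities):
    """Match every ordered candidate entity pair against the templates."""
    found = set()
    for sentence in corpus:
        for x in entities:
            for y in entities:
                if x == y:
                    continue
                if any(p.format(X=x, Y=y) == sentence for p in patterns):
                    found.add((x, y))
    return found

corpus = ["Paris is the capital of France", "Tokyo is the capital of Japan"]
seeds = [("Paris", "France")]
patterns = induce_patterns(corpus, seeds)
new_pairs = apply_patterns(corpus, patterns, ["Tokyo", "Japan", "Paris", "France"])
```

One seed pair yields the template "{X} is the capital of {Y}", which in turn discovers the (Tokyo, Japan) instance; in a real system the new instances would seed the next iteration, with confidence scoring to curb semantic drift.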
In the prior art, supervised learning is commonly used for relation extraction. Although supervised relation extraction systems achieve high accuracy and recall, they depend heavily on a pre-established relation type system and labeled data sets; deep learning methods in particular, owing to the characteristics of neural networks, need a large amount of training data to obtain a good classification network model, at a high cost in labor. The accuracy of unsupervised methods, in turn, is generally difficult to guarantee. Performing relation extraction with semi-supervised learning therefore requires no massive labeled data while guaranteeing a certain accuracy, saving labor and reducing complexity.
A voice obtaining module 20, configured to obtain user voice information;
Specifically, when a user is studying or reading, for example, and wants to quickly obtain certain information from the article or report currently being read, the user can state the demand to the smart voice device by voice, and the smart voice device collects the user's voice information through a microphone or another voice collecting device.
A voice recognition module 30, configured to recognize the user voice information, and obtain a target entity and a target text specified by the user;
Specifically, suppose for example that the user's voice information is recognized as: "I want to know the personal information about Deng Jiaxian in the lesson 'Deng Jiaxian'". After the voice information is recognized, it can be determined that the target entity the user wants to know about is Deng Jiaxian, and that the target text is the lesson text "Deng Jiaxian".
Of course, it is also possible that no explicit target text can be obtained from the user's voice information: for example, the recognized voice information specifies the target entity (a character) and the target text (the story the user is referring to) without making clear which specific story is meant. In this case, a current indication image of the user can be captured by a camera on the intelligent voice device, where the indication image is an image of the story specified by the user, that is, an image of the story currently being read, and the obtained image of the story serves as the target text. For this, the human-computer interaction device further comprises an image acquisition module, configured to acquire an indication image of the user when the target text cannot be obtained from the user's voice information and to take the indication image as the target text.
An extracted content obtaining module 40, configured to obtain text content included in the target text according to the target text specified by the user;
Specifically, if a concrete target text is obtained directly from the user's voice information, the corresponding text content can be acquired from it; for example, if the target text is the lesson text "Deng Jiaxian", the specific content taught in that lesson can be found accordingly.
If the target text obtained from the user's voice information is not well defined, for example only "the report", without it being clear which report is meant, the target text is semantically deficient. In this case the camera must capture a current indication image of the user, and the extracted content obtaining module then recognizes the indication image to obtain the text information it contains. Specifically, if the indicated target text is a captured image of the story the user is currently reading, and the user wants the corresponding target content, the image must undergo image processing so that the text information in the text image is recognized; the recognized text information is then the text content of the specified target text.
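The fallback flow described here, use the spoken title if one was recognized and otherwise capture an image and recognize its text, might be sketched as below. The OCR engine and the text lookup are stubbed out as assumptions; a real device could plug in, for example, a Tesseract-based recognizer for the `ocr` callable.

```python
def resolve_text_content(spoken_target_text, capture_image, ocr):
    """Prefer the text named in speech; otherwise fall back to camera + OCR."""
    if spoken_target_text:               # e.g. a lesson title was recognized
        return lookup_text(spoken_target_text)
    image = capture_image()              # camera shot of the page being read
    return ocr(image)                    # recognized text becomes the content

def lookup_text(title):
    # Placeholder for retrieving the stored content of a known title.
    return f"<content of {title}>"

# Stubs standing in for the camera and the OCR engine.
content = resolve_text_content("", lambda: "IMG", lambda img: "page text")
```

Injecting the camera and OCR as callables keeps the decision logic testable without hardware, which is why they are parameters rather than module-level dependencies in this sketch.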
A relationship extraction module 50, configured to perform entity relationship extraction on the text content through the relationship extraction model, so as to obtain an entity pair and a relationship thereof included in the target text;
specifically, after the text content of the target text is acquired, the entity relationship extraction is performed on the text content through the previously acquired relationship extraction model, so as to obtain the entity pair and the corresponding relationship information included in the target text.
And a feedback module 60, configured to give corresponding feedback to the user according to the target entity, in combination with the entity pairs and relations contained in the text information.
Specifically, according to the extraction result of the relation extraction model on the target text, combined with the target entity the user wants to know about, the answer the user wants can be obtained, and a corresponding response is then given.
In this embodiment, the user is given corresponding feedback according to the target entity, in combination with the entity pairs and their relations contained in the text information. The feedback may directly return the extracted information about the target entity to the user; it may organize that information into a short passage about the target entity and then feed it back; or it may build a relation map of the target entity from the extraction result of the target text content, or from the subset of the extraction result related to the target entity, and feed that map back to the user. The feedback can be delivered as a voice broadcast or displayed to the user on a display screen.
In another embodiment of the apparatus of the present invention, as shown in fig. 7, on the basis of the previous embodiment of the apparatus, the semi-supervised learning module 10 includes:
the sample obtaining sub-module 11 is configured to obtain a text sample with an entity relationship marked and a text sample with an entity relationship not marked;
the initial training submodule 12 is configured to train an initial model through the text sample labeled with the entity relationship, so as to obtain an initial relationship extraction model;
and the correction training submodule 13 is configured to train and correct the initial relationship extraction model by using the text sample that is not labeled with the entity relationship through an iterative method, so as to obtain a final relationship extraction model.
In this embodiment, the relation extraction model is obtained by training in a semi-supervised manner. The scheme mainly adopts the Bootstrapping method of semi-supervised learning, which requires a small number of text samples labeled with entity relationships and a large number of text samples that need no labeling. Specifically, the semi-supervised learning is divided into two stages: in the first stage, supervised training is carried out on the small number of labeled samples to obtain an initial relation extraction model. In the second stage, the initial relation extraction model is used to extract relations from the text samples not labeled with entity relationships, yielding new entity pairs and corresponding relations (new relation tuples); the extractions with higher confidence are then screened out and fed into the initial relation extraction model as newly labeled text samples to further train and correct it. This stage is repeated until the specified number of iterations is reached, giving the final relation extraction model.
In this embodiment, the final relation extraction model is obtained through semi-supervised learning: the initial model is trained with only a small amount of labeled text combined with a large amount of unlabeled text, so the relation extraction model finally used is obtained while saving time and labor.
In another embodiment of the apparatus of the present invention, on the basis of any one of the above embodiments of the apparatus, as shown in fig. 8, the human-computer interaction apparatus based on semi-supervised learning further includes:
and the text map building module 70 is configured to build an entity relationship map of the target text according to the entity pair group and the relationship thereof included in the text information.
Specifically, after the entity relationship extraction is performed on the text content through the relationship extraction model to obtain the entity pair and the relationship thereof included in the target text, the text map construction module 70 constructs the entity relationship map of the target text according to the entity pair group and the relationship thereof included in the text information, and the effective information in the target text can be visually obtained from the entity relationship map of the target text.
In another embodiment of the apparatus of the present invention, as shown in fig. 9, on the basis of the above embodiment of the apparatus, the human-computer interaction apparatus based on semi-supervised learning further includes:
an information selecting module 80, configured to obtain a target entity pair and a corresponding relationship from the entity pair and the relationship included in the text information; the target entity pair comprises the target entity;
a target map construction module 90, configured to construct a relationship map of the target entity according to the target entity pair and the corresponding relationship;
the feedback module 60 is further configured to feed back the relationship map of the target entity to the user.
In this embodiment, after the relation extraction module 50 obtains the entity pairs and corresponding relations contained in the target text content, the information selecting module 80 performs relation screening to obtain the entities related to the target entity and their relations with it, the target map construction module 90 then constructs the relation map corresponding to the target entity, and finally the feedback module 60 feeds the relation map of the target entity back to the user, so that the user can see the relations between the target entity and other entities at a glance.
Preferably, on the basis of any one of the above device embodiments, the extracted content obtaining module 40 is further configured to search, according to the target entity, an associated sentence related to the target entity in the text content of the target text;
the relationship extraction module 50 is further configured to perform relationship extraction on all found association statements related to the target entity through the relationship extraction model, and obtain an association entity related to the target entity and a relationship corresponding to the association entity.
Specifically, when the text content is long, for example a full-length novel, it contains a great many entity pairs and relations, which makes extracting them difficult. To reduce the extraction complexity, the extracted content obtaining module 40 of this embodiment may further directly search the text content for the sentences associated with the target entity and use those associated sentences as the extraction objects. The relation extraction module 50 then inputs the obtained associated sentences into the relation extraction model for relation extraction and obtains the entities related to the target entity and the corresponding relations. The feedback module 60 can accordingly feed the relevant information of the target entity back to the user.
This device embodiment corresponds to the method embodiments above; the technical details of the man-machine interaction method based on semi-supervised learning also apply to the man-machine interaction device based on semi-supervised learning and, to reduce repetition, are not restated here.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A man-machine interaction method based on semi-supervised learning is characterized by comprising the following steps:
acquiring a relation extraction model through semi-supervised learning;
acquiring user voice information;
recognizing the user voice information to obtain a target entity and a target text specified by the user;
acquiring text content contained in the target text according to the target text specified by the user;
performing entity relationship extraction on the text content through the relationship extraction model to obtain entity pairs and relationships thereof contained in the target text;
and according to the target entity, giving corresponding feedback to the user in combination with the entity pairs and relations contained in the text information.
2. The human-computer interaction method based on semi-supervised learning as claimed in claim 1, wherein the obtaining of the relationship extraction model through semi-supervised learning comprises:
acquiring a text sample marked with an entity relationship and a text sample not marked with the entity relationship;
training an initial model through the text sample marked with the entity relationship to obtain an initial relationship extraction model;
and training and correcting the initial relation extraction model by using the text sample which is not marked with the entity relation through an iterative method to obtain a final relation extraction model.
3. The human-computer interaction method based on semi-supervised learning as recited in claim 1, further comprising:
and constructing an entity relation map of the target text according to the entity pairs and relations contained in the text information.
4. The human-computer interaction method based on semi-supervised learning as recited in claim 1, further comprising:
acquiring a target entity pair and a corresponding relation from the entity pair and the relation contained in the text information; the target entity pair comprises the target entity;
constructing a relation map of the target entity according to the target entity pair and the corresponding relation;
and feeding back the relation map of the target entity to the user.
5. The human-computer interaction method based on semi-supervised learning according to any one of claims 1-4,
the extracting entity relationship of the text content through the relationship extraction model to obtain the entity pair and the relationship thereof included in the target text specifically includes:
searching for an associated sentence related to the target entity in the text content of the target text according to the target entity;
and extracting the relation of all the searched associated sentences related to the target entity through the relation extraction model to obtain the associated entities related to the target entity and the relation corresponding to the associated entities.
6. A human-computer interaction device based on semi-supervised learning is characterized by comprising:
the semi-supervised learning module is used for acquiring a relation extraction model through semi-supervised learning;
the voice acquisition module is used for acquiring voice information of a user;
the voice recognition module is used for recognizing the voice information of the user and acquiring a target entity and a target text specified by the user;
the extracted content acquisition module is used for acquiring text content contained in the target text according to the target text specified by the user;
the relation extraction module is used for extracting the entity relation of the text content through the relation extraction model to obtain the entity pair and the relation contained in the target text;
and the feedback module is used for giving corresponding feedback to the user according to the target entity by combining the entity pair contained in the text information and the relation thereof.
7. A human-computer interaction device based on semi-supervised learning according to claim 6, wherein the semi-supervised learning module comprises:
the sample acquisition submodule is used for acquiring a text sample marked with the entity relationship and a text sample not marked with the entity relationship;
the initial training submodule is used for training an initial model through the text sample marked with the entity relationship to obtain an initial relationship extraction model;
and the correction training sub-module is used for training and correcting the initial relation extraction model by using the text sample which is not marked with the entity relation through an iterative method to obtain a final relation extraction model.
8. The human-computer interaction device based on semi-supervised learning as claimed in claim 6, further comprising:
and the text map building module is used for building an entity relation map of the target text according to the entity pairs and relations contained in the text information.
9. The human-computer interaction device based on semi-supervised learning as claimed in claim 6, further comprising:
the information selection module is used for acquiring a target entity pair and a corresponding relation from the entity pair and the relation contained in the text information; the target entity pair comprises the target entity;
the target map building module is used for building a relation map of the target entity according to the target entity pair and the corresponding relation;
the feedback module is further configured to feed back the relationship map of the target entity to the user.
10. A human-computer interaction device based on semi-supervised learning according to any one of claims 6-9,
the extracted content acquisition module is further used for searching the associated sentences related to the target entities in the text content of the target text according to the target entities;
the relation extraction module is further configured to perform relation extraction on all found associated statements related to the target entity through the relation extraction model, and obtain an associated entity related to the target entity and a relation corresponding to the associated entity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910377324.XA CN111913563A (en) | 2019-05-07 | 2019-05-07 | Man-machine interaction method and device based on semi-supervised learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111913563A true CN111913563A (en) | 2020-11-10 |
Family
ID=73241914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910377324.XA Pending CN111913563A (en) | 2019-05-07 | 2019-05-07 | Man-machine interaction method and device based on semi-supervised learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111913563A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112651513A (en) * | 2020-12-22 | 2021-04-13 | 厦门渊亭信息科技有限公司 | Information extraction method and system based on zero sample learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009017464A1 (en) * | 2007-07-31 | 2009-02-05 | Agency For Science, Technology And Research | Relation extraction system |
CN107967267A (en) * | 2016-10-18 | 2018-04-27 | 中兴通讯股份有限公司 | A kind of knowledge mapping construction method, apparatus and system |
CN109346059A (en) * | 2018-12-20 | 2019-02-15 | 广东小天才科技有限公司 | Dialect voice recognition method and electronic equipment |
CN109359297A (en) * | 2018-09-20 | 2019-02-19 | 清华大学 | A kind of Relation extraction method and system |
CN109635108A (en) * | 2018-11-22 | 2019-04-16 | 华东师范大学 | A kind of remote supervisory entity relation extraction method based on human-computer interaction |
Non-Patent Citations (1)
Title |
---|
Wu Wenya et al.: "A Survey of Chinese Entity Relation Extraction Research", Computer and Modernization, no. 08, 15 August 2018 (2018-08-15), pages 21 - 26 *
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN108052577B (en) | Universal text content mining method, device, server and storage medium | |
CN107273019B (en) | Collaborative gesture based input language | |
CN110659366A (en) | Semantic analysis method and device, electronic equipment and storage medium | |
EP3872652B1 (en) | Method and apparatus for processing video, electronic device, medium and product | |
CN107526846B (en) | Method, device, server and medium for generating and sorting channel sorting model | |
CN111324771A (en) | Video tag determination method and device, electronic equipment and storage medium | |
CN103593378A (en) | Terminal and method for determining type of input method editor | |
WO2020052061A1 (en) | Method and device for processing information | |
CN109284367B (en) | Method and device for processing text | |
CN114564666A (en) | Encyclopedic information display method, encyclopedic information display device, encyclopedic information display equipment and encyclopedic information display medium | |
CN113722438A (en) | Sentence vector generation method and device based on sentence vector model and computer equipment | |
WO2024149183A1 (en) | Document display method and apparatus, and electronic device | |
CN110020110B (en) | Media content recommendation method, device and storage medium | |
CN110717008A (en) | Semantic recognition-based search result ordering method and related device | |
CN113033163B (en) | Data processing method and device and electronic equipment | |
CN113038175B (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN110110143A (en) | A kind of video classification methods and device | |
CN117272977A (en) | Character description sentence recognition method and device, electronic equipment and storage medium | |
CN111913563A (en) | Man-machine interaction method and device based on semi-supervised learning | |
CN112231444A (en) | Processing method and device for corpus data combining RPA and AI and electronic equipment | |
CN109472028B (en) | Method and device for generating information | |
CN111597936A (en) | Face data set labeling method, system, terminal and medium based on deep learning | |
CN111090977A (en) | Intelligent writing system and intelligent writing method | |
US20220156611A1 (en) | Method and apparatus for entering information, electronic device, computer readable storage medium | |
CN113569741A (en) | Answer generation method and device for image test questions, electronic equipment and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||