CN111126075A - Semantic understanding method, system, device and medium for text adversarial training - Google Patents

Semantic understanding method, system, device and medium for text adversarial training Download PDF

Info

Publication number
CN111126075A
CN111126075A
Authority
CN
China
Prior art keywords
text
adversarial
training
user
examples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911346518.XA
Other languages
Chinese (zh)
Other versions
CN111126075B (en)
Inventor
彭德光
肖曼
高泫苏
王雅璇
孙健
汤宇腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Megalight Technology Co ltd
Original Assignee
Chongqing Megalight Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Megalight Technology Co ltd filed Critical Chongqing Megalight Technology Co ltd
Priority to CN201911346518.XA
Publication of CN111126075A
Application granted
Publication of CN111126075B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a semantic understanding method, system, device and medium for text adversarial training, comprising the following steps: acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary; generating, with one or more generators, one or more adversarial text examples based on the text corpus; and performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user. By this method, text adversarial training can be carried out and the robustness of the trained adversarial-example model is improved; the most accurate of the candidate answers can be obtained for the question asked by the user.

Description

Semantic understanding method, system, device and medium for text adversarial training
Technical Field
The invention relates to the technical field of natural language processing, and in particular to a semantic understanding method, system, device and medium for text adversarial training.
Background
With the rapid development of information technology, the internet now profoundly affects people's lives: more and more information spreads through the internet, and the volume of text data grows exponentially. This flood of text, however, lengthens browsing and searching and lowers search efficiency, so that accurately and efficiently extracting key information from massive amounts of information has become a real problem. When a user asks a question, the user usually hopes to obtain the most accurate answer, which places the candidate answers in a competitive, adversarial relation. The invention therefore provides a method, system, device and medium for text adversarial training that models this competitive relation and obtains the optimal answer to each user's question.
Disclosure of Invention
In view of the above shortcomings of the prior art, it is an object of the present invention to provide a semantic understanding method, system, device and medium for text adversarial training that solve the problems in the prior art.
To achieve the above and other related objects, the present invention provides a semantic understanding method for text adversarial training, which comprises:
acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary;
generating, with one or more generators, one or more adversarial text examples based on the text corpus;
and performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user.
Optionally, the generator comprises at least one of: a knowledge-guided generator, a manual generator, a neural generator.
Optionally, performing one or more rounds of text adversarial training on the one or more adversarial text examples includes: one or more rounds of discriminator training, one or more rounds of generator training, or both, on the one or more adversarial text examples.
Optionally, one or more rounds of discriminator training are performed on the one or more adversarial text examples to discriminate whether the text in the one or more adversarial text examples conforms to the target text.
Optionally, one or more rounds of generator training are performed on the one or more adversarial text examples to obtain the optimal text among the one or more adversarial text examples.
Optionally, one or more adversarial text examples generated by the knowledge-guided generator and/or the manual generator are obtained;
and the neural generator is iteratively trained using the generated one or more adversarial text examples as a sample set.
The invention also provides a semantic understanding system for text adversarial training, which comprises:
a vocabulary module for acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary;
an adversarial-example module for generating one or more adversarial text examples based on the text corpus with one or more generators;
and a training module for performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user.
The present invention also provides an apparatus comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform a method as described in one or more of the above.
The present invention also provides one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the methods as described in one or more of the above.
As described above, the present invention provides a semantic understanding method, system, device and medium for text adversarial training, with the following advantages: a text corpus is generated from the vocabulary in the questions asked by the user at historical moments and/or the common vocabulary in the questions asked by the user at the current moment; one or more generators generate one or more adversarial text examples based on the text corpus; and one or more rounds of text adversarial training are performed on those examples to obtain the optimal answer matching the question asked by the user. By this method, text adversarial training can be carried out and the most accurate answer can be selected from the candidate answers for the question asked by the user.
Drawings
FIG. 1 is a schematic flow chart of a text adversarial training method according to an embodiment;
FIG. 2 is a schematic diagram illustrating the connections of a text adversarial training system according to an embodiment;
FIG. 3 is a schematic diagram of a neural network according to an embodiment;
FIG. 4 is a schematic hardware structure diagram of a terminal device according to an embodiment;
FIG. 5 is a schematic hardware structure diagram of a terminal device according to another embodiment.
Description of the element reference numerals
1100 input device
1101 first processor
1102 output device
1103 first memory
1104 communication bus
1200 processing component
1201 second processor
1202 second memory
1203 communication component
1204 power supply component
1205 multimedia component
1206 voice component
1207 input/output interface
1208 sensor component
Detailed Description
The embodiments of the present invention are described below with reference to specific examples; those skilled in the art will readily understand other advantages and effects of the present invention from the disclosure of this specification. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or changed in various respects without departing from the spirit of the invention. Note that, where no conflict arises, the following embodiments and their features may be combined with each other.
Please refer to FIGS. 1 to 5. Note that the drawings provided in these embodiments only illustrate the basic idea of the invention schematically: they show only the components related to the invention rather than the number, shape and size of components in an actual implementation, where the type, quantity and proportion of components may vary freely and the layout may be more complicated. The structures, proportions and sizes shown in the drawings and described in the specification are provided for understanding and reading this disclosure, not to limit the invention, whose scope is defined by the claims; any structural modification, change of proportion or adjustment of size that does not affect the effect or purpose achievable by the invention falls within that scope. Likewise, terms such as "upper", "lower", "left", "right" and "middle" are used in this specification for clarity of description only, not to limit the scope of the invention; changes or adjustments of their relative relations, without substantive change of the technical content, are also regarded as within the scope of the invention.
Knowledge-guided generator: a generator that produces adversarial text examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset.
Manual generator: a generator that produces adversarial text examples from manually written semantic dictionaries, words, phrases and semantic rules.
Neural generator: a generator that, after being trained on historical adversarial text examples, produces new adversarial text examples.
Referring to FIG. 1, the present embodiment provides a semantic understanding method for text adversarial training, which includes the following steps:
s100, obtaining vocabularies in the problems consulted by the user at the historical moment and/or common vocabularies in the problems consulted by the user at the current moment, and generating a text base according to the vocabularies in the problems consulted by the user at the historical moment and/or the common vocabularies in the problems consulted by the user at the current moment. In the embodiment of the application, the vocabulary at the historical moment is formed according to the vocabulary after the text antagonism training; the commonly used vocabulary includes, for example, vocabulary in a semantic dictionary, commonly used vocabularies of etiquettes, and the like.
S200, generate one or more adversarial text examples based on the text corpus using one or more generators. In this embodiment of the application, the generator includes at least one of: a knowledge-guided generator, a manual generator, a neural generator. The knowledge-guided generator produces adversarial text examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset. The manual generator produces adversarial text examples from manually written semantic dictionaries, words, phrases and semantic rules. The neural generator is trained on historical adversarial text examples and then used to produce new ones.
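A minimal sketch of how these three generator types might look in code follows; every class, method and data structure here is a hypothetical illustration, since the patent describes the generators only functionally.

import random

class KnowledgeGuidedGenerator:
    # Hypothetical sketch: perturbs a sentence by substituting synonyms
    # drawn from a semantic dictionary, mirroring generation from
    # dictionaries, vocabulary, phrases and grammar.
    def __init__(self, synonym_dict):
        self.synonym_dict = synonym_dict  # word -> list of synonyms

    def generate(self, text):
        words = text.split()
        return " ".join(random.choice(self.synonym_dict.get(w, [w])) for w in words)

class ManualGenerator:
    # Hypothetical sketch: applies hand-written (pattern, replacement)
    # rewrite rules to produce adversarial variants.
    def __init__(self, rules):
        self.rules = rules

    def generate(self, text):
        for pattern, replacement in self.rules:
            text = text.replace(pattern, replacement)
        return text

class NeuralGenerator:
    # Hypothetical sketch: wraps any trainable sequence model that has
    # been fitted on historical adversarial examples.
    def __init__(self, model):
        self.model = model  # assumed to expose a generate(text) method

    def generate(self, text):
        return self.model.generate(text)

# Usage: generate adversarial variants of a user question.
kg = KnowledgeGuidedGenerator({"discount": ["promotional", "reduced-price"]})
mg = ManualGenerator([("package", "bundle")])
print(kg.generate("any discount package available"))
print(mg.generate("any discount package available"))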
S300, perform one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user and to improve the robustness of the adversarial-example model.
This embodiment thus provides a semantic understanding method for text adversarial training: vocabulary is acquired from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and a text corpus is generated from that vocabulary; one or more generators generate one or more adversarial text examples based on the text corpus; and one or more rounds of text adversarial training are performed on those examples to obtain the optimal answer matching the question asked by the user. The method enables text adversarial training, improves the robustness of the adversarial-example model, and selects the most accurate of the candidate answers for the question asked by the user.
In an exemplary embodiment, performing one or more rounds of text adversarial training on the one or more adversarial text examples includes performing one or more rounds of discriminator training on them. Specifically, the discriminator is trained to discriminate whether the text in the one or more adversarial text examples conforms to the target text. As an example, for human-style customer service, the target text is set to greeting text, adversarial text examples are generated in the robot, and text adversarial training is carried out through conversations between humans and the robot. Discriminator training judges whether the robot uses the expected expressions: for example, whether it greets the user with "hello" and says "thank you", and so on. This embodiment of the application further includes training the discriminator iteratively, for example by combining the negative examples Z produced by the manual generator with the original training examples X.
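As a hedged illustration of this discriminator step, the sketch below trains a simple bag-of-words logistic-regression discriminator on original greeting examples X (label 1) and manually generated negatives Z (label 0); the feature extraction and classifier choice are assumptions, not part of the patent.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def train_discriminator(X_examples, Z_negatives):
    # Combine original training examples X with negatives Z from the
    # manual generator, as in the iterative discriminator training above.
    texts = X_examples + Z_negatives
    labels = [1] * len(X_examples) + [0] * len(Z_negatives)
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(texts)
    discriminator = LogisticRegression(max_iter=1000)
    discriminator.fit(features, labels)
    return vectorizer, discriminator

# Usage: judge whether a robot reply conforms to the greeting target text.
X = ["hello, how may I help you", "thank you for waiting", "goodbye, have a nice day"]
Z = ["what do you want", "hurry up", "bye"]
vec, disc = train_discriminator(X, Z)
print(disc.predict(vec.transform(["hello, thank you for calling"])))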
In an exemplary embodiment, performing one or more rounds of text adversarial training on the one or more adversarial text examples includes performing one or more rounds of generator training on them, obtaining the optimal text among the examples. As an example, in a customer-service setting, a customer asks the robot what promotional packages are currently available; the robot generates an adversarial example from the words the customer typed, then gives a first answer related to the question, completing the first round of text adversarial training. If the customer is not satisfied with the first answer and asks the same question again, the robot generates a new adversarial example and answers again, giving the customer the most appropriate, optimal result. This embodiment of the application further includes training the generator iteratively.
Building on the above embodiments, the discriminator and the generator are trained against each other iteratively, so that the discriminator becomes better at judging the augmented data from the generator and the generator becomes better at producing examples against the learned discriminator. First, the discriminator D and the generator G are pre-trained on the original training examples X. The training of the discriminator and the generator then alternates over K iterations, for example K = 30. In each iteration, one mini-batch B is drawn from the raw data X, and new adversarial examples Z_G are generated from it with the adversarial-example generator. After all generated examples are collected, they are balanced according to their source and label. In each training iteration, the discriminator is optimized on the enhanced training data X + Z_G, and the discriminator loss guides the generator to select challenging examples.
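The sketch below renders this alternating procedure in code. The stub discriminator and generator, the balancing rule and all method names are hypothetical stand-ins; the patent fixes only the overall loop (pre-train D and G on X, alternate for K iterations, draw a mini-batch B, generate Z_G, balance by source and label, optimize D on X + Z_G, and let the discriminator loss guide G).

import random

class StubGenerator:
    # Trivial stand-in for an adversarial-text-example generator.
    def pretrain(self, X): pass
    def generate(self, x): return x + " ?"           # toy perturbation
    def update_from_feedback(self, Z, losses): pass  # select challenging examples

class StubDiscriminator:
    # Trivial stand-in for the discriminator.
    def pretrain(self, X): pass
    def train(self, data): pass
    def loss(self, z): return random.random()

def balance_by_source_and_label(X, Z_G):
    # The patent balances examples by source and label; here we simply
    # take equal numbers from each source.
    n = min(len(X), len(Z_G))
    return X[:n] + Z_G[:n]

def adversarial_training_loop(D, G, X, iterations=30, batch_size=4):
    D.pretrain(X)                                       # pre-train D on X
    G.pretrain(X)                                       # pre-train G on X
    for _ in range(iterations):                         # K alternating iterations
        B = random.sample(X, min(batch_size, len(X)))   # mini-batch B from raw data X
        Z_G = [G.generate(x) for x in B]                # new adversarial examples Z_G
        D.train(balance_by_source_and_label(X, Z_G))    # optimize D on X + Z_G
        losses = [D.loss(z) for z in Z_G]
        G.update_from_feedback(Z_G, losses)             # discriminator loss guides G
    return D, G

D, G = adversarial_training_loop(StubDiscriminator(), StubGenerator(),
                                 ["hello", "thank you", "goodbye", "see you"])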
In an exemplary embodiment, one or more adversarial text examples generated by the knowledge-guided generator and/or the manual generator are obtained, and the neural generator is trained iteratively using those examples as a sample set.
Specifically, an initial candidate answer set {A1, A2, ..., An} is obtained from the question Q asked by the user;
a post-adversarial score is then obtained from the switching neural network as Score = N/V(A, Q), where N/V denotes the switching neural network. The switching neural network is characterized by different input weights that stand in competitive relation; the competing weights Wi and Wj are described as follows:
[The two formulas defining the competing weights Wi and Wj appear only as equation images (BDA0002333515350000061 and BDA0002333515350000062) in the original filing and are not reproducible here.]
where i and j are natural numbers. The input structure of the neural network is shown in FIG. 3: W1 and W2 are in a competing adversarial relationship, as are W3 and W4, and W5 and W6.
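The patent gives the weight formulas only as images, so the sketch below merely illustrates one plausible reading of "weights in competitive relation": each weight pair is normalized with a softmax so that one weight grows at the other's expense. The relevance function and all names are hypothetical, not the patent's formulas.

import math

def competing_pair(wi, wj):
    # One plausible competitive coupling of a weight pair (Wi, Wj):
    # softmax normalization, so increasing one decreases the other.
    # NOT the patent's exact formulas, which are not reproduced in text.
    ei, ej = math.exp(wi), math.exp(wj)
    return ei / (ei + ej), ej / (ei + ej)

def score_candidates(question, candidates, relevance):
    # Toy switching-network scorer: Score = N/V(A, Q). Each candidate
    # gets a pair of raw input weights from a hypothetical relevance
    # function; the pairs compete as W1/W2, W3/W4, W5/W6 do in FIG. 3.
    scored = []
    for answer in candidates:
        wi, wj = relevance(question, answer)
        scored.append((answer, max(competing_pair(wi, wj))))
    return max(scored, key=lambda pair: pair[1])  # the optimal answer

# Usage with a toy relevance function (word overlap vs. brevity).
relevance = lambda q, a: (len(set(q.split()) & set(a.split())),
                          1.0 / (1 + len(a.split())))
best = score_candidates(
    "what discount package is available",
    ["we have a new discount package this month", "please hold on"],
    relevance)
print(best)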
As shown in FIG. 2, the present invention further provides a semantic understanding system for text adversarial training, which includes:
a vocabulary module M10 for acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary; in this embodiment of the application, the historical vocabulary is formed from vocabulary that has undergone text adversarial training, and the common vocabulary includes, for example, vocabulary in a semantic dictionary, common polite expressions, and the like;
an adversarial-example module M20 for generating one or more adversarial text examples based on the text corpus with one or more generators; in this embodiment of the application, the generator includes at least one of a knowledge-guided generator, a manual generator and a neural generator, as defined above;
and a training module M30 for performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user.
Through these modules, the semantic understanding system for text adversarial training acquires the vocabulary, generates the text corpus, produces one or more adversarial text examples with one or more generators, and performs one or more rounds of text adversarial training on them to obtain the optimal answer matching the question asked by the user.
In exemplary embodiments of the system, the discriminator training, the generator training, the alternating iterative training of the discriminator and the generator, and the scoring of the candidate answer set {A1, A2, ..., An} through the switching neural network proceed exactly as described above for the method embodiment.
In this way, the semantic understanding system for text adversarial training carries out text adversarial training, improves the robustness of the adversarial-example model, and obtains the most accurate of the candidate answers for the question asked by the user.
An embodiment of the present application further provides an apparatus, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method of FIG. 1. In practice, the apparatus may serve as a terminal device or as a server. Examples of the terminal device include: a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, a vehicle-mounted computer, a desktop computer, a set-top box, a smart television, a wearable device, and the like.
Embodiments of the present application also provide a non-transitory readable storage medium storing one or more modules (programs) which, when applied to a device, cause the device to execute the instructions of the method of FIG. 1 according to the embodiments of the present application.
FIG. 4 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present application. As shown, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 implements the communication connections between these elements. The first memory 1103 may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory; the first memory 1103 may store various programs for performing various processing functions and implementing the method steps of this embodiment.
Alternatively, the first processor 1101 may be, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the first processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Optionally, the input device 1100 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; the output devices 1102 may include output devices such as a display, audio, and the like.
In this embodiment, the processor of the terminal device includes the functions for executing each module of the semantic understanding system described above; for specific functions and technical effects, refer to the above embodiments, which are not repeated here.
FIG. 5 is a schematic diagram of the hardware structure of a terminal device according to another embodiment of the present application; it is a specific embodiment of the implementation process of FIG. 4. As shown, the terminal device of this embodiment may include a second processor 1201 and a second memory 1202.
The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method of FIG. 1 in the above embodiment.
The second memory 1202 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The second memory 1202 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a second processor 1201 is provided in the processing component 1200. The terminal device may further include: a communication component 1203, a power component 1204, a multimedia component 1205, a voice component 1206, an input/output interface 1207, and/or a sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing component 1200 may include one or more second processors 1201 to execute instructions to perform all or part of the steps of the data processing method described above. Further, the processing component 1200 may include one or more modules that facilitate interaction between the processing component 1200 and other components. For example, the processing component 1200 may include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power to the various components of the terminal device. The power components 1204 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia components 1205 include a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a Microphone (MIC) configured to receive external voice signals when the terminal device is in an operational mode, such as a voice recognition mode. The received speech signal may further be stored in the second memory 1202 or transmitted via the communication component 1203. In some embodiments, the speech component 1206 further comprises a speaker for outputting speech signals.
The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 1208 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor component 1208 may detect an open/closed state of the terminal device, relative positioning of the components, presence or absence of user contact with the terminal device. The sensor assembly 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 1208 may also include a camera or the like.
The communication component 1203 is configured to facilitate communications between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot therein for inserting a SIM card therein, so that the terminal device may log onto a GPRS network to establish communication with the server via the internet.
As can be seen from the above, the communication component 1203, the voice component 1206, the input/output interface 1207 and the sensor component 1208 involved in the embodiment of fig. 5 can be implemented as the input device in the embodiment of fig. 4.
In conclusion, the present invention effectively overcomes various disadvantages of the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and utility of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (9)

1. A semantic understanding method for text adversarial training, comprising the following steps:
acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary;
generating, with one or more generators, one or more adversarial text examples based on the text corpus;
and performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user.
2. The semantic understanding method for text adversarial training according to claim 1, wherein the generator comprises at least one of: a knowledge-guided generator, a manual generator, a neural generator.
3. The semantic understanding method for text adversarial training according to claim 1, wherein performing one or more rounds of text adversarial training on the one or more adversarial text examples includes: one or more rounds of discriminator training, one or more rounds of generator training, or both, on the one or more adversarial text examples.
4. The semantic understanding method for text adversarial training according to claim 3, wherein one or more rounds of discriminator training are performed on the one or more adversarial text examples to discriminate whether the text in the one or more adversarial text examples conforms to the target text.
5. The semantic understanding method for text adversarial training according to claim 3 or 4, wherein one or more rounds of generator training are performed on the one or more adversarial text examples to obtain the optimal text among the one or more adversarial text examples.
6. The semantic understanding method for text adversarial training according to claim 2, wherein one or more adversarial text examples generated by the knowledge-guided generator and/or the manual generator are obtained;
and the neural generator is iteratively trained using the generated one or more adversarial text examples as a sample set.
7. A semantic understanding system for text adversarial training, comprising:
a vocabulary module for acquiring vocabulary from the questions asked by the user at historical moments and/or common vocabulary from the questions asked by the user at the current moment, and generating a text corpus from that vocabulary;
an adversarial-example module for generating one or more adversarial text examples based on the text corpus with one or more generators;
and a training module for performing one or more rounds of text adversarial training on the one or more adversarial text examples to obtain the optimal answer matching the question asked by the user.
8. An apparatus, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method recited by one or more of claims 1-6.
9. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method recited by one or more of claims 1-6.
CN201911346518.XA 2019-12-24 2019-12-24 Semantic understanding method, system, device and medium for text adversarial training Active CN111126075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911346518.XA CN111126075B (en) 2019-12-24 2019-12-24 Semantic understanding method, system, device and medium for text adversarial training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911346518.XA CN111126075B (en) 2019-12-24 2019-12-24 Semantic understanding method, system, device and medium for text adversarial training

Publications (2)

Publication Number Publication Date
CN111126075A true CN111126075A (en) 2020-05-08
CN111126075B CN111126075B (en) 2023-07-25

Family

ID=70501955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911346518.XA Active CN111126075B (en) Semantic understanding method, system, device and medium for text adversarial training

Country Status (1)

Country Link
CN (1) CN111126075B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392147A (en) * 2017-07-20 2017-11-24 北京工商大学 A kind of image sentence conversion method based on improved production confrontation network
US20180373979A1 (en) * 2017-06-22 2018-12-27 Adobe Systems Incorporated Image captioning utilizing semantic text modeling and adversarial learning
CN110019732A (en) * 2017-12-27 2019-07-16 杭州华为数字技术有限公司 A kind of intelligent answer method and relevant apparatus
US20190237061A1 (en) * 2018-01-31 2019-08-01 Semantic Machines, Inc. Training natural language system with generated dialogues

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180373979A1 (en) * 2017-06-22 2018-12-27 Adobe Systems Incorporated Image captioning utilizing semantic text modeling and adversarial learning
CN107392147A (en) * 2017-07-20 2017-11-24 北京工商大学 A kind of image sentence conversion method based on improved production confrontation network
CN110019732A (en) * 2017-12-27 2019-07-16 杭州华为数字技术有限公司 A kind of intelligent answer method and relevant apparatus
US20190237061A1 (en) * 2018-01-31 2019-08-01 Semantic Machines, Inc. Training natural language system with generated dialogues

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAO XU, YANAN CAO, RUIPENG JIA, YANBING LIU, JIANLONG TAN, "Sequence Generative Adversarial Network for Long Text Summarization", 2018 IEEE 30th International Conference on Tools with Artificial Intelligence *
KANG Yunyun et al., "ED-GAN: A Legal Text Generation Model Based on an Improved Generative Adversarial Network", Journal of Chinese Computer Systems (小型微型计算机系统) *

Also Published As

Publication number Publication date
CN111126075B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
US11455989B2 (en) Electronic apparatus for processing user utterance and controlling method thereof
CN111428010B (en) Man-machine intelligent question-answering method and device
CN112527962A (en) Intelligent response method and device based on multi-mode fusion, machine readable medium and equipment
KR20210061141A (en) Method and apparatus for processimg natural languages
CN104485115A (en) Pronunciation evaluation equipment, method and system
US11238050B2 (en) Method and apparatus for determining response for user input data, and medium
CN111831806B (en) Semantic integrity determination method, device, electronic equipment and storage medium
US20220027574A1 (en) Method for providing sentences on basis of persona, and electronic device supporting same
CN111444321B (en) Question answering method, device, electronic equipment and storage medium
CN112906348B (en) Method, system, device and medium for automatically adding punctuation marks to text
CN117609443A (en) Intelligent interaction method, system, terminal, server and medium based on large model
US12008988B2 (en) Electronic apparatus and controlling method thereof
CN117520498A (en) Virtual digital human interaction processing method, system, terminal, equipment and medium
CN109948155B (en) Multi-intention selection method and device and terminal equipment
US20220013135A1 (en) Electronic device for displaying voice recognition-based image
US10133920B2 (en) OCR through voice recognition
CN111126075B (en) Semantic understanding method, system, device and medium for text adversarial training
CN112084780B (en) Coreference resolution method, device, equipment and medium in natural language processing
CN114047900A (en) Service processing method and device, electronic equipment and computer readable storage medium
CN111222334A (en) Named entity identification method, device, equipment and medium
US9129598B1 (en) Increasing semantic coverage with semantically irrelevant insertions
CN109829157B (en) Text emotion presenting method, text emotion presenting device and storage medium
CN112908307A (en) Audio feature extraction method, system, device and medium
CN110619038A (en) Method, system and electronic equipment for vertically guiding professional consultation
CN111833846B (en) Method and device for starting dictation state according to intention, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing

Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing

Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

GR01 Patent grant