CN111126075B - Semantic understanding method, system, device and medium for text adversarial training - Google Patents


Info

Publication number
CN111126075B
CN111126075B
Authority
CN
China
Prior art keywords
text
user
generator
examples
training
Prior art date
Legal status
Active
Application number
CN201911346518.XA
Other languages
Chinese (zh)
Other versions
CN111126075A (en)
Inventor
彭德光
肖曼
高泫苏
王雅璇
孙健
汤宇腾
Current Assignee
Chongqing Zhaoguang Technology Co ltd
Original Assignee
Chongqing Zhaoguang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Zhaoguang Technology Co ltd
Priority to CN201911346518.XA
Publication of CN111126075A
Application granted
Publication of CN111126075B
Status: Active


Classifications

    • G06N 3/045 Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N 3/08 Computing arrangements based on biological models; neural networks; learning methods


Abstract

The invention provides a semantic understanding method, system, device and medium for text adversarial training, comprising the following steps: acquiring vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment, and generating a text library from that vocabulary; generating one or more text adversarial examples based on the text library using one or more generators; and performing text adversarial training one or more times on the one or more text adversarial examples to obtain the optimal answer matching the user's question. Through this method, text adversarial training can be performed, the robustness of the model trained on adversarial examples is improved, and the most accurate answer can be obtained from the candidate answers for the user's question.

Description

Semantic understanding method, system, device and medium for text adversarial training
Technical Field
The invention relates to the technical field of natural language processing, and in particular to a semantic understanding method, system, device and medium for text adversarial training.
Background
With the rapid development of information technology, the internet now deeply influences people's lives: more and more information is transmitted through the internet, and the volume of text data grows exponentially. This huge volume of text, however, increases the time people spend browsing and searching and reduces search efficiency, so quickly and accurately acquiring key information from massive amounts of information has become a pressing problem. When a user asks a question, the user usually hopes to obtain the most accurate answer, so the candidate answers stand in a competitive, adversarial relationship. The invention therefore provides a text adversarial training method, system, device and medium that model this competitive relationship and obtain the optimal answer for each user's question.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a semantic understanding method, system, device and medium for text adversarial training, which are used to solve the problems in the prior art.
To achieve the above and other related objects, the present invention provides a semantic understanding method for text adversarial training, comprising:
acquiring vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment, and generating a text library from that vocabulary;
generating one or more text adversarial examples based on the text library using one or more generators;
and performing text adversarial training one or more times on the one or more text adversarial examples to obtain the optimal answer matching the user's question.
Optionally, the generator comprises at least one of: a knowledge-guided generator, a manual generator, and a neural generator.
Optionally, performing text adversarial training one or more times on the one or more text adversarial examples includes: performing discriminator training one or more times on the one or more text adversarial examples, and performing generator training one or more times on the one or more text adversarial examples.
Optionally, discriminator training is performed one or more times on the one or more text adversarial examples to discriminate whether text in the one or more text adversarial examples meets the target text.
Optionally, generator training is performed one or more times on the one or more text adversarial examples to obtain the optimal text among the one or more text adversarial examples.
Optionally, one or more text adversarial examples generated by the knowledge-guided generator and/or the manual generator are obtained;
the neural generator is then iteratively trained using the generated text adversarial examples as a sample set.
The invention also provides a semantic understanding system for text adversarial training, which comprises:
a vocabulary module, used for acquiring vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment, and generating a text library from that vocabulary;
an adversarial example module, used for generating one or more text adversarial examples based on the text library using one or more generators;
and a training module, used for performing text adversarial training one or more times on the one or more text adversarial examples and obtaining the optimal answer matching the user's question.
The present invention also provides an apparatus comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the method as described in one or more of the above.
The invention also provides one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause an apparatus to perform a method as described in one or more of the above.
As described above, the semantic understanding method, system, device and medium for text adversarial training provided by the invention have the following beneficial effects: vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment is acquired, and a text library is generated from that vocabulary; one or more text adversarial examples are generated based on the text library using one or more generators; and text adversarial training is performed one or more times on the one or more text adversarial examples to obtain the optimal answer matching the user's question. Through this method, text adversarial training can be performed, the robustness of the model trained on adversarial examples is improved, and the most accurate answer can be obtained from the candidate answers for the user's question.
Drawings
FIG. 1 is a flow chart of a text adversarial training method according to an embodiment;
FIG. 2 is a connection diagram of a text adversarial training system according to an embodiment;
FIG. 3 is a schematic diagram of a neural network according to an embodiment;
FIG. 4 is a schematic diagram of the hardware structure of a terminal device according to an embodiment;
FIG. 5 is a schematic diagram of the hardware structure of a terminal device according to another embodiment.
Description of element reference numerals
1100. Input device
1101. First processor
1102. Output device
1103. First memory
1104. Communication bus
1200. Processing assembly
1201. Second processor
1202. Second memory
1203. Communication assembly
1204. Power supply assembly
1205. Multimedia assembly
1206. Voice assembly
1207. Input/output interface
1208. Sensor assembly
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the invention with reference to specific examples. The invention may also be practiced or applied through other, different embodiments, and the details of this description may be modified or varied in various ways without departing from the spirit and scope of the invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other provided there is no conflict.
Please refer to FIG. 1 to FIG. 5. It should be noted that the drawings provided in this embodiment merely illustrate the basic concept of the invention in a schematic way; they therefore show only the components related to the invention rather than the number, shape and size of components in an actual implementation, where the form, quantity and proportion of the components may change arbitrarily and the component layout may be more complex. The structures, proportions and sizes shown in the drawings are for illustration only and are not intended to limit the scope of the invention, which is defined by the claims. Terms such as "upper", "lower", "left", "right", "middle" and "a" recited in this specification are for descriptive convenience only and are not intended to limit the scope of the invention; changes or adjustments of relative position, without material change to the technical content, shall also be regarded as within the scope in which the invention may be practiced.
Knowledge-guided generator: a generator that generates text adversarial examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset.
Manual generator: a generator that generates text adversarial examples from manually written semantic dictionaries, vocabulary, phrases and semantic rules.
Neural generator: a generator that is trained on historical text adversarial examples and then used to generate new text adversarial examples.
Referring to FIG. 1, the present embodiment provides a semantic understanding method for text adversarial training, which includes the following steps:
s100, acquiring vocabulary in the problems consulted by the user at the historical moment and/or common vocabulary in the problems consulted by the user at the current moment, and generating a text library according to the vocabulary in the problems consulted by the user at the historical moment and/or the common vocabulary in the problems consulted by the user at the current moment. In the embodiment of the application, the vocabulary at the history time is formed according to the vocabulary after the text resistance training; the common vocabulary includes, for example, vocabulary in a semantic dictionary, etiquette common vocabulary, and the like.
S200, generating one or more text adversarial examples based on the text library using one or more generators. In the embodiment of the application, the generator comprises at least one of the following: a knowledge-guided generator, a manual generator, and a neural generator. The knowledge-guided generator generates text adversarial examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset. The manual generator generates text adversarial examples from manually written semantic dictionaries, vocabulary, phrases and semantic rules. The neural generator is trained on historical text adversarial examples and then used to generate new text adversarial examples.
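The sketch below illustrates the three generator types under stated assumptions: the class names, the synonym-substitution strategy of the knowledge-guided generator, and the pattern-replacement rule format of the manual generator are all illustrative choices rather than the patent's implementation.

import random

class KnowledgeGuidedGenerator:
    """Creates an adversarial variant of a text by substituting a synonym
    drawn from a semantic dictionary (illustrative strategy)."""
    def __init__(self, synonym_dict):
        self.synonym_dict = synonym_dict        # word -> list of synonyms
    def generate(self, text):
        words = text.split()
        for i, word in enumerate(words):
            if word in self.synonym_dict:       # perturb the first dictionary word
                words[i] = random.choice(self.synonym_dict[word])
                break
        return " ".join(words)

class ManualGenerator:
    """Applies manually written rewrite rules, given as (pattern, replacement)."""
    def __init__(self, rules):
        self.rules = rules
    def generate(self, text):
        for pattern, replacement in self.rules:
            text = text.replace(pattern, replacement)
        return text

class NeuralGenerator:
    """Wraps a model trained on historical adversarial examples (training
    is sketched later in this description)."""
    def __init__(self, model):
        self.model = model                      # callable mapping text -> text
    def generate(self, text):
        return self.model(text)

For example, KnowledgeGuidedGenerator({"cheap": ["inexpensive"]}).generate("cheap package") yields "inexpensive package".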
S300, performing text adversarial training one or more times on the one or more text adversarial examples, obtaining the optimal answer matching the user's question and improving robustness against adversarial examples.
The method thus provides semantic understanding through text adversarial training: vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment is acquired, and a text library is generated from that vocabulary; one or more text adversarial examples are generated based on the text library using one or more generators; and text adversarial training is performed one or more times on those examples to obtain the optimal answer matching the user's question. Through this method, text adversarial training improves the robustness of the model trained on adversarial examples, and the most accurate answer can be obtained from the candidate answers for the user's question.
In an exemplary embodiment, performing text adversarial training one or more times on the one or more text adversarial examples includes performing discriminator training one or more times on the one or more text adversarial examples. Specifically, discriminator training is performed one or more times to discriminate whether text in the one or more text adversarial examples meets the target text. As an example, in a human customer-service scenario, the target text is set to polite wording, text adversarial examples are generated in the robot, and text adversarial training is performed through dialogue between a person and the robot. Discriminator training then judges whether the robot uses honorifics; for example, it discriminates whether the robot uses the polite form of "hello", and whether it says "thank you" and "goodbye". In the embodiment of the application, the discriminator is further trained iteratively, for example by combining the negative examples Z generated by the manual generator with the original training examples X and iteratively training the discriminator on the combination.
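As a toy illustration only (the polite word list and the function name are assumptions, and in practice the discriminator would be a trained classifier rather than a keyword check), the politeness discrimination in this example can be pictured as:

POLITE_WORDS = {"hello", "thank you", "goodbye"}

def meets_target_text(reply, polite_words=POLITE_WORDS):
    """Discriminate whether a robot reply meets the polite target text."""
    return any(word in reply.lower() for word in polite_words)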
In an exemplary embodiment, performing text adversarial training one or more times on the one or more text adversarial examples includes performing generator training one or more times on the one or more text adversarial examples. Specifically, generator training is performed one or more times to obtain the optimal text among the one or more text adversarial examples. As an example, in a customer-service scenario in the embodiment of the application, a customer asks the robot what promotional packages are currently available; the robot generates an adversarial example from the vocabulary the customer entered and then gives a first answer related to the question, completing one round of text adversarial training. If the customer is not satisfied with the robot's first answer and asks the same question again, the robot regenerates an adversarial example and answers the question again, so as to give the customer the most suitable, optimal result. In the embodiment of the application, the generator is likewise trained iteratively.
According to the above embodiments, the discriminator and the generator are trained iteratively against each other in this application, so that the discriminator learns to better discriminate the augmented data coming from the generator, and the generator learns to produce better examples under a learned discriminator. First, the discriminator D and the generator G are pre-trained on the original training examples X. Training of the discriminator and the generator then alternates for K iterations, for example K = 30. In each iteration, a mini-batch B is drawn from the raw data X, and new adversarial examples Z_G are generated for each mini-batch using the adversarial example generator. After all generated examples are collected, the data are balanced according to the origin and labels of the examples. In each training iteration, the discriminator is optimized on the augmented training data X + Z_G, and the discriminator loss is used to guide the generator in selecting challenging examples.
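A minimal sketch of this alternating scheme follows, assuming discriminator and generator objects that expose the listed methods; every name here is an illustrative assumption rather than the patent's implementation.

def minibatches(X, size):
    """Yield successive mini-batches B from the raw data X."""
    for i in range(0, len(X), size):
        yield X[i:i + size]

def balance_by_origin_and_label(originals, generated):
    """Simplest balancing assumption: label original examples as real (1)
    and generated examples as adversarial (0)."""
    return [(x, 1) for x in originals] + [(z, 0) for z in generated]

def adversarial_training(X, discriminator, generator, K=30, batch_size=32):
    discriminator.pretrain(X)                   # pre-train D on original examples X
    generator.pretrain(X)                       # pre-train G on original examples X
    for _ in range(K):                          # alternate for K iterations, e.g. 30
        for B in minibatches(X, batch_size):
            Z_G = [generator.generate(x) for x in B]    # new adversarial examples
            data = balance_by_origin_and_label(B, Z_G)  # augmented data X + Z_G
            loss = discriminator.train_step(data)       # optimize D on X + Z_G
            generator.update(loss)              # D's loss guides G to hard examples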
In an exemplary embodiment, one or more text adversarial examples generated by the knowledge-guided generator and/or the manual generator are obtained; the neural generator is then iteratively trained using the generated text adversarial examples as a sample set.
Specifically, an initial candidate answer set {A_1, A_2, ..., A_n} is obtained from the question Q consulted by the user;
the adversarial score Score = N/V(A, Q) is then obtained from the switching neural network, where N/V denotes the switching neural network. The switching neural network is characterized by different input weights that stand in a competitive adversarial relation to each other; the competing weights W_i, W_j are described as follows:
wherein i, j are natural numbers. The input structure of the neural network is shown in FIG. 3: W_1 and W_2 are in a competitive adversarial relationship, W_3 and W_4 are in a competitive adversarial relationship, and W_5 and W_6 are in a competitive adversarial relationship.
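The weight formula itself is not reproduced in this text, so the following is only an assumed instantiation: a competing pair (W_i, W_j) is modelled as a pairwise softmax over two raw weights, so that strengthening one weight suppresses the other. All names and the pairwise-softmax choice are assumptions, not the patent's formula.

import math

def competing_pair(u_i, u_j):
    """Assumed realization of a competing weight pair: a pairwise softmax,
    giving W_i + W_j = 1 so that increasing one weight weakens the other."""
    e_i, e_j = math.exp(u_i), math.exp(u_j)
    return e_i / (e_i + e_j), e_j / (e_i + e_j)

def score_candidate(candidate_features, question_features, weight_pairs):
    """Score one candidate answer A against the question Q using paired
    weights (W_1, W_2), (W_3, W_4), (W_5, W_6), ... on elementwise features."""
    weights = []
    for u_i, u_j in weight_pairs:
        weights.extend(competing_pair(u_i, u_j))
    joint = [a * q for a, q in zip(candidate_features, question_features)]
    return sum(w * x for w, x in zip(weights, joint))

Under this assumed scoring, the candidate with the highest score would be returned as the optimal answer.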
The method thus provides semantic understanding through text adversarial training: vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment is acquired, and a text library is generated from that vocabulary; one or more text adversarial examples are generated based on the text library using one or more generators; and text adversarial training is performed one or more times on those examples to obtain the optimal answer matching the user's question. Through this method, text adversarial training improves the robustness of the model trained on adversarial examples, and the most accurate answer can be obtained from the candidate answers for the user's question.
As shown in FIG. 2, the present invention further provides a semantic understanding system for text adversarial training, which includes:
the vocabulary module M10 is used for acquiring the vocabulary in the problem consulted by the user at the historical moment and/or the common vocabulary in the problem consulted by the user at the current moment, and generating a text library according to the vocabulary in the problem consulted by the user at the historical moment and/or the common vocabulary in the problem consulted by the user at the current moment; in the embodiment of the application, the vocabulary at the history time is formed according to the vocabulary after the text resistance training; the common vocabulary includes, for example, vocabulary in a semantic dictionary, etiquette common vocabulary, and the like.
The adversarial example module M20 is used for generating one or more text adversarial examples based on the text library using one or more generators. In the embodiment of the application, the generator comprises at least one of the following: a knowledge-guided generator, a manual generator, and a neural generator. The knowledge-guided generator generates text adversarial examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset. The manual generator generates text adversarial examples from manually written semantic dictionaries, vocabulary, phrases and semantic rules. The neural generator is trained on historical text adversarial examples and then used to generate new text adversarial examples.
And the training module M30 is used for performing text adversarial training one or more times on the one or more text adversarial examples to obtain the optimal answer matching the user's question.
The semantic understanding system for text adversarial training acquires vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment and generates a text library from that vocabulary; generates one or more text adversarial examples based on the text library using one or more generators; and performs text adversarial training one or more times on those examples to obtain the optimal answer matching the user's question. Through this system, text adversarial training improves the robustness of the model trained on adversarial examples.
In an exemplary embodiment, performing text adversarial training one or more times on the one or more text adversarial examples includes performing discriminator training one or more times on the one or more text adversarial examples. Specifically, discriminator training is performed one or more times to discriminate whether text in the one or more text adversarial examples meets the target text. As an example, in a human customer-service scenario, the target text is set to polite wording, text adversarial examples are generated in the robot, and text adversarial training is performed through dialogue between a person and the robot. Discriminator training then judges whether the robot uses honorifics; for example, it discriminates whether the robot uses the polite form of "hello", and whether it says "thank you" and "goodbye". In the embodiment of the application, the discriminator is further trained iteratively, for example by combining the negative examples Z generated by the manual generator with the original training examples X and iteratively training the discriminator on the combination.
In an exemplary embodiment, performing text adversarial training one or more times on the one or more text adversarial examples includes performing generator training one or more times on the one or more text adversarial examples. Specifically, generator training is performed one or more times to obtain the optimal text among the one or more text adversarial examples. As an example, in a customer-service scenario in the embodiment of the application, a customer asks the robot what promotional packages are currently available; the robot generates an adversarial example from the vocabulary the customer entered and then gives a first answer related to the question, completing one round of text adversarial training. If the customer is not satisfied with the robot's first answer and asks the same question again, the robot regenerates an adversarial example and answers the question again, so as to give the customer the most suitable, optimal result. In the embodiment of the application, the generator is likewise trained iteratively.
According to the above embodiments, the discriminator and the generator are trained iteratively against each other in this application, so that the discriminator learns to better discriminate the augmented data coming from the generator, and the generator learns to produce better examples under a learned discriminator. First, the discriminator D and the generator G are pre-trained on the original training examples X. Training of the discriminator and the generator then alternates for K iterations, for example K = 30. In each iteration, a mini-batch B is drawn from the raw data X, and new adversarial examples Z_G are generated for each mini-batch using the adversarial example generator. After all generated examples are collected, the data are balanced according to the origin and labels of the examples. In each training iteration, the discriminator is optimized on the augmented training data X + Z_G, and the discriminator loss is used to guide the generator in selecting challenging examples.
In an exemplary embodiment, one or more text adversarial examples generated by the knowledge-guided generator and/or the manual generator are obtained; the neural generator is then iteratively trained using the generated text adversarial examples as a sample set.
Specifically, an initial candidate answer set {A_1, A_2, ..., A_n} is obtained from the question Q consulted by the user;
the adversarial score Score = N/V(A, Q) is then obtained from the switching neural network, where N/V denotes the switching neural network. The switching neural network is characterized by different input weights that stand in a competitive adversarial relation to each other; the competing weights W_i, W_j are described as follows:
wherein i, j are natural numbers. The input structure of the neural network is shown in FIG. 3: W_1 and W_2 are in a competitive adversarial relationship, W_3 and W_4 are in a competitive adversarial relationship, and W_5 and W_6 are in a competitive adversarial relationship.
The semantic understanding system for text adversarial training acquires vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment and generates a text library from that vocabulary; generates one or more text adversarial examples based on the text library using one or more generators; and performs text adversarial training one or more times on those examples to obtain the optimal answer matching the user's question. Through this system, text adversarial training improves the robustness of the model trained on adversarial examples, and the most accurate answer can be obtained from the candidate answers for the user's question.
The embodiment of the application also provides a device, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method described in FIG. 1. In practical applications, the device may be used as a terminal device or as a server. Examples of the terminal device may include: smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, car computers, desktop computers, set-top boxes, smart televisions, wearable devices, etc.; the embodiments of the present application do not limit the specific device.
The embodiment of the application further provides a non-volatile readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, they can cause the device to execute the instructions of the steps included in the method of FIG. 1 in the embodiment of the application.
Fig. 4 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present application. As shown, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103 and at least one communication bus 1104. The communication bus 1104 is used to enable communication connections between the elements. The first memory 1103 may comprise a high-speed RAM memory or may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory; various programs may be stored in the first memory 1103 for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the first processor 1101 may be implemented as, for example, a central processing unit (Central Processing Unit, abbreviated as CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the first processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Alternatively, the input device 1100 may include a variety of input devices, for example, may include at least one of a user-oriented user interface, a device-oriented device interface, a programmable interface of software, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware insertion interface (such as a USB interface, a serial port, etc.) for data transmission between devices; alternatively, the user-oriented user interface may be, for example, a user-oriented control key, a voice input device for receiving voice input, and a touch-sensitive device (e.g., a touch screen, a touch pad, etc. having touch-sensitive functionality) for receiving user touch input by a user; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, for example, an input pin interface or an input interface of a chip, etc.; the output device 1102 may include a display, sound, or the like.
In this embodiment, the processor of the terminal device may include functionality for executing each module of the apparatus described above; specific functions and technical effects can be found in the above embodiments and are not repeated here.
Fig. 5 is a schematic diagram of the hardware structure of a terminal device according to another embodiment of the present application; it shows one particular implementation of the embodiment of Fig. 4. As shown, the terminal device of the present embodiment may include a second processor 1201 and a second memory 1202.
The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in fig. 1 in the above embodiment.
The second memory 1202 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, video, etc. The second memory 1202 may include a random access memory (random access memory, simply RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a second processor 1201 is provided in the processing assembly 1200. The terminal device may further include: a communication component 1203, a power component 1204, a multimedia component 1205, a voice component 1206, an input/output interface 1207, and/or a sensor component 1208. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing assembly 1200 may include one or more second processors 1201 to execute instructions to perform all or part of the steps in the data processing methods described above. Further, the processing component 1200 may include one or more modules that facilitate interactions between the processing component 1200 and other components. For example, the processing component 1200 may include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power to the various components of the terminal device. Power supply components 1204 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 1205 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a Microphone (MIC) configured to receive external voice signals when the terminal device is in an operational mode, such as a voice recognition mode. The received voice signals may be further stored in the second memory 1202 or transmitted via the communication component 1203. In some embodiments, the voice component 1206 further includes a speaker for outputting voice signals.
The input/output interface 1207 provides an interface between the processing assembly 1200 and peripheral interface modules, which may be click wheels, buttons, and the like. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor assembly 1208 includes one or more sensors for providing status assessment of various aspects for the terminal device. For example, the sensor assembly 1208 may detect an on/off state of the terminal device, a relative positioning of the assembly, and the presence or absence of user contact with the terminal device. The sensor assembly 1208 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 1208 may also include a camera or the like.
The communication component 1203 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot therein for inserting a SIM card, so that the terminal device may log into a GPRS network and establish communication with a server via the internet.
From the above, the communication component 1203, the voice component 1206, the input/output interface 1207, and the sensor component 1208 in the embodiment of fig. 5 can be implemented as the input device in the embodiment of fig. 4.
In summary, the present invention effectively overcomes the disadvantages of the prior art and has high industrial utility value.
The above embodiments merely illustrate the principles of the present invention and its effects, and are not intended to limit the invention. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.

Claims (9)

1. A semantic understanding method of text adversarial training, comprising the steps of:
acquiring vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment, and generating a text library from that vocabulary;
generating one or more text adversarial examples based on the text library using one or more generators;
and performing text adversarial training one or more times on the one or more text adversarial examples to obtain the optimal answer matching the user's question.
2. The semantic understanding method of text adversarial training according to claim 1, characterized in that: the generator includes at least one of: a knowledge-guided generator, a manual generator and a neural generator;
the knowledge-guided generator is a generator that generates text adversarial examples from the semantic dictionary, vocabulary, phrases and sentence grammar in a dataset;
the manual generator is a generator that generates text adversarial examples from manually written semantic dictionaries, vocabulary, phrases and semantic rules;
the neural generator is a generator that is trained on historical text adversarial examples and then used to generate new text adversarial examples.
3. The semantic understanding method of text adversarial training according to claim 1, characterized in that: performing text adversarial training one or more times on the one or more text adversarial examples includes: performing discriminator training one or more times on the one or more text adversarial examples, and performing generator training one or more times on the one or more text adversarial examples.
4. The semantic understanding method of text adversarial training according to claim 3, characterized in that: discriminator training is performed one or more times on the one or more text adversarial examples to discriminate whether text in the one or more text adversarial examples meets the target text.
5. The semantic understanding method of text adversarial training according to claim 3 or 4, characterized in that: generator training is performed one or more times on the one or more text adversarial examples to obtain the optimal text among the one or more text adversarial examples.
6. The semantic understanding method of text adversarial training according to claim 2, characterized in that: one or more text adversarial examples generated by the knowledge-guided generator and/or the manual generator are obtained;
the neural generator is then iteratively trained using the generated text adversarial examples as a sample set.
7. A semantic understanding system for text adversarial training, comprising:
a vocabulary module, used for acquiring vocabulary from the questions the user consulted at historical moments and/or common vocabulary from the question the user consults at the current moment, and generating a text library from that vocabulary;
an adversarial example module, used for generating one or more text adversarial examples based on the text library using one or more generators;
and a training module, used for performing text adversarial training one or more times on the one or more text adversarial examples and obtaining the optimal answer matching the user's question.
8. An apparatus, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the method of one or more of claims 1-6.
9. One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause an apparatus to perform the method of one or more of claims 1-6.
CN201911346518.XA 2019-12-24 2019-12-24 Semantic understanding method, system, device and medium for text adversarial training Active CN111126075B (en)


Publications (2)

Publication Number Publication Date
CN111126075A CN111126075A (en) 2020-05-08
CN111126075B (en) 2023-07-25





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing

Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing

Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

GR01 Patent grant