WO2022206091A1 - Method and apparatus for generating data - Google Patents

Method and apparatus for generating data

Info

Publication number
WO2022206091A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
model
corpus
training
data generation
Application number
PCT/CN2022/070250
Other languages
English (en)
Chinese (zh)
Inventor
刘瑞雪
陈蒙
Original Assignee
京东科技控股股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-03-30
Filing date
2022-01-05
Publication date
2022-10-06
Application filed by 京东科技控股股份有限公司
Publication of WO2022206091A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval of unstructured textual data
    • G06F16/35 - Clustering; Classification
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/289 - Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 - Named entity recognition
    • G06F40/40 - Processing or translation of natural language
    • G06F40/42 - Data-driven translation
    • G06F40/49 - Data-driven translation using very large corpora, e.g. the web

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, in particular to the field of artificial intelligence, and more specifically to a method and apparatus for generating data.
  • Data augmentation is a technique for generating additional, equivalent data from limited data in order to expand a training data set; it is an effective means of overcoming a shortage of training data.
  • Deep learning methods usually require a large amount of training data to avoid overfitting.
  • In practice, enough data sometimes cannot be obtained, and data augmentation is needed to solve this problem.
  • In the related art, text data augmentation methods fall into two categories.
  • The first locally modifies a sentence to generate a new sentence while preserving the sentence's original structure.
  • For example, new sentences are generated through simple synonym replacement, random word swapping, random word deletion, and so on, as sketched below.
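  • A minimal Python sketch of these local edits, given purely as an illustration (it is not part of the disclosure); the small synonyms dictionary stands in for a real thesaurus:

```python
import random

def synonym_replace(words, synonyms, n=1):
    # Replace up to n words that have an entry in the synonym dictionary.
    out = list(words)
    candidates = [i for i, w in enumerate(out) if w in synonyms]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(synonyms[out[i]])
    return out

def random_swap(words, n=1):
    # Swap two randomly chosen positions, n times.
    out = list(words)
    for _ in range(n):
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_delete(words, p=0.1):
    # Drop each word with probability p, keeping at least one word.
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

sentence = "I want to buy a green air conditioner".split()
print(synonym_replace(sentence, {"buy": ["purchase", "order"]}))
print(random_swap(sentence))
print(random_delete(sentence))
```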
  • Another example is the recently proposed masked language model approach, which predicts masked words and conditions the prediction on class labels to expand the data.
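  • As an illustration only, a plain fill-mask pass with an off-the-shelf BERT via the Hugging Face transformers library shows the masked-prediction half of this idea; conditioning on class labels would require a label-conditioned model variant, which is not shown here:

```python
from transformers import pipeline

# Mask one word and let a masked language model propose in-context substitutes.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("I want to [MASK] a green air conditioner."):
    print(candidate["token_str"], candidate["score"])
```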
  • The second category pre-trains a text generation model on a large amount of data and then uses that model to generate complete sentences, rather than making local changes.
  • One example is back translation: the corpus is first translated into another language and then translated back into the source language, producing more varied sentences; a sketch under an assumed translation helper follows.
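  • A sketch of back translation, assuming a generic translate(text, src, tgt) helper backed by any machine translation model or service; the helper is hypothetical, not an interface from this disclosure:

```python
def back_translate(sentence, translate, src="en", pivot="fr"):
    # Source -> pivot language -> back to source, yielding a varied paraphrase.
    pivot_text = translate(sentence, src=src, tgt=pivot)
    return translate(pivot_text, src=pivot, tgt=src)
```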
  • Another is paraphrasing, which generates more sentences by adding noise to the input of the text generation model.
  • Embodiments of the present disclosure propose methods and apparatus for generating data.
  • An embodiment of the present disclosure provides a method for generating data. The method includes: acquiring target training data and target data generation conditions, where the target training data includes a corpus of the target domain marked with feature labels; determining the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain a target sample set; and training a pre-trained model based on the target sample set, adjusting the parameters of the pre-trained model to obtain a retrained target data generation model.
  • The pre-trained model is obtained through the following steps: constructing an initial model and training the initial model on a general sample set. The method then uses the target data generation model to generate target data based on the target data generation conditions.
  • In some embodiments, training the pre-trained model based on the target sample set includes: inputting the target sample labels into the pre-trained model, using the target sample corpus as the expected output, and training the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation conditions include target feature labels, and generating target data with the target data generation model based on those conditions includes: inputting the target feature labels into the target data generation model to obtain a target corpus, and determining the target corpus as the target data.
  • In some embodiments, the target feature label is a classification label estimated by a pre-built classification model from a corpus to be recognized. Before the target corpus is determined as the target data, the method further includes: inputting the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification model's preset label set includes that classification label, determining the target corpus as the target data, which is used to construct training samples for the classification model.
  • In some embodiments, training the pre-trained model based on the target sample set includes: inputting the target sample corpus into the pre-trained model, using the target sample labels as the expected output, and training the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation condition includes a target corpus to be recognized, and generating target data with the target data generation model based on that condition includes: inputting the target corpus to be recognized into the target data generation model to obtain the feature labels of that corpus, and determining those feature labels as the target data.
  • Embodiments of the present disclosure also provide an apparatus for generating data. The apparatus comprises: a data acquisition unit configured to acquire target training data and target data generation conditions, where the target training data includes a corpus of the target domain marked with feature labels;
  • a sample construction unit configured to determine the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set;
  • a model adjustment unit configured to train the pre-trained model based on the target sample set, adjusting its parameters to obtain the retrained target data generation model, where the pre-trained model is obtained by constructing an initial model and training it on a general sample set;
  • and a data generation unit configured to use the target data generation model to generate target data based on the target data generation conditions.
  • In some embodiments, the model adjustment unit is further configured to: input the target sample labels into the pre-trained model, use the target sample corpus as the expected output, and train the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation conditions include target feature labels, and the data generation unit is further configured to: input the target feature labels into the target data generation model to obtain the target corpus, and determine the target corpus as the target data.
  • In some embodiments, the target feature label is a classification label estimated by a pre-built classification model from the corpus to be recognized, and the data generation unit further includes a data verification module configured to: input the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification label is included in the classification model's preset label set, determine the target corpus as the target data, which is used to construct training samples for the classification model.
  • In some embodiments, the model adjustment unit is further configured to: input the target sample corpus into the pre-trained model, use the target sample labels as the expected output, and train the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation condition includes the target corpus to be recognized, and the data generation unit is further configured to: input the target corpus to be recognized into the target data generation model to obtain its feature labels, and determine those feature labels as the target data.
  • Embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method of any of the above embodiments.
  • Embodiments of the present disclosure also provide a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any of the foregoing embodiments.
  • FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present disclosure may be applied;
  • FIG. 2 is a flowchart of one embodiment of a method for generating data according to the present disclosure;
  • FIG. 3 is a flowchart of yet another embodiment of a method for generating data according to the present disclosure;
  • FIG. 4 is a flowchart of yet another embodiment of a method for generating data according to the present disclosure;
  • FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for generating data according to the present disclosure;
  • FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • FIG. 1 illustrates an exemplary system architecture 100 of a method for generating data or an apparatus for generating data to which embodiments of the present disclosure may be applied.
  • The system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105.
  • The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
  • Users can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send data.
  • For example, a user can send raw data of the target domain to the server, and can also receive from the server the target data generated by the model.
  • The terminal devices 101, 102, and 103 may be hardware or software.
  • If they are hardware, the terminal devices 101, 102, and 103 may be electronic devices with communication functions, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, and desktop computers.
  • If they are software, the terminal devices can be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
  • The server 105 may be a server providing various services, such as a background data server that processes the raw data uploaded by the terminal devices 101, 102, and 103 (e.g., constructs training samples from the target training data).
  • The background data server can use the received raw data to adjust the pre-trained model, use the resulting data generation model to generate new data, and feed the processing result (e.g., the generated target data) back to the terminal device.
  • The server may be hardware or software.
  • If it is hardware, the server can be implemented as a distributed cluster of multiple servers or as a single server.
  • If it is software, it may be implemented as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
  • The method for generating data provided by the embodiments of the present disclosure may be executed by the terminal devices 101, 102, and 103, or by the server 105.
  • Correspondingly, the apparatus for generating data may be provided in the terminal devices 101, 102, and 103, or in the server 105. No specific limitation is imposed here.
  • With reference to FIG. 2, the method for generating data includes the following steps:
  • Step 201: Obtain target training data and target data generation conditions.
  • The target training data includes the corpus of the target domain marked with feature labels.
  • Feature labels represent the features of the corpus and can cover multiple dimensions.
  • For example, structural feature labels can represent the structural features of the corpus, intent labels its intent features, and semantic labels its semantic features.
  • The target data generation condition represents the user's expectation for the generated data; for example, it may be data containing entity information of the target domain, or data containing specific syntactic structures or semantic information.
  • In an embodiment, when an operator receives a data generation task for a certain technical field, the operator can obtain that field's target training data and target data generation conditions directly from the business party through the execution subject (for example, the server 105 shown in FIG. 1).
  • The operator can also collect real corpora of the field from the network and mark each real corpus with its corresponding feature labels to obtain the target training data.
  • The target training data may also include unlabeled corpora.
  • Step 202: Determine the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set.
  • The corpus of the target domain may include features of multiple dimensions of the domain's real corpus, such as sentence structure features, word features, and semantic features.
  • Accordingly, the target sample labels can characterize the target sample corpus from multiple dimensions; for example, the corpus can be labeled from the sentence structure dimension, the keyword dimension, or the named-entity dimension.
  • Step 203: Train the pre-trained model based on the target sample set, adjusting its parameters to obtain the retrained target data generation model.
  • The pre-trained model is obtained through the following steps: constructing an initial model and training it on a general sample set.
  • The general sample set consists of training data that is easy to obtain across various fields.
  • Models such as ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representations from Transformers), or GPT (Generative Pre-Training) can be selected as the initial model.
  • The pre-trained model obtained by training the initial model on the general sample set can learn basic data generation rules (for example, it can generate coherent, realistic corpora), but for fields where data acquisition is difficult, the similarity between the data it generates and the field's real data is low.
  • The pre-trained model is therefore retrained on the target sample set, with its parameters adjusted so that it learns the rules for generating data in the target field; the data generated by the retrained target data generation model is thus closer to the real data.
  • Step 204: Use the target data generation model to generate target data based on the target data generation conditions.
  • The target data generation model represents the correspondence between target data generation conditions and target data.
  • As a specific example, suppose the amount of data in a particular field is small; to augment the data in this field, the field can be taken as the target field.
  • The execution subject can build a general sample set from public data on the Internet (for example, Chinese novels or dialogue material) and train an initial GPT model on it; the resulting pre-trained GPT model can generate coherent, realistic sentences.
  • The execution subject can then obtain a corpus of the target domain as the target training data and construct the target sample set, retrain the GPT model on that set, and adjust the GPT model's parameters so that it learns the generation rules of the target domain's real corpus.
  • The GPT model obtained after this training is the target data generation model.
  • Finally, the execution subject can obtain the target data generation conditions (such as keyword labels, sentence structure labels, or semantic labels) and input these labels into the GPT model, which generates new corpora, thereby expanding the amount of data in the target field.
  • Table 1 shows the target training data (including input labels and training corpus) and the target corpus generated by GPT in this example.
  • In this way, the pre-trained model is retrained with a small amount of target-domain data so that the resulting data generation model learns the domain's data generation rules, augmenting the data while improving the authenticity and pertinence of the generated data.
  • FIG. 3 is a flowchart of another embodiment of the method for generating data according to the present disclosure.
  • The following steps are included:
  • Step 301: Obtain target training data and target data generation conditions.
  • Step 302: Determine the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set. Steps 301 and 302 are similar to the foregoing steps 201 and 202 and are not repeated here.
  • Step 303: Input the target sample labels into the pre-trained model, use the target sample corpus as the expected output, and train the pre-trained model to obtain the target data generation model.
  • The target sample labels can represent the characteristics of the target sample corpus.
  • The pre-trained model takes the target sample label as a conditional label and uses it to constrain the corpus generation process; the loss function is then determined by comparing the target sample corpus with the generated corpus, yielding the target data generation model.
  • The target data generation model characterizes the correspondence between conditional labels and generated corpora.
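  • The following is a minimal fine-tuning sketch of this step, using the Hugging Face transformers library with an English GPT-2 as a stand-in for the pre-trained model; the disclosure does not fix a particular architecture, tokenizer, or label format, and the "label = corpus" concatenation is an illustrative choice:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def train_step(condition, expected_output):
    # Join the conditional label and the expected corpus into one sequence;
    # the causal LM loss compares the generated continuation with the
    # expected corpus token by token.
    text = condition + " = " + expected_output + tokenizer.eos_token
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# One target sample: conditional label -> expected corpus.
train_step("air conditioner; purchase", "How do I buy a green air conditioner?")
```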
  • Step 304: Input the target feature labels into the target data generation model to obtain the target corpus.
  • The target data generation conditions include target feature labels, which represent the user's expectations of the generated corpus in one or more dimensions.
  • The execution subject inputs the target feature labels into the target data generation model as its conditional labels, constraining the corpus generation process so that the generated target corpus meets the user's expectations.
  • Step 305: Determine the target corpus as the target data.
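  • Purely as an illustration, steps 304 and 305 then amount to prompting the fine-tuned model from the sketch above with the target feature label alone, using the same label format as during fine-tuning:

```python
# Reuses `tokenizer` and `model` from the fine-tuning sketch above.
model.eval()
prompt = tokenizer("air conditioner; purchase = ", return_tensors="pt").input_ids
out = model.generate(prompt, max_new_tokens=30, do_sample=True, top_p=0.9,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # the target corpus
```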
  • As can be seen, the flow 300 of the method for generating data in this embodiment highlights the step of generating corpus data with the target data generation model.
  • The method described in this embodiment therefore needs only a small amount of target-field training data to ensure that the generated corpus is close to the real corpus of the target field, enabling more targeted data augmentation.
  • In some embodiments, the target feature label is a classification label estimated by a pre-built classification model from a corpus to be recognized. Before the target corpus is determined as the target data (step 306), the above process 300 may further include: inputting the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification label is included in the classification model's preset label set, determining the target corpus as the target data, which is used to construct training samples for the classification model.
  • In this embodiment, the target data generation model is used to expand the classification model's training data: if the corpus generated by the target data generation model can be correctly identified by the classification model, the authenticity of the generated corpus meets the classification model's training requirements.
  • The amount of training data is positively related to model accuracy, so ensuring the classification model's accuracy requires a sufficiently large corpus of classification samples; in some specific fields, however, a larger corpus is difficult to obtain.
  • Such a field can be taken as the target field: the execution subject constructs the target sample set from the small number of classification samples obtained, derives the target data generation model through retraining, inputs the classification model's sample classification labels into the target data generation model to obtain a target corpus, and then inputs that corpus into the classification model. If the classification label output by the classification model is consistent with the sample classification label, the authenticity of the target corpus meets the classification model's training requirements, and the resulting target data can effectively expand the classification model's sample data.
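  • A sketch of this verification loop, assuming a trained classifier object with a hypothetical predict(text) -> label method standing in for whatever classification model is being expanded:

```python
def filter_generated(candidates, classifier, preset_labels):
    """Keep (corpus, label) pairs whose corpus the classifier recognizes."""
    accepted = []
    for sample_label, corpus in candidates:
        predicted = classifier.predict(corpus)
        # The generated corpus is kept only if the predicted label lies in
        # the preset label set and recovers the conditioning sample label.
        if predicted in preset_labels and predicted == sample_label:
            accepted.append((corpus, sample_label))  # new training sample
    return accepted
```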
  • A corpus can correspond to multiple feature labels, each representing the corpus's features from a different dimension.
  • Accordingly, multiple target sample labels can be input into the pre-trained model at the same time, so that the target data generation model learns data generation rules for multiple dimensions.
  • Likewise, the target data generation conditions may include target feature labels of multiple dimensions, each representing a data generation condition in one dimension.
  • In this way, the execution subject can constrain the corpus generation process from multiple dimensions, realizing data augmentation that integrates those dimensions.
  • For example, the target data generation conditions may simultaneously include intent labels, structure labels, entity labels, and technical field labels, which respectively express the user's expectations of the generated corpus in the intent, structure, entity, and technical field dimensions.
  • The execution subject can input these feature labels into the target data generation model at the same time, constraining the corpus generation process along all of these dimensions to obtain a target corpus that meets the user's needs.
  • Suppose a user uses the target data generation model to expand corpus data in the air conditioner field. The user can set the target data generation conditions to "air conditioner", "green", and "purchase" according to their needs, where "air conditioner" is the domain label, "green" is the entity label, and "purchase" is the intent label.
  • The execution subject inputs these three feature labels into the target data generation model at the same time to generate the target corpus.
  • The generated target corpus can be "I want to buy a green air conditioner", "How to buy a green air conditioner", and so on.
  • Table 2 shows the correspondence between multi-dimensional labels, training corpus, and target corpus in this example.
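  • Multi-dimensional conditioning fits the same sketch given earlier: the domain, entity, and intent tags are simply joined into one condition string before being fed to the generation model; the separator and tag ordering are illustrative choices, not specified by the disclosure:

```python
# Reuses `tokenizer` and `model` from the fine-tuning sketch above.
tags = {"domain": "air conditioner", "entity": "green", "intent": "purchase"}
condition = "; ".join(tags.values())  # "air conditioner; green; purchase"
prompt = tokenizer(condition + " = ", return_tensors="pt").input_ids
out = model.generate(prompt, max_new_tokens=30, do_sample=True,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```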
  • With further reference to FIG. 4, the process 400 of the method for generating data includes the following steps:
  • Step 401: Obtain target training data and target data generation conditions.
  • Step 402: Determine the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set. Steps 401 and 402 are similar to the aforementioned steps 201 and 202 and are not described again here.
  • Step 403: Input the target sample corpus into the pre-trained model, use the target sample labels as the expected output, and train the pre-trained model to obtain the target data generation model.
  • Step 404: Input the target corpus to be recognized into the target data generation model to obtain the feature labels of the target corpus to be recognized.
  • In this embodiment, the target data generation condition includes the target corpus to be recognized.
  • The target data generation model characterizes the correspondence between corpora and labels.
  • The execution subject inputs the target corpus to be recognized into the target data generation model, which identifies the features of that corpus and outputs target feature labels representing them.
  • Step 405: Determine the feature labels of the target corpus to be recognized as the target data.
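  • A sketch of this reversed setup, reusing train_step from the earlier fine-tuning sketch with the input and output roles swapped; the "corpus = label" format is again an illustrative choice, not one fixed by the disclosure:

```python
# The corpus is now the condition and the feature label the expected output.
model.train()
train_step("How do I buy a green air conditioner?", "air conditioner; purchase")

model.eval()
prompt = tokenizer("I need a quiet air conditioner for my bedroom = ",
                   return_tensors="pt").input_ids
out = model.generate(prompt, max_new_tokens=10,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # predicted labels
```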
  • As can be seen, the process 400 of the method for generating data in this embodiment embodies the step of identifying the feature labels of a corpus through the target data generation model.
  • This is useful in scenarios where the amount of corpus data is large but only a small number of labels are available.
  • The method in this embodiment needs only a small amount of target-field training data to ensure the accuracy of recognition, enabling more effective data augmentation.
  • As an implementation of the methods shown above, the present disclosure provides an embodiment of an apparatus for generating data.
  • The apparatus embodiment corresponds to the method embodiment shown in FIG. 2.
  • The apparatus can be applied to various electronic devices.
  • The apparatus 500 for generating data in this embodiment includes: a data acquisition unit 501 configured to acquire target training data and target data generation conditions, the target training data including a corpus of the target domain marked with feature labels;
  • a sample construction unit 502 configured to determine the corpus marked with feature labels in the target training data as the target sample corpus, and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set;
  • a model adjustment unit 503 configured to train the pre-trained model based on the target sample set, adjusting its parameters to obtain the retrained target data generation model, where the pre-trained model is obtained by constructing an initial model and training it on a general sample set;
  • and a data generation unit 504 configured to use the target data generation model to generate target data based on the target data generation conditions.
  • In some embodiments, the model adjustment unit 503 is further configured to: input the target sample labels into the pre-trained model, use the target sample corpus as the expected output, and train the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation conditions include target feature labels, and the data generation unit 504 is further configured to: input the target feature labels into the target data generation model to obtain the target corpus, and determine the target corpus as the target data.
  • In some embodiments, the target feature label is a classification label estimated by a pre-built classification model from the corpus to be recognized, and the data generation unit 504 further includes a data verification module configured to: input the target corpus into the classification model to obtain the classification label of the target corpus; and, in response to determining that the classification model's preset label set includes that classification label, determine the target corpus as the target data, which is used to construct training samples for the classification model.
  • In some embodiments, the model adjustment unit 503 is further configured to: input the target sample corpus into the pre-trained model, use the target sample labels as the expected output, and train the pre-trained model to obtain the target data generation model.
  • In some embodiments, the target data generation conditions include the target corpus to be recognized, and the data generation unit 504 is further configured to: input the target corpus to be recognized into the target data generation model to obtain its feature labels, and determine the feature labels of the target corpus to be recognized as the target data.
  • Referring now to FIG. 6, it shows a schematic structural diagram of an electronic device (e.g., the server or terminal device in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), and PADs (tablet computers), as well as fixed terminals such as digital TVs and desktop computers.
  • The terminal device shown in FIG. 6 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • As shown in FIG. 6, the electronic device 600 may include a processing device (e.g., a central processing unit or graphics processor) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • The processing device 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 608 including, for example, magnetic tapes and hard disks; and a communication device 609.
  • The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 6 shows the electronic device 600 with various components, it should be understood that not all of the illustrated components are required to be implemented or provided; more or fewer may be implemented or provided instead. Each block shown in FIG. 6 may represent one device or multiple devices as required.
  • In particular, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • The computer program may be downloaded and installed from the network via the communication device 609, or installed from the storage device 608 or the ROM 602.
  • When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are carried out.
  • The computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal in baseband or propagated as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to electrical wire, optical fiber cable, RF (radio frequency), or any suitable combination of the foregoing.
  • The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain target training data and target data generation conditions, the target training data including a corpus of the target domain marked with feature labels; determine the corpus marked with feature labels in the target training data as the target sample corpus and the feature labels of the target sample corpus as the target sample labels, to obtain the target sample set; train the pre-trained model based on the target sample set, adjusting its parameters to obtain the retrained target data generation model, where the pre-trained model is obtained by constructing an initial model and training it on a general sample set; and use the target data generation model to generate target data based on the target data generation conditions.
  • Computer program code for carrying out the operations of embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • The described units can also be provided in a processor, which can be described, for example, as: a processor including a data acquisition unit, a sample construction unit, a model adjustment unit, and a data generation unit. The names of these units do not, in some cases, limit the units themselves.
  • For example, the data acquisition unit can also be described as "a unit that acquires target training data and target data generation conditions".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure relate to a method and apparatus for generating data. A specific embodiment of the method comprises: acquiring target training data and a target data generation condition, the target training data comprising corpora of the target field, each corpus being marked with a feature label; constructing a target sample set on the basis of the target training data; training a pre-trained model on the basis of the target sample set and adjusting parameters of the pre-trained model to obtain a retrained target data generation model, the pre-trained model being obtained by the following steps: constructing an initial model and training the initial model on the basis of a general sample set to obtain the pre-trained model; and using the target data generation model to generate target data on the basis of the target data generation condition.
PCT/CN2022/070250 2021-03-30 2022-01-05 Method and apparatus for generating data WO2022206091A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110340188.4 2021-03-30
CN202110340188.4A CN115146624A (zh) Method and apparatus for generating data

Publications (1)

Publication Number Publication Date
WO2022206091A1 true WO2022206091A1 (fr) 2022-10-06

Family

ID=83403542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/070250 WO2022206091A1 (fr) Method and apparatus for generating data

Country Status (2)

Country Link
CN (1) CN115146624A (fr)
WO (1) WO2022206091A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170358295A1 (en) * 2016-06-10 2017-12-14 Conduent Business Services, Llc Natural language generation, a hybrid sequence-to-sequence approach
CN111339278A (zh) * 2020-02-28 2020-06-26 支付宝(杭州)信息技术有限公司 Method and apparatus for training a response-script generation model and generating response scripts
CN111898369A (zh) * 2020-08-17 2020-11-06 腾讯科技(深圳)有限公司 Article title generation method, model training method, apparatus, and electronic device
CN112182210A (zh) * 2020-09-25 2021-01-05 四川华空天行科技有限公司 Language generation model based on an essay-argument feature classifier and writing support method
CN112541346A (zh) * 2020-12-24 2021-03-23 北京百度网讯科技有限公司 Abstract generation method and apparatus, electronic device, and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116029492A (zh) * 2022-12-01 2023-04-28 广州云趣信息科技有限公司 Work order dispatching method and apparatus
CN116029492B (zh) * 2022-12-01 2023-12-01 广州云趣信息科技有限公司 Work order dispatching method and apparatus

Also Published As

Publication number Publication date
CN115146624A (zh) 2022-10-04

Similar Documents

Publication Publication Date Title
JP7208952B2 (ja) Method and apparatus for generating a dialogue model
CN107491534B (zh) Information processing method and apparatus
US10698932B2 (en) Method and apparatus for parsing query based on artificial intelligence, and storage medium
US11775761B2 (en) Method and apparatus for mining entity focus in text
US20180144749A1 (en) Speech recognition apparatus and method
CN107861954B (zh) Artificial-intelligence-based information output method and apparatus
US9916395B2 (en) Determining answer stability in a question answering system
US11586817B2 (en) Word vector retrofitting method and apparatus
US20230023789A1 (en) Method for identifying noise samples, electronic device, and storage medium
US11321534B2 (en) Conversation space artifact generation using natural language processing, machine learning, and ontology-based techniques
CN111563390B (zh) Text generation method and apparatus, and electronic device
WO2022174496A1 (fr) Data annotation method and apparatus based on a generative model, device, and storage medium
WO2022156434A1 (fr) Text generation method and apparatus
CN111666416A (zh) Method and apparatus for generating a semantic matching model
US11036996B2 (en) Method and apparatus for determining (raw) video materials for news
CN111144124A (zh) Machine learning model training method, intent recognition method, and related apparatus and device
WO2023274187A1 (fr) Information processing method and apparatus based on natural language inference, and electronic device
WO2022105536A1 (fr) Page generation method and apparatus
JP7178394B2 (ja) Method, apparatus, device, and medium for processing audio signals
WO2023005968A1 (fr) Text category recognition method and apparatus, electronic device, and storage medium
KR20200080400A (ko) Method for providing sentences on the basis of a persona, and electronic device supporting same
CN113434683A (zh) Text classification method and apparatus, medium, and electronic device
WO2022206091A1 (fr) Method and apparatus for generating data
CN115062617A (zh) Task processing method and apparatus based on prompt learning, device, and medium
KR20210028041A (ko) Electronic device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 140224)