CN111104514B - Training method and device for document tag model - Google Patents
- Publication number
- CN111104514B (application number CN201911338269.XA)
- Authority
- CN
- China
- Prior art keywords
- model
- sub
- recall
- document
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
Abstract
The application discloses a training method and device for a document tag model, and relates to the technical field of document tag prediction. The specific implementation scheme is as follows: obtaining a pre-trained document tag model, where the document tag model is obtained by pre-training on general training data from each application scene; acquiring scene training data of an application scene to be applied, where the scene training data comprises a plurality of documents and the corresponding tag information under the application scene to be applied; obtaining the sub-models related to the application scene to be applied in the document tag model; and training those sub-models with the scene training data to obtain a trained document tag model. In this way, the training data required to train the document tag model in the application scene to be applied can be reduced, lowering the training cost while ensuring the accuracy of the document tag model.
Description
Technical Field
The application relates to the technical field of data processing, in particular to the technical field of document tag prediction, and more particularly to a training method and device for a document tag model.
Background
Currently, tag prediction for documents is an important task in understanding document content. For a new document tag prediction scene, there are two main approaches. One is to train a general document tag model: the model is trained without considering the differences among scenes, and the same general document tag model is used in all scenes. The other is to train a document tag model separately: training data is prepared specifically for the new scene.
With the first approach, the trained model lacks scene- or domain-specific targeting, so its prediction accuracy in any single scene is low. With the second approach, a large amount of training data must be prepared, and the training cost is high.
Disclosure of Invention
According to the training method and device for a document tag model provided herein, the sub-models in a pre-trained document tag model that are related to the application scene to be applied are trained with scene training data of that scene, so that the training cost of the document tag model in the application scene to be applied is reduced while the accuracy of the document tag model is ensured.
In one aspect, an embodiment of the present application provides a training method for a document tag model, including:
obtaining a pre-trained document tag model, wherein the document tag model is obtained by pre-training on general training data from each application scene;
acquiring scene training data of an application scene to be applied, wherein the scene training data comprises: a plurality of documents and the corresponding tag information under the application scene to be applied;
acquiring a sub-model related to the application scene to be applied in the document tag model;
and training the sub-model by adopting the scene training data to obtain a trained document tag model.
In one embodiment of the present application, the document tag model comprises: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer;
the candidate recall layer comprises: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model which are connected in parallel;
the coarse ranking layer comprises: a rule sub-model and a semantic matching sub-model which are connected in parallel;
the sub-models related to the application scene to be applied comprise: the semantic matching sub-model, plus any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model.
In one embodiment of the present application, when the sub-models related to the application scene to be applied comprise the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, training the sub-models with the scene training data to obtain a trained document tag model comprises the following steps:
for each document in the scene training data, inputting the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merging the output results to obtain a candidate tag result;
inputting the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result;
and adjusting the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag in the candidate tag result and the tag information corresponding to the document, so as to obtain a trained document tag model.
In one embodiment of the present application, the scene training data further comprises a tag set, the tag set comprising the tags that the document tag model is able to predict, so that the document tag model can, in combination with the tag set, perform tag prediction on the documents in the scene training data.
In an embodiment of the present application, before training the sub-model by using the scene training data to obtain a trained document tag model, the method further includes:
initializing coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document label model.
According to the training method of the document tag model, a pre-trained document tag model is obtained, where the document tag model is obtained by pre-training on general training data from each application scene; scene training data of an application scene to be applied is acquired, where the scene training data comprises a plurality of documents and the corresponding tag information under the application scene to be applied; the sub-models related to the application scene to be applied in the document tag model are obtained; and those sub-models are trained with the scene training data to obtain a trained document tag model. In this way, the training data required to train the document tag model in the application scene to be applied can be reduced, and the training cost is lowered while the accuracy of the document tag model is ensured.
In another aspect, an embodiment of the present application provides a training device for a document tag model, including:
the acquisition module is used for acquiring a pre-trained document tag model, wherein the document tag model is obtained by pre-training universal training data of each application scene;
the acquisition module is further configured to acquire scene training data of an application scene to be applied, where the scene training data includes: a plurality of documents and the corresponding tag information under the application scene to be applied;
the acquisition module is further used for acquiring a sub-model related to the application scene to be applied in the document tag model;
and the training module is used for training the sub-model by adopting the scene training data to obtain a trained document label model.
In one embodiment of the present application, the document tag model comprises: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer;
the candidate recall layer comprises: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model which are connected in parallel;
the coarse ranking layer comprises: a rule sub-model and a semantic matching sub-model which are connected in parallel;
the sub-models related to the application scene to be applied comprise: the semantic matching sub-model, plus any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model.
In one embodiment of the present application, when the sub-models related to the application scene to be applied comprise the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training module is specifically configured for:
for each document in the scene training data, inputting the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merging the output results to obtain a candidate tag result;
inputting the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result;
and adjusting the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag in the candidate tag result and the tag information corresponding to the document, so as to obtain a trained document tag model.
In one embodiment of the present application, the scene training data further comprises a tag set, the tag set comprising the tags that the document tag model is able to predict, so that the document tag model can, in combination with the tag set, perform tag prediction on the documents in the scene training data.
In one embodiment of the present application, the apparatus further comprises: an initialization module, configured to initialize the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document tag model.
According to the training device for the document tag model, a pre-trained document tag model is obtained, where the document tag model is obtained by pre-training on general training data from each application scene; scene training data of an application scene to be applied is acquired, where the scene training data comprises a plurality of documents and the corresponding tag information under the application scene to be applied; the sub-models related to the application scene to be applied in the document tag model are obtained; and those sub-models are trained with the scene training data to obtain a trained document tag model. In this way, the training data required to train the document tag model in the application scene to be applied can be reduced, and the training cost is lowered while the accuracy of the document tag model is ensured.
Another embodiment of the present application proposes an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the training method of the document tag model according to the embodiment of the application.
Another aspect of the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of training a document tag model of the embodiments of the present application.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are intended to aid understanding of the present solution and do not constitute a limitation of the present application. In the drawings:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a document tag model structure;
FIG. 3 is a schematic diagram according to a second embodiment of the present application;
FIG. 4 is a schematic diagram according to a third embodiment of the present application;
FIG. 5 is a block diagram of an electronic device for implementing a training method for a document tag model of an embodiment of the present application;
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes a training method and device for a document tag model according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present application. It should be noted that, the execution body of the training method of the document tag model provided in this embodiment is a training device of the document tag model, and the device may be implemented in a software and/or hardware manner, and the device may be configured in a terminal device or a server, and this embodiment is not limited specifically.
As shown in fig. 1, the training method of the document tag model may include the following steps.
Step 101, obtaining a pre-trained document tag model, where the document tag model is obtained by pre-training on general training data from each application scene.
Step 102, acquiring scene training data of the application scene to be applied, where the scene training data includes a plurality of documents and the corresponding tag information under the application scene to be applied.
In this application, a schematic diagram of the document tag model structure may be shown in fig. 2. In fig. 2, the document tag model includes: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer. The preprocessing layer is used for performing processing such as paragraph segmentation, sentence segmentation, word segmentation, part-of-speech (POS) tagging and named entity recognition (NER) on the document to obtain a preprocessing result. The preprocessing result includes: the segmentation result, the sentence result, the word segmentation result, the part-of-speech tagging result and the named entity recognition result.
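The preprocessing layer's outputs can be sketched as follows. This is a minimal illustration in which the tokenizer, the POS tagger, and the NER component are naive placeholders; the patent names these processing steps but does not specify concrete implementations, so every heuristic below is an assumption:

```python
import re

def preprocess(document):
    """Sketch of the preprocessing layer: produces the five kinds of
    preprocessing results named in the text. The splitting rules, the
    title-case POS/NER heuristics, and the key names are placeholders."""
    paragraphs = [p for p in document.split("\n") if p.strip()]            # segmentation result
    sentences = [s for s in re.split(r"[.!?]\s*", document) if s]          # sentence result
    tokens = document.split()                                              # word segmentation result
    pos_tags = [(t, "NOUN" if t[:1].isupper() else "X") for t in tokens]   # placeholder POS tags
    entities = [t for t in tokens if t.istitle()]                          # placeholder NER result
    return {"segments": paragraphs, "sentences": sentences,
            "tokens": tokens, "pos": pos_tags, "ner": entities}

result = preprocess("Baidu released a model. It predicts tags.")
print(sorted(result))  # the five preprocessing result keys
```

In the model described here, this dictionary would then be passed, together with the raw document, to each of the four recall sub-models.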
The candidate recall layer includes a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model, connected in parallel. The input to each of the four recall sub-models is a document together with the preprocessing result corresponding to that document; the output of each is a set of candidate tags. The output results of the four recall sub-models are merged to obtain a candidate tag result. The keyword recall sub-model determines candidate tags by analyzing the semantic structure and statistical characteristics of the document. The multi-label classification recall sub-model determines candidate tags based on neural-network multi-label classification. The explicit recall sub-model determines candidate tags based on literal matching and frequency screening. The implicit recall sub-model determines candidate tags based on principal component analysis.
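A minimal sketch of the candidate recall layer's merge step, with toy stand-ins for the recall sub-models. Each stand-in function below is hypothetical (a keyword filter, a hard-coded classifier, a literal vocabulary match, and a topical trigger); only the structure — four parallel sub-models whose outputs are merged by union — reflects the text:

```python
def keyword_recall(doc):      # stand-in: semantic-structure / statistical keyword extraction
    return {w for w in doc.split() if len(w) > 6}

def multilabel_recall(doc):   # stand-in: neural-network multi-label classifier output
    return {"technology"} if "model" in doc else set()

def explicit_recall(doc):     # stand-in: literal matching plus frequency screening
    vocab = {"document", "tag", "model"}
    return {w for w in doc.split() if w in vocab}

def implicit_recall(doc):     # stand-in: principal-component-style topical recall
    return {"machine learning"} if "train" in doc else set()

def candidate_recall_layer(doc):
    # Merge (union) the outputs of the four parallel recall sub-models.
    return (keyword_recall(doc) | multilabel_recall(doc)
            | explicit_recall(doc) | implicit_recall(doc))

print(sorted(candidate_recall_layer("train a document tag model")))
```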
The coarse ranking layer includes a rule sub-model and a semantic matching sub-model, connected in parallel. The rule sub-model is used for determining, according to preset rules, the candidate tags in the candidate tag result that should be filtered out. The semantic matching sub-model is used for determining the text relatedness between the document and each candidate tag in the candidate tag result, and determining the candidate tags to be filtered according to that relatedness. The candidate tags to be filtered are then removed from the candidate tag result to obtain a filtered candidate tag result. Text relatedness refers to the semantic-level similarity between the text and a candidate tag.
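The coarse ranking layer's filtering can be sketched as below. The banned-tag rule, the token-overlap relatedness measure, and the 0.5 threshold are all illustrative assumptions; the patent specifies only that a rule sub-model and a semantic matching sub-model each nominate candidates for removal:

```python
def rule_filter(candidates, banned=frozenset({"spam"})):
    # Rule sub-model stand-in: candidates hit by a preset rule are filtered.
    return {c for c in candidates if c in banned}

def text_relatedness(doc_tokens, tag):
    # Stand-in for the semantic matching sub-model: token overlap in
    # place of a learned semantic-level similarity.
    tag_tokens = set(tag.split())
    return len(tag_tokens & set(doc_tokens)) / len(tag_tokens)

def coarse_ranking_layer(doc, candidates, threshold=0.5):
    doc_tokens = doc.split()
    to_filter = rule_filter(candidates)                       # rule sub-model
    to_filter |= {c for c in candidates                       # semantic matching sub-model
                  if text_relatedness(doc_tokens, c) < threshold}
    return candidates - to_filter                             # filtered candidate tag result

kept = coarse_ranking_layer("train a document tag model", {"document", "spam", "robot"})
print(sorted(kept))  # only "document" survives both filters
```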
The fine ranking layer is used for ranking the candidate tags in the filtered candidate tag result according to their text relatedness, tag heat and tag granularity, and predicting the tag information corresponding to the document from the ranking result. Tag heat refers to how much attention users pay to a candidate tag, such as its search popularity. Tag granularity is computed from the constituent word types and the length of a candidate tag: the more detailed the content of the candidate tag, the smaller its granularity. For example, ordered by decreasing tag granularity: Baidu -> Baidu Union Summit; Entertainment -> Entertainment Star.
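A sketch of the fine ranking layer under an assumed equal-weight scoring formula. The text names the three ranking signals — text relatedness, tag heat, and tag granularity — but not how they are combined, so the word-count granularity and the additive score below are illustrative assumptions:

```python
def tag_granularity(tag):
    # Per the text, granularity is computed from a tag's constituent
    # words and length: longer, more specific tags get a smaller value.
    return 1.0 / len(tag.split())

def fine_ranking_layer(scored_tags, top_k=2):
    """scored_tags: list of (tag, text_relatedness, tag_heat) triples.
    The equal-weight combined score is an assumed formula; finer-grained
    (more specific) tags are ranked higher, all else being equal."""
    def score(item):
        tag, relatedness, heat = item
        return relatedness + heat + (1.0 - tag_granularity(tag))
    ranked = sorted(scored_tags, key=score, reverse=True)
    return [tag for tag, _, _ in ranked[:top_k]]

tags = [("entertainment", 0.6, 0.9), ("entertainment star", 0.8, 0.5)]
print(fine_ranking_layer(tags))  # the finer-grained tag ranks first here
```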
In this application, example application scenes include entity-focused tag prediction for long documents, accuracy-focused tag prediction for questions and answers, recall-focused tag prediction for user-generated content, and so on. The prediction object may include long documents, questions and answers, user-generated content, etc. The prediction requirement may emphasize, for example, recall, accuracy, entities, classification, or commercial value.
In this application, the general training data of each application scene may refer, for example, to training data obtained by combining the training data of the individual application scenes. Before the application scene to be applied is determined, a large amount of such general training data can be used to pre-train the initial document tag model, so that once the application scene to be applied is determined, the amount of training data needed in that scene is reduced.
Step 103, obtaining the sub-models related to the application scene to be applied in the document tag model.
In the present application, the sub-models related to the application scene to be applied include: the semantic matching sub-model, plus any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model. The sub-models to be retrained or fine-tuned can be selected from these sub-models according to the specific application scene to be applied.
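The selection of scene-relevant sub-models can be sketched as a simple set operation. The sub-model names below are descriptive labels, not identifiers from the patent; the constraint encoded is the one the text states — the semantic matching sub-model is always selected, together with one or more of the three learnable recall sub-models:

```python
# The three recall sub-models that may be retrained per scene; the
# keyword recall and rule sub-models are not listed as scene-trainable.
LEARNABLE_RECALLS = {"multilabel_recall", "explicit_recall", "implicit_recall"}

def select_trainable(scene_recalls):
    """Return the set of sub-models to retrain or fine-tune for a scene:
    the semantic matching sub-model plus the chosen recall sub-models."""
    chosen = set(scene_recalls)
    if not chosen or not chosen <= LEARNABLE_RECALLS:
        raise ValueError("pick one or more of %s" % sorted(LEARNABLE_RECALLS))
    return {"semantic_matching"} | chosen

print(sorted(select_trainable({"multilabel_recall", "implicit_recall"})))
```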
Step 104, training the sub-models with the scene training data to obtain a trained document tag model.
In the present application, when the sub-models related to the application scene to be applied include the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training device of the document tag model may perform step 104 as follows: for each document in the scene training data, input the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merge the output results to obtain a candidate tag result; input the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result; and adjust the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag and the tag information corresponding to the document, so as to obtain a trained document tag model.
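The training procedure described above can be sketched as a toy loop: merge the candidates produced by the recall sub-models, score each candidate against the document, and adjust a coefficient toward the ground-truth tag information. The token-overlap feature and the perceptron-style update rule are assumptions standing in for the unspecified semantic matching model and optimizer:

```python
def overlap(doc, tag):
    # Stand-in relevance feature for the semantic matching sub-model.
    return len(set(tag.split()) & set(doc.split())) / len(tag.split())

def train_step(doc, true_tags, recall_fns, weights, lr=0.5):
    """One sketch training step: merge recall outputs, compute relevance
    with the current coefficient, then nudge the coefficient so candidates
    present in the document's tag information score closer to 1."""
    candidates = set()
    for fn in recall_fns:                  # merge parallel recall outputs
        candidates |= fn(doc)
    for tag in candidates:                 # semantic matching + adjustment
        relevance = weights["semantic"] * overlap(doc, tag)
        target = 1.0 if tag in true_tags else 0.0
        weights["semantic"] += lr * (target - relevance) * overlap(doc, tag)
    return weights

recalls = [lambda d: {"tag model"}, lambda d: {"cooking"}]
w = {"semantic": 0.0}
for _ in range(20):  # repeated passes over a single toy training document
    w = train_step("train a document tag model", {"tag model"}, recalls, w)
print(round(w["semantic"], 2))  # coefficient converges toward 1.0
```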
In this application, in order to improve the accuracy of the trained document tag model, the scene training data may further include a tag set, the tag set comprising the tags that the document tag model is able to predict, so that the document tag model can, in combination with the tag set, perform tag prediction on the documents in the scene training data.
In this application, before step 104, the method may further include: initializing the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document tag model, so as to avoid interference from the pre-trained coefficients when these sub-models are trained in the application scene to be applied, thereby further improving the accuracy of the document tag model in that scene.
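The initialization step can be sketched as follows, assuming each sub-model's coefficients are a plain list of floats. Uniform random re-initialization with a fixed seed is an assumption; the patent says only that the selected sub-models' coefficients are initialized before scene training:

```python
import random

def reinitialize(model_coeffs, submodels_to_reset, seed=0):
    """Reset the coefficients of the selected recall sub-models to small
    random values before scene training, so the pre-trained values cannot
    interfere. Sub-models not listed (e.g. semantic matching) keep their
    pre-trained coefficients."""
    rng = random.Random(seed)
    for name in submodels_to_reset:
        model_coeffs[name] = [rng.uniform(-0.01, 0.01)
                              for _ in model_coeffs[name]]
    return model_coeffs

coeffs = {"multilabel_recall": [0.8, -0.3], "semantic_matching": [0.5]}
coeffs = reinitialize(coeffs, ["multilabel_recall"])
print(all(abs(c) <= 0.01 for c in coeffs["multilabel_recall"]))  # True
```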
According to the training method of the document tag model, a pre-trained document tag model is obtained, where the document tag model is obtained by pre-training on general training data from each application scene; scene training data of an application scene to be applied is acquired, where the scene training data comprises a plurality of documents and the corresponding tag information under the application scene to be applied; the sub-models related to the application scene to be applied in the document tag model are obtained; and those sub-models are trained with the scene training data to obtain a trained document tag model. In this way, the training data required to train the document tag model in the application scene to be applied can be reduced, and the training cost is lowered while the accuracy of the document tag model is ensured.
In order to achieve the above embodiment, the embodiment of the present application further provides a training device for a document tag model.
Fig. 3 is a schematic diagram according to a second embodiment of the present application. As shown in fig. 3, the training apparatus 100 of the document tag model includes:
an obtaining module 110, configured to obtain a pre-trained document tag model, where the document tag model is obtained by pre-training general training data of each application scenario;
the obtaining module 110 is further configured to obtain scene training data of an application scene to be applied, where the scene training data includes: the plurality of documents and the corresponding label information under the application scene to be applied;
the obtaining module 110 is further configured to obtain the sub-models related to the application scene to be applied in the document tag model;
and the training module 120 is configured to train the sub-model by using the scene training data to obtain a trained document tag model.
In one embodiment of the present application, the document tag model comprises: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer;
the candidate recall layer comprises: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model which are connected in parallel;
the coarse ranking layer comprises: a rule sub-model and a semantic matching sub-model which are connected in parallel;
the sub-models related to the application scene to be applied comprise: the semantic matching sub-model, plus any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model.
In one embodiment of the present application, when the sub-models related to the application scene to be applied comprise the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training module 120 is specifically configured for:
for each document in the scene training data, inputting the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merging the output results to obtain a candidate tag result;
inputting the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result;
and adjusting the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag in the candidate tag result and the tag information corresponding to the document, so as to obtain a trained document tag model.
In one embodiment of the present application, the scene training data further comprises a tag set, the tag set comprising the tags that the document tag model is able to predict, so that the document tag model can, in combination with the tag set, perform tag prediction on the documents in the scene training data.
In one embodiment of the present application, referring to fig. 4 in combination, the apparatus further includes: an initialization module 130, configured to initialize the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document tag model.
It should be noted that the foregoing explanation of the training method of the document tag model is also applicable to the training device of the document tag model in this embodiment, and will not be repeated here.
According to the training device for the document tag model, a pre-trained document tag model is obtained, where the document tag model is obtained by pre-training on general training data from each application scene; scene training data of an application scene to be applied is acquired, where the scene training data comprises a plurality of documents and the corresponding tag information under the application scene to be applied; the sub-models related to the application scene to be applied in the document tag model are obtained; and those sub-models are trained with the scene training data to obtain a trained document tag model. In this way, the training data required to train the document tag model in the application scene to be applied can be reduced, and the training cost is lowered while the accuracy of the document tag model is ensured.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 5, a block diagram of an electronic device is provided for a training method of a document tag model according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 5, the electronic device includes: one or more processors 301, memory 302, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 301 is illustrated in fig. 5.
The memory 302 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 110, the training module 120, and the initialization module 130 shown in fig. 3 and fig. 4) corresponding to the training method of the document tag model in the embodiments of the present application. The processor 301 executes various functional applications of the server and data processing, i.e., implements the training method of the document tag model in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 302.
The electronic device of the method of training a document tag model may further include: an input device 303 and an output device 304. The processor 301, memory 302, input device 303, and output device 304 may be connected by a bus or other means, for example in fig. 5.
The input device 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for training the document tag model. Examples of such input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 304 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (10)
1. A method for training a document tag model, comprising:
obtaining a pre-trained document tag model, wherein the document tag model is obtained by pre-training with universal training data of various application scenes;
acquiring scene training data of an application scene to be applied, wherein the scene training data comprises: a plurality of documents and corresponding tag information under the application scene to be applied;
acquiring a sub-model related to the application scene to be applied in the document tag model;
training the sub-model using the scene training data to obtain a trained document tag model;
wherein, when the sub-models related to the application scene to be applied comprise the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, training the sub-models using the scene training data to obtain the trained document tag model comprises:
for each document in the scene training data, inputting the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merging the output results to obtain a candidate tag result;
inputting the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result;
and adjusting coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag in the candidate tag result and the tag information corresponding to the document, so as to obtain the trained document tag model.
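The training step of claim 1 can be illustrated with a toy sketch. All function names and bodies below are hypothetical stand-ins (the patent does not disclose the internals of any sub-model): three recall sub-models each propose candidate tags for a document, their outputs are merged by union, the semantic matching sub-model scores each candidate, and a shared coefficient is nudged toward the document's gold tag information.

```python
# Hypothetical illustration of the claim-1 training step; sub-model
# internals are stand-ins, not the patented implementation.

def multi_label_recall(doc):
    # stand-in: recall tags whose name appears in the document text
    return {t for t in ("sports", "tech", "finance") if t in doc}

def explicit_recall(doc):
    # stand-in: a hand-written explicit mapping
    return {"tech"} if "AI" in doc else set()

def implicit_recall(doc):
    # stand-in: an "implicit" association not literally in the text
    return {"finance"} if "stock" in doc else set()

def semantic_match(doc, tag, coeff):
    # stand-in relevance score; coeff plays the role of a trainable coefficient
    return coeff if tag in doc or (tag == "tech" and "AI" in doc) else coeff * 0.1

def training_step(doc, gold_tags, coeff, lr=0.1):
    # 1. recall candidates from the three sub-models and merge by union
    candidates = multi_label_recall(doc) | explicit_recall(doc) | implicit_recall(doc)
    # 2. score each candidate with the semantic matching sub-model
    scores = {t: semantic_match(doc, t, coeff) for t in candidates}
    # 3. adjust the coefficient toward the gold tag information
    for t, s in scores.items():
        target = 1.0 if t in gold_tags else 0.0
        coeff += lr * (target - s)
    return candidates, scores, coeff
```

A single step on the document "AI stock news about tech" with gold tag set {"tech"} recalls {"tech", "finance"} as candidates and moves the coefficient up for the gold tag and down for the spurious one.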
2. The method of claim 1, wherein the document tag model comprises: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer;
the candidate recall layer comprises: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model connected in parallel;
the coarse ranking layer comprises: a rule sub-model and a semantic matching sub-model connected in parallel;
and the sub-models related to the application scene to be applied comprise: the semantic matching sub-model and any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model.
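The four-layer architecture recited in claim 2 can be sketched as a simple pipeline. This is a hypothetical illustration only; every function body below is a stand-in, since the patent does not disclose sub-model internals: the preprocessing layer normalizes the document, the candidate recall layer runs the parallel recall sub-models and merges their outputs, the coarse ranking layer applies a rule sub-model and a semantic matching sub-model, and the fine ranking layer selects the top-scoring tags.

```python
# Hypothetical sketch of the claim-2 pipeline:
# preprocessing -> candidate recall -> coarse ranking -> fine ranking.

def preprocess(doc):
    # preprocessing layer: normalize and tokenize
    return doc.lower().split()

def candidate_recall(tokens):
    # candidate recall layer: parallel sub-models, outputs merged by union
    keyword = {t for t in tokens if t in {"sports", "tech"}}
    multi_label = {"tech"} if "model" in tokens else set()
    explicit = {"news"} if "report" in tokens else set()
    implicit = set()
    return keyword | multi_label | explicit | implicit

def coarse_rank(tokens, candidates):
    # coarse ranking layer: rule sub-model filters, semantic sub-model scores
    allowed = {t for t in candidates if len(t) > 2}              # rule
    return {t: (1.0 if t in tokens else 0.5) for t in allowed}   # semantic

def fine_rank(scored, top_k=2):
    # fine ranking layer: keep the top-k highest-scoring tags
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

def predict_tags(doc):
    tokens = preprocess(doc)
    candidates = candidate_recall(tokens)
    scored = coarse_rank(tokens, candidates)
    return fine_rank(scored)
```

The layered design lets scene-specific fine-tuning (claims 1 and 2) retrain only the recall and semantic matching sub-models while the preprocessing and rule components stay fixed.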
3. The method of claim 1, wherein the scene training data further comprises: a tag set, the tag set comprising tags that the document tag model is able to predict, so that the document tag model can predict tags for the documents in the scene training data in combination with the tag set.
4. The method of claim 1, wherein before training the sub-model using the scene training data to obtain the trained document tag model, the method further comprises:
initializing coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document label model.
5. A training device for a document tag model, comprising:
an acquisition module, configured to acquire a pre-trained document tag model, wherein the document tag model is obtained by pre-training with universal training data of various application scenes;
the acquisition module being further configured to acquire scene training data of an application scene to be applied, wherein the scene training data comprises: a plurality of documents and corresponding tag information under the application scene to be applied;
the acquisition module being further configured to acquire a sub-model related to the application scene to be applied in the document tag model; and
a training module, configured to train the sub-model using the scene training data to obtain a trained document tag model;
wherein, when the sub-models related to the application scene to be applied comprise the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training module is specifically configured to:
for each document in the scene training data, input the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merge the output results to obtain a candidate tag result;
input the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result;
and adjust coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the relevance between the document and each candidate tag in the candidate tag result and the tag information corresponding to the document, so as to obtain the trained document tag model.
6. The apparatus of claim 5, wherein the document tag model comprises: a preprocessing layer, a candidate recall layer, a coarse ranking layer and a fine ranking layer;
the candidate recall layer comprises: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model connected in parallel;
the coarse ranking layer comprises: a rule sub-model and a semantic matching sub-model connected in parallel;
and the sub-models related to the application scene to be applied comprise: the semantic matching sub-model and any one or more of the following sub-models: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model.
7. The apparatus of claim 5, wherein the scene training data further comprises: a tag set, the tag set comprising tags that the document tag model is able to predict, so that the document tag model can predict tags for the documents in the scene training data in combination with the tag set.
8. The apparatus of claim 5, further comprising: an initialization module, configured to initialize coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document tag model.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911338269.XA CN111104514B (en) | 2019-12-23 | 2019-12-23 | Training method and device for document tag model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911338269.XA CN111104514B (en) | 2019-12-23 | 2019-12-23 | Training method and device for document tag model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111104514A CN111104514A (en) | 2020-05-05 |
CN111104514B true CN111104514B (en) | 2023-04-25 |
Family
ID=70423892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911338269.XA Active CN111104514B (en) | 2019-12-23 | 2019-12-23 | Training method and device for document tag model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111104514B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111581545B (en) * | 2020-05-12 | 2023-09-19 | 腾讯科技(深圳)有限公司 | Method for sorting recall documents and related equipment |
CN111783448B (en) * | 2020-06-23 | 2024-03-15 | 北京百度网讯科技有限公司 | Document dynamic adjustment method, device, equipment and readable storage medium |
CN111782949A (en) * | 2020-06-30 | 2020-10-16 | 北京百度网讯科技有限公司 | Method and apparatus for generating information |
CN111858895B (en) * | 2020-07-30 | 2024-04-05 | 阳光保险集团股份有限公司 | Sequencing model determining method, sequencing device and electronic equipment |
CN112149733B (en) * | 2020-09-23 | 2024-04-05 | 北京金山云网络技术有限公司 | Model training method, model quality determining method, model training device, model quality determining device, electronic equipment and storage medium |
CN112560402A (en) * | 2020-12-28 | 2021-03-26 | 北京百度网讯科技有限公司 | Model training method and device and electronic equipment |
CN112784033B (en) * | 2021-01-29 | 2023-11-03 | 北京百度网讯科技有限公司 | Aging grade identification model training and application method and electronic equipment |
CN113011490B (en) * | 2021-03-16 | 2024-03-08 | 北京百度网讯科技有限公司 | Model training method and device and electronic equipment |
CN113239128B (en) * | 2021-06-01 | 2022-03-18 | 平安科技(深圳)有限公司 | Data pair classification method, device, equipment and storage medium based on implicit characteristics |
CN117456416A (en) * | 2023-11-03 | 2024-01-26 | 北京饼干科技有限公司 | Method and system for intelligently generating material labels |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015187155A1 (en) * | 2014-06-04 | 2015-12-10 | Waterline Data Science, Inc. | Systems and methods for management of data platforms |
CN108153856A (en) * | 2017-12-22 | 2018-06-12 | 北京百度网讯科技有限公司 | For the method and apparatus of output information |
CN108304439A (en) * | 2017-10-30 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of semantic model optimization method, device and smart machine, storage medium |
CN108733779A (en) * | 2018-05-04 | 2018-11-02 | 百度在线网络技术(北京)有限公司 | The method and apparatus of text figure |
CN109376222A (en) * | 2018-09-27 | 2019-02-22 | 国信优易数据有限公司 | Question and answer matching degree calculation method, question and answer automatic matching method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9836450B2 (en) * | 2014-12-09 | 2017-12-05 | Sansa AI Inc. | Methods and systems for providing universal portability in machine learning |
US20160162569A1 (en) * | 2014-12-09 | 2016-06-09 | Idibon, Inc. | Methods and systems for improving machine learning performance |
Non-Patent Citations (1)
Title |
---|
Xie Chenyang. Research on multi-label document classification based on hierarchical supervision. China Master's Theses Full-text Database, Information Science and Technology. 2019, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111104514A (en) | 2020-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111104514B (en) | Training method and device for document tag model | |
CN111428008B (en) | Method, apparatus, device and storage medium for training a model | |
US11847164B2 (en) | Method, electronic device and storage medium for generating information | |
CN111507104B (en) | Method and device for establishing label labeling model, electronic equipment and readable storage medium | |
CN111859951B (en) | Language model training method and device, electronic equipment and readable storage medium | |
CN112487814B (en) | Entity classification model training method, entity classification device and electronic equipment | |
CN110674314B (en) | Sentence recognition method and device | |
CN111859982B (en) | Language model training method and device, electronic equipment and readable storage medium | |
CN111737994A (en) | Method, device and equipment for obtaining word vector based on language model and storage medium | |
US20210200813A1 (en) | Human-machine interaction method, electronic device, and storage medium | |
CN111611468B (en) | Page interaction method and device and electronic equipment | |
CN112541076B (en) | Method and device for generating expanded corpus in target field and electronic equipment | |
US11775766B2 (en) | Method and apparatus for improving model based on pre-trained semantic model | |
US11915484B2 (en) | Method and apparatus for generating target re-recognition model and re-recognizing target | |
CN111539209B (en) | Method and apparatus for entity classification | |
CN111241819A (en) | Word vector generation method and device and electronic equipment | |
CN111078878B (en) | Text processing method, device, equipment and computer readable storage medium | |
CN111339759A (en) | Method and device for training field element recognition model and electronic equipment | |
CN111127191B (en) | Risk assessment method and risk assessment device | |
CN111259671A (en) | Semantic description processing method, device and equipment for text entity | |
CN111241234B (en) | Text classification method and device | |
CN110674260A (en) | Training method and device of semantic similarity model, electronic equipment and storage medium | |
CN111090991A (en) | Scene error correction method and device, electronic equipment and storage medium | |
CN111310058B (en) | Information theme recommendation method, device, terminal and storage medium | |
CN111984775A (en) | Question and answer quality determination method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||