CN115017915A - Model training and task executing method and device - Google Patents

Model training and task executing method and device

Info

Publication number
CN115017915A
CN115017915A
Authority
CN
China
Prior art keywords
word
model
feature representation
sequence
word sequence
Prior art date
Legal status
Granted
Application number
CN202210605524.8A
Other languages
Chinese (zh)
Other versions
CN115017915B (en)
Inventor
步佳昊
王金刚
武威
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202210605524.8A priority Critical patent/CN115017915B/en
Publication of CN115017915A publication Critical patent/CN115017915A/en
Application granted granted Critical
Publication of CN115017915B publication Critical patent/CN115017915B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking

Abstract

The specification discloses a method and a device for model training and task execution. A model can be trained jointly on a character sequence obtained by dividing a target sentence at character granularity and a word sequence obtained by dividing the target sentence at word granularity, so that during training the model learns the feature information of both the character sequence and the word sequence of the target sentence, as well as the association between the characters and the words contained in the target sentence. The feature representation that the trained model outputs for the target sentence therefore combines the advantages of its character feature representation and its word feature representation, which improves the accuracy of the feature representations output by the model.

Description

Model training and task executing method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for model training and task execution.
Background
Language models are common in the field of natural language processing and are widely applied in search, recommendation, advertising, and other fields. Current language-model pre-training usually adopts character-granularity input (i.e., the input sentence is split into individual characters before being fed to the model). In a Chinese-language environment, however, word-granularity input (i.e., the input sentence is split into basic words before being fed to the model) expresses the basic meaning of a sentence better than character-granularity input. For example, "washing machine" (洗衣机) is a noun whose original meaning is expressed only when its three characters are combined into one word; once it is split into single characters, this combined expressive power is lost. The standalone character 衣 ("clothes") refers to clothing, whereas inside "washing machine" it is merely part of the noun, yet the model marks the standalone character and the character inside the word with the same identification information, so this difference cannot be expressed.
Therefore, how to improve the accuracy of the feature representations output by language models is an urgent problem to be solved.
Disclosure of Invention
The present specification provides a method and an apparatus for model training and task execution, which partially solve the above problems in the prior art.
The technical solution adopted by this specification is as follows:
the present specification provides a method of model training, comprising:
acquiring a character sequence and a word sequence corresponding to a target sentence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and taking the character feature representation and the word feature representation as a feature representation pair of the target sentence;
training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, wherein the target sentence and the other sentence are different sentences.
Optionally, the character sequence contains all characters in the target sentence, and the word sequence contains all words in the target sentence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into the preset model, so that the model obtains the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence based on the overall semantics of the target sentence.
Optionally, the word sequence contains any one word in the target sentence, and the character sequence contains the characters that make up the word contained in the word sequence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into the preset model, so that the model obtains the word feature representation corresponding to the word sequence and the character feature representation corresponding to the character sequence based on the semantics of the word contained in the word sequence.
Optionally, inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into the preset model, so that the model obtains the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence based on the overall semantics of the target sentence, and, for each word contained in the target sentence, determines the word feature representation corresponding to the word based on the semantics of the word and determines the character feature representations corresponding to the characters contained in the word based on the semantics of the word.
Optionally, training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and feature representations in the feature representation pairs of other sentences specifically comprises:
determining a first contrastive loss according to the similarity between the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence, and the similarity between these feature representations and the feature representations in the feature representation pairs of other sentences;
for each word in the target sentence, determining a second contrastive loss according to the similarity between the word feature representation corresponding to the word and the character feature representations corresponding to the characters contained in the word, and the similarity between the character feature representations corresponding to the characters contained in the word and the character feature representations corresponding to the characters contained in words of other sentences;
determining a total loss according to the first contrastive loss and the second contrastive loss;
and training the model with minimizing the total loss as the optimization goal.
Optionally, inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
querying the base representations corresponding to the characters contained in the character sequence and the base representations corresponding to the words contained in the word sequence;
inputting the base representations corresponding to the characters contained in the character sequence and the base representations corresponding to the words contained in the word sequence into an embedding layer of the preset model, so as to obtain, through the embedding layer, the character feature representation output by the model based on the character sequence and the word feature representation output by the model based on the word sequence;
training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and feature representations in the feature representation pairs of other sentences specifically comprises:
training the embedding layer with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and feature representations in the feature representation pairs of other sentences.
Optionally, the method further comprises:
acquiring a specified model for processing a target service;
deploying the trained embedding layer into the specified model, and executing a task according to the specified model after the embedding layer is deployed.
The present specification provides a method of task execution, comprising:
acquiring text data;
extracting an information sequence of a specified granularity from the text data, wherein the specified granularity comprises: character granularity or word granularity;
inputting the information sequence into a pre-trained model to obtain an output result for the information sequence, wherein the model is obtained by training through the above model training method;
and executing the task according to the output result.
The present specification provides an apparatus for model training, comprising:
the acquisition module is used for acquiring a character sequence and a word sequence corresponding to the target sentence;
the feature extraction module is used for inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and taking the character feature representation and the word feature representation as a feature representation pair of the target sentence;
and the training module is used for training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, wherein the target sentence and the other sentence are different sentences.
The present specification provides an apparatus for task execution, comprising:
the text acquisition module is used for acquiring text data;
an extraction module, configured to extract an information sequence of a specified granularity from the text data, where the specified granularity includes: character granularity or word granularity;
the output module is used for inputting the information sequence into a pre-trained model to obtain an output result for the information sequence, where the model is obtained by training through the above model training method;
and the execution module is used for executing tasks according to the output result.
The present specification provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the above methods of model training and task execution.
The present specification provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the above method for model training and task execution.
The technical solution adopted by this specification can achieve the following beneficial effects:
In the model training method provided in this specification, a character sequence and a word sequence corresponding to a target sentence may first be obtained; the character sequence and the word sequence are then input into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and the character feature representation and the word feature representation are taken as a feature representation pair of the target sentence; the model is then trained with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, where the target sentence and the other sentence are different sentences.
It can be seen from the above method that the model can be trained jointly on the character sequence obtained by dividing the target sentence at character granularity and the word sequence obtained by dividing it at word granularity, so that during training the model learns the feature information of both the character sequence and the word sequence of the target sentence, as well as the association between the characters and the words contained in the target sentence. The feature representation the trained model outputs for the target sentence therefore combines the advantages of the character feature representation and the word feature representation, which improves the accuracy of the feature representations output by the model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description serve to explain the specification; they do not constitute an undue limitation of the specification. In the drawings:
FIG. 1 is a schematic flow chart of a method of model training provided herein;
FIG. 2 is a schematic diagram of a process for training a model provided herein;
FIG. 3 is a flow diagram of a method of task execution provided in the present specification;
FIG. 4 is a schematic diagram of an apparatus for model training provided herein;
FIG. 5 is a schematic diagram of an apparatus for task execution provided herein;
FIG. 6 is a schematic diagram of an electronic device corresponding to FIG. 1 provided in the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present application.
At present, a common training method for a language model works as follows: in the input layer, the text used for training is divided at both character granularity and word granularity, and each character and each word is mapped to an independent feature representation (used only as a unique identifier); then, still in the input layer, the feature representation of each character is concatenated with the feature representation of the word to which the character belongs before being input into the language model so as to train it.
However, in practical applications, downstream tasks are often performed based on the feature representations of text output by the embedding layer of the language model. In the above method, the character-granularity and word-granularity feature representations of the text are concatenated and fused before they reach the embedding layer, so the embedding layer cannot effectively distinguish the character-granularity text from the word-granularity text in the fused features and cannot learn the association between them.
In addition, another common method is whole-word masking: the text is divided at character granularity and input into the language model for training, and is also divided at word granularity and input into the language model for training, so that the character-granularity input and the word-granularity input of the text are fused over two passes. In practical applications, however, the downstream task then needs to concatenate the feature representation the language model produces for the character-granularity input of the text with the feature representation it produces for the word-granularity input, so the final feature representation occupies more dimensions, which further increases the difficulty of executing the downstream task.
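As a brief illustration of the dimensionality drawback just described, the sketch below concatenates a character-granularity representation with a word-granularity representation; the vector size is an illustrative assumption, not a value prescribed by this specification.

```python
import torch

dim = 128
char_repr = torch.randn(dim)               # feature representation from the character-granularity input
word_repr = torch.randn(dim)               # feature representation from the word-granularity input
fused = torch.cat([char_repr, word_repr])  # the downstream task must consume 2 * dim dimensions
assert fused.shape[0] == 2 * dim
```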
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a model training method provided in this specification, specifically including the following steps:
s101: and acquiring a word sequence and a word sequence corresponding to the target sentence.
In practical applications, a service platform can send text information input by a user to the model so that the model extracts the corresponding text feature representation; the service platform can then execute the corresponding downstream services (for example, recommending commodities to the user or predicting user behavior) based on the extracted text feature representation. To ensure that these downstream services run smoothly, the accuracy of the text feature representation obtained through the model is particularly important.
Based on this, the present specification provides a model training method in which the character sequence and the word sequence of a target sentence used for training are both input into the model, so that during training the embedding layer of the model can learn the text features of both the character sequence and the word sequence of the target sentence, thereby improving the accuracy of the feature representations the model extracts. The model here may be a language model.
The character sequence (i.e., the character-granularity input) of the target sentence is obtained by splitting the target sentence into all the characters it contains, and the characters in the character sequence are ordered as they appear in the target sentence. For example, if the target sentence is the five-character Chinese phrase rendered here as "beacon fire linked to March", its character sequence consists of the five characters rendered as "beacon", "fire", "linked", "three", "month".
The word sequence (i.e., the word-granularity input) of the target sentence is obtained by splitting the target sentence into all the words it contains, and the words in the word sequence are ordered as they appear in the target sentence. For the same example sentence, the word sequence is "beacon fire", "linked", "March".
Further, in order to enable the model to learn the associations among all the characters and words contained in the target sentence within the semantic context of the whole sentence, and also to learn the association between a single word of the target sentence and the characters that make it up, the server may prepare the character sequence and the word sequence of the target sentence in two forms, so that the model can learn under two different semantic environments. The two forms are described in detail below.
Specifically, in the first form the character sequence contains all characters in the target sentence and the word sequence contains all words in the target sentence. For example, if the target sentence is "beacon fire linked to March", the word sequence in this case is "beacon fire", "linked", "March", and the character sequence is "beacon", "fire", "linked", "three", "month".
In the second form the word sequence contains a single word of the target sentence and the character sequence contains the characters that make up that word. For example, if the target sentence is "beacon fire linked to March", the word sequence in this case may be "beacon fire", and the corresponding character sequence is "beacon", "fire".
There are many ways to divide the target sentence into the character sequence and the word sequence, for example, performing word segmentation with a tokenizer such as jieba.
It should be noted that, in practical applications, the server may input the character sequence and the word sequence of the target sentence into the model as follows: for each character in the character sequence, query the base representation (which can be understood as a unique identification ID) corresponding to that character, and for each word in the word sequence, query the base representation (likewise a unique identification ID) corresponding to that word; the base representations of all characters in the character sequence and the base representations of all words in the word sequence are then input into the model.
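As an illustration of the splitting and base-representation lookup described above, the following is a minimal sketch in Python; the jieba tokenizer, the vocabulary dictionary, and the [UNK] fallback token are assumptions made for this sketch and are not prescribed by this specification.

```python
import jieba  # an assumed Chinese word segmenter; any segmenter with similar output works


def build_sequences(sentence: str):
    """Split a sentence into its character sequence and word sequence,
    both ordered as they appear in the sentence."""
    char_seq = list(sentence)             # character-granularity input
    word_seq = list(jieba.cut(sentence))  # word-granularity input
    return char_seq, word_seq


def to_base_ids(tokens, vocab):
    """Map each character or word to its base representation (a unique ID);
    unknown tokens fall back to a hypothetical [UNK] entry."""
    return [vocab.get(tok, vocab["[UNK]"]) for tok in tokens]


# Usage sketch with a toy vocabulary (all IDs are illustrative only).
vocab = {"[UNK]": 0, "洗": 1, "衣": 2, "机": 3, "洗衣机": 4}
chars, words = build_sequences("洗衣机")
char_ids = to_base_ids(chars, vocab)
word_ids = to_base_ids(words, vocab)
```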
In the present specification, the execution subject of the method for implementing model training and task execution may refer to a designated device such as a server installed on a business platform, or may refer to a designated device such as a desktop computer or a notebook computer.
S102: input the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and take the character feature representation and the word feature representation as a feature representation pair of the target sentence.
The server can input the character sequence and the word sequence of the target sentence, acquired as described above, into the preset model to obtain the character feature representation the model outputs based on the character sequence and the word feature representation it outputs based on the word sequence, and take the two as the feature representation pair of the target sentence, through which the model can then be trained.
As noted above, the server may obtain the character sequence and the word sequence in two forms, which target different semantic environments: the first form trains the model for the semantic environment of the whole target sentence, while the second form trains the model for the semantic environment of a single word in the target sentence. The use of the two forms in the model training method is described in detail below.
If the character sequence contains all characters in the target sentence and the word sequence contains all words in the target sentence, the server may input the acquired character sequence and word sequence into the preset model, so that the model obtains, based on the overall semantics of the target sentence, the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence as the feature representation pair of the target sentence.
If the word sequence contains a single word of the target sentence and the character sequence contains the characters that make up that word, the server may input the acquired character sequence and word sequence into the preset model, so that the model obtains, based on the semantics of the word contained in the word sequence, the word feature representation corresponding to the word sequence and the character feature representation corresponding to the character sequence as a feature representation pair of the target sentence.
Since the two forms target different semantic environments, the server can also use them together. Specifically, the server may input the character sequence and the word sequence into the preset model, so that the model obtains the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence based on the overall semantics of the target sentence, and, for each word contained in the target sentence, determines the word feature representation of that word based on its semantics and the character feature representations of the characters it contains based on the semantics of that word, all as feature representation pairs of the target sentence.
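To make the above concrete, the following is a minimal sketch of an encoder that produces the feature representation pair; PyTorch, a single shared embedding table, mean pooling, and the layer sizes are all assumptions made for this sketch rather than details fixed by this specification.

```python
import torch
import torch.nn as nn


class DualGranularityEncoder(nn.Module):
    """Sketch: one shared embedding table encodes both the character sequence
    and the word sequence; mean pooling yields a sentence-level representation
    for each sequence, forming the feature representation pair."""

    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        # the embedding layer that the training procedure optimizes
        self.embedding = nn.Embedding(vocab_size, dim)

    def encode(self, ids: torch.Tensor) -> torch.Tensor:
        # ids: (seq_len,) -> pooled representation: (dim,)
        return self.embedding(ids).mean(dim=0)

    def forward(self, char_ids: torch.Tensor, word_ids: torch.Tensor):
        char_repr = self.encode(char_ids)  # character feature representation
        word_repr = self.encode(word_ids)  # word feature representation
        return char_repr, word_repr        # the feature representation pair
```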
S103: train the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, where the target sentence and the other sentence are different sentences.
In this specification, the server may train the model with the optimization goal that the similarity between the two feature representations in the feature representation pair of the target sentence is greater than the similarity between a feature representation in that pair and a feature representation in the feature representation pair of another sentence, where the target sentence and the other sentence are different sentences, as shown in FIG. 2.
Fig. 2 is a schematic diagram of a process for training a model provided in this specification.
As can be seen from FIG. 2, the server divides the target sentence into a character sequence and a word sequence, inputs both into the preset model, and then trains the model according to the character feature representation and the word feature representation the model outputs for the two sequences, together with the feature representations in the feature representation pairs of other sentences.
Here, the similarity between a feature representation in the feature representation pair of the target sentence and a feature representation in the feature representation pair of another sentence means the similarity between any one representation of the target sentence's pair and any one representation of the other sentence's pair, for example the similarity between the character feature representation of the target sentence and the character feature representation of the other sentence, or the similarity between the word feature representation of the target sentence and the word feature representation of the other sentence, and so on.
Further, one way for the server to train the model toward this optimization goal is to determine a contrastive loss from the similarity between the two feature representations in the feature representation pair of the target sentence and the similarity between a feature representation in that pair and the feature representations in the feature representation pairs of other sentences, and then train the model with the goal of minimizing this contrastive loss.
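The following is a minimal sketch of such a contrastive loss over a batch, where the other sentences in the batch serve as the "other sentences" above; the InfoNCE-style formulation, cosine similarity, and the temperature value are assumptions made for this sketch, not details fixed by this specification.

```python
import torch
import torch.nn.functional as F


def pair_contrastive_loss(char_reprs: torch.Tensor,
                          word_reprs: torch.Tensor,
                          temperature: float = 0.05) -> torch.Tensor:
    """char_reprs, word_reprs: (batch, dim); row i of each tensor comes from the
    same target sentence, and every other row plays the role of another sentence."""
    char_reprs = F.normalize(char_reprs, dim=-1)
    word_reprs = F.normalize(word_reprs, dim=-1)
    # cosine similarity of every character representation to every word representation
    sim = char_reprs @ word_reprs.t() / temperature      # (batch, batch)
    labels = torch.arange(sim.size(0), device=sim.device)
    # the loss decreases as the diagonal (in-pair) similarities grow relative to
    # the off-diagonal similarities to other sentences' representations
    return F.cross_entropy(sim, labels)
```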
As can be seen from the above, there are three specific cases in which the server determines a contrastive loss to train the model; the three cases are described below.
In the first case, the feature representation pair consists of the character feature representation of the character sequence and the word feature representation of the word sequence, both obtained based on the overall semantics of the target sentence. The server may determine a contrastive loss according to the similarity between the two feature representations in this pair and the similarity between a feature representation in this pair and the feature representations in the corresponding pairs of other sentences (likewise obtained based on the overall semantics of those sentences), and may train the model with minimizing this contrastive loss as the optimization goal; the contrastive loss is smaller when the similarity between the two feature representations in the pair is larger relative to the similarity between a feature representation in the pair and the feature representations in the pairs of other sentences.
In the second case, the feature representation pair consists of the word feature representation of the word sequence and the character feature representation of the character sequence, both obtained based on the semantics of the word contained in the word sequence. The server may determine a contrastive loss according to the similarity between the two feature representations in this pair and the similarity between a feature representation in this pair and the feature representations in the corresponding pairs of other sentences (obtained based on the semantics of the words contained in those sentences' word sequences), and may train the model with minimizing this contrastive loss as the optimization goal; again, the contrastive loss is smaller when the in-pair similarity is larger relative to the similarity to other sentences' feature representations.
In the third case, the feature representation pairs include both the character feature representation of the character sequence and the word feature representation of the word sequence obtained based on the overall semantics of the target sentence, and the word feature representations and character feature representations obtained based on the semantics of the individual words. The server may determine a first contrastive loss according to the similarity between the sentence-level character feature representation and the sentence-level word feature representation, and the similarity between these representations and the sentence-level feature representations in the pairs of other sentences. Then, for each word in the target sentence, the server may determine a second contrastive loss according to the similarity between the word feature representation of that word and the character feature representations of the characters it contains, and the similarity between those character feature representations and the character feature representations of characters contained in words of other sentences. The server may then determine a total loss from the first contrastive loss and the second contrastive loss, and train the model with minimizing the total loss as the optimization goal.
Here, the second contrastive loss may be determined as follows: for each word in the target sentence, determine a sub-contrastive loss for that word according to the similarity between the word feature representation of the word and the character feature representations of the characters it contains, and the similarity between those character feature representations and the character feature representations of characters contained in any word of other sentences; then weight the sub-contrastive losses of all words contained in the target sentence to obtain the second contrastive loss.
For example, if the target sentence is "beacon fire linked to March", its word sequence is "beacon fire", "linked", "March", and the second contrastive loss of the target sentence is obtained by weighting the sub-contrastive loss corresponding to "beacon fire", the sub-contrastive loss corresponding to "linked", and the sub-contrastive loss corresponding to "March".
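The following sketch illustrates one way the per-word sub-contrastive losses and the total loss might be computed; the mean pooling of a word's characters, the choice of negatives, and the uniform weighting are assumptions made for this sketch and are not fixed by this specification.

```python
import torch
import torch.nn.functional as F


def word_sub_loss(word_repr: torch.Tensor,
                  char_reprs_of_word: torch.Tensor,
                  negative_reprs: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """Sub-contrastive loss for one word: its representation should be closer to the
    pooled representation of its own characters than to representations of
    characters/words from other sentences (the negatives)."""
    anchor = F.normalize(word_repr, dim=-1)                          # (dim,)
    positive = F.normalize(char_reprs_of_word.mean(dim=0), dim=-1)   # (dim,)
    negatives = F.normalize(negative_reprs, dim=-1)                  # (num_neg, dim)
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)
    logits = (candidates @ anchor) / temperature                     # (1 + num_neg,)
    target = torch.zeros(1, dtype=torch.long, device=logits.device)  # index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), target)


def total_loss(first_loss: torch.Tensor, sub_losses: list) -> torch.Tensor:
    """Total loss = first (sentence-level) contrastive loss plus the weighted sum of
    the per-word sub-contrastive losses (uniform weights assumed here)."""
    second_loss = sum(sub_losses) / len(sub_losses)
    return first_loss + second_loss
```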
It should be noted that the training described above may target either the entire model or only the embedding layer of the model. Specifically, if only the embedding layer is trained, the base representations of the characters contained in the character sequence and the base representations of the words contained in the word sequence may be input into the embedding layer of the model so as to obtain, through the embedding layer, the character feature representation output by the model based on the character sequence and the word feature representation output by the model based on the word sequence; the embedding layer is then trained with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the pair and the feature representations in the feature representation pairs of other sentences.
In addition, after the training of the embedding layer is completed, the trained embedding layer can be flexibly deployed to other models. Specifically, the server may acquire a specified model for processing a target service, deploy the trained embedding layer into that specified model, and execute tasks according to the specified model after the embedding layer is deployed. There are two typical ways of doing so.
In the first way, the target service corresponding to the specified model is executed directly after the trained embedding layer is deployed into it. For example, the server may acquire a recommendation model used to recommend commodities to users, deploy the trained embedding layer into the recommendation model as its embedding layer, and then use the recommendation model with the deployed embedding layer to process the search text input by a user and extract the corresponding text feature representation, so that the recommendation model can recommend commodities to the user based on that text feature representation.
In the second way, the trained embedding layer is deployed into a model that is still to be trained, and that model is then trained based on the text feature representations output by the embedding layer; during this training, the parameters of the embedding layer may be kept unchanged or may be fine-tuned according to the training of the downstream model. For example, the server may acquire a prediction model to be trained for predicting user behavior, deploy the trained embedding layer into that prediction model, and include the embedding layer in the training process of the prediction model.
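A minimal sketch of such a deployment is given below; the attribute name `embedding` on the downstream model and the freeze flag are hypothetical choices made for this sketch only.

```python
import torch.nn as nn


def deploy_embedding(trained_embedding: nn.Embedding,
                     downstream_model: nn.Module,
                     freeze: bool = True) -> nn.Module:
    """Copy the trained embedding layer's weights into a specified downstream model.
    Assumes the downstream model exposes its embedding layer as `model.embedding`
    (a hypothetical attribute name used only in this sketch)."""
    downstream_model.embedding.weight.data.copy_(trained_embedding.weight.data)
    # freeze=True keeps the embedding parameters unchanged during downstream training;
    # freeze=False allows them to be fine-tuned together with the downstream model.
    downstream_model.embedding.weight.requires_grad = not freeze
    return downstream_model
```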
As can be seen from the above, the server may train the model based on the semantics of the whole target sentence and based on the semantics of the individual words it contains, so that during training the model learns the feature information of both the character sequence and the word sequence of the target sentence as well as the association between them. The feature representation the trained model outputs for the target sentence therefore combines the advantages of the character feature representation and the word feature representation, which improves the accuracy of the feature representations output by the model.
To further illustrate the present specification, a method of performing task execution by the model trained by the above method is described in detail below, as shown in fig. 3.
Fig. 3 is a flowchart of a task execution method provided in this specification, including the following steps:
s301, acquiring text data.
In this specification, when the server executes a task using a model trained by the above model training method, it first needs to acquire the text data required for the task. For example, if a commodity recommendation task is executed through the trained model, the commodity name input by the user needs to be acquired first, and the corresponding commodity is then recommended to the user based on the text feature representation corresponding to that commodity name.
S302: extract an information sequence of a specified granularity from the text data, where the specified granularity is character granularity or word granularity.
S303: input the information sequence into a pre-trained model to obtain an output result for the information sequence, where the model is obtained through the above model training method.
S304: execute the task according to the output result.
In this specification, during training by the above model training method, the model learns both the character-granularity and the word-granularity features of the input target sentence, as well as the association between the two granularities. In practical application, therefore, only one information sequence of the text data needs to be input, at either character granularity or word granularity, and the feature representation the model extracts from that single-granularity sequence still carries both the character-granularity and the word-granularity features of the text data.
Based on this, the server may extract an information sequence of a specified granularity from the text data, where the specified granularity may be character granularity or word granularity, input the information sequence into the pre-trained model (a model trained by the above model training method) to obtain an output result (which may be the feature representation corresponding to the text data, or a concrete result of the service the model serves, for example recommended content matching the text data in a recommendation service), and execute the task according to the output result. The task to be executed is determined by actual needs, for example: according to the trained model, for the commodity information input by the user, obtain a commodity matching that commodity information (i.e., the output result), and then recommend the corresponding commodity to the user.
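The following is a minimal inference sketch for this task-execution flow; the jieba tokenizer, the vocabulary dictionary, and the `encode` method of the model follow the earlier sketches and are assumptions rather than details fixed by this specification.

```python
import jieba  # assumed tokenizer, as in the earlier sketch
import torch


def run_task(text: str, model, vocab, granularity: str = "word") -> torch.Tensor:
    """Extract an information sequence at a single specified granularity
    (character or word) and feed it to the trained model."""
    if granularity == "char":
        tokens = list(text)             # character-granularity information sequence
    else:
        tokens = list(jieba.cut(text))  # word-granularity information sequence
    ids = torch.tensor([vocab.get(t, vocab["[UNK]"]) for t in tokens])
    with torch.no_grad():
        output = model.encode(ids)      # feature representation of the text data
    # a downstream service (e.g., commodity recommendation) would consume this output
    return output
```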
It can be seen from the above that, because the model learns the character-granularity features and word-granularity features of text data, as well as the association between the characters and words the text contains, during training, it only needs the character-granularity or word-granularity input of the text data during task execution to produce the corresponding output result. Since the association between characters and words is implicit in the output feature representation, and since the feature representation the model extracts carries both the character-granularity and word-granularity features of the text data, the accuracy of the output result obtained through the model is also effectively improved.
It should be noted that all actions of acquiring signals, information, or data in the present application are performed in compliance with the data protection laws and regulations of the corresponding country or region and with the authorization of the owner of the corresponding device.
Based on the same idea, the present specification also provides a corresponding apparatus for model training and task execution, as shown in fig. 4 and 5.
Fig. 4 is a schematic diagram of a model training apparatus provided in this specification, which specifically includes:
an obtaining module 401, configured to obtain a character sequence and a word sequence corresponding to a target sentence;
a feature extraction module 402, configured to input the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and to use the character feature representation and the word feature representation as a feature representation pair of the target sentence;
a training module 403, configured to train the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, wherein the target sentence and the other sentence are different sentences.
Optionally, the character sequence contains all characters in the target sentence, and the word sequence contains all words in the target sentence;
the feature extraction module 402 is specifically configured to input the character sequence and the word sequence into a preset model, so that the model obtains the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence based on the overall semantics of the target sentence.
Optionally, the word sequence contains any one word in the target sentence, and the character sequence contains the characters that make up the word contained in the word sequence;
the feature extraction module 402 is specifically configured to input the character sequence and the word sequence into a preset model, so that the model obtains the word feature representation corresponding to the word sequence and the character feature representation corresponding to the character sequence based on the semantics of the word contained in the word sequence.
Optionally, the feature extraction module 402 is specifically configured to input the character sequence and the word sequence into a preset model, so that the model obtains the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence based on the overall semantics of the target sentence, and, for each word contained in the target sentence, determines the word feature representation corresponding to the word based on the semantics of the word and determines the character feature representations corresponding to the characters contained in the word based on the semantics of the word.
Optionally, the training module 403 is specifically configured to determine a first contrastive loss according to the similarity between the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence, and the similarity between these feature representations and the feature representations in the feature representation pairs of other sentences; for each word in the target sentence, determine a second contrastive loss according to the similarity between the word feature representation corresponding to the word and the character feature representations corresponding to the characters contained in the word, and the similarity between the character feature representations corresponding to the characters contained in the word and the character feature representations corresponding to the characters contained in words of other sentences; determine a total loss according to the first contrastive loss and the second contrastive loss; and train the model with minimizing the total loss as the optimization goal.
Optionally, the feature extraction module 402 is specifically configured to query the base representations corresponding to the characters contained in the character sequence and the base representations corresponding to the words contained in the word sequence, and to input these base representations into an embedding layer of a preset model, so as to obtain, through the embedding layer, the character feature representation output by the model based on the character sequence and the word feature representation output by the model based on the word sequence;
the training module 403 is specifically configured to train the embedding layer with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and the feature representations in the feature representation pairs of other sentences.
Optionally, the apparatus further comprises:
a deployment module 404, configured to acquire a specified model for processing a target service, deploy the trained embedding layer into the specified model, and execute tasks according to the specified model after the embedding layer is deployed.
Fig. 5 is a schematic diagram of a task execution device provided in this specification, which specifically includes:
a text obtaining module 501, configured to obtain text data;
an extracting module 502, configured to extract an information sequence of a specified granularity from the text data, where the specified granularity includes: character granularity or word granularity;
an output module 503, configured to input the information sequence into a pre-trained model, so as to obtain an output result for the information sequence, where the model is obtained by training through the model training method;
and the execution module 504 is configured to execute a task according to the output result.
The present specification also provides a computer-readable storage medium storing a computer program operable to execute the methods of model training and task execution provided in FIG. 1 above.
This specification also provides a schematic block diagram of the electronic device shown in FIG. 6. As shown in FIG. 6, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement the method of model training and task execution described in FIG. 1. Of course, besides the software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, improvements to a technology could be clearly distinguished as either hardware improvements (for example, improvements to circuit structures such as diodes, transistors, and switches) or software improvements (improvements to method flows). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a PLD through programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller in the form of pure computer-readable program code, the same functions can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or even the means for implementing various functions may be regarded as both software modules for implementing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This specification may be described in the general context of computer-executable instructions, such as program modules, executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the corresponding parts of the method embodiment for relevant details.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present application.

Claims (12)

1. A method of model training, comprising:
acquiring a character sequence and a word sequence corresponding to a target sentence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and taking the character feature representation and the word feature representation as a feature representation pair of the target sentence;
training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, wherein the target sentence and the other sentence are different sentences.
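For illustration, the following is a minimal sketch of the training objective in claim 1, assuming a PyTorch setup in which a batch of sentences has already been encoded once from its character sequences and once from its word sequences; the encoder, the in-batch negatives, and the temperature value are illustrative assumptions rather than details taken from the claim.

import torch
import torch.nn.functional as F

def sentence_contrastive_loss(char_reprs, word_reprs, temperature=0.05):
    # char_reprs, word_reprs: [batch, dim] feature representations obtained from
    # the character-granularity and word-granularity sequences of the same sentences.
    char_reprs = F.normalize(char_reprs, dim=-1)
    word_reprs = F.normalize(word_reprs, dim=-1)
    sims = char_reprs @ word_reprs.t() / temperature   # [batch, batch] similarity matrix
    # Diagonal entries pair the two representations of the same sentence; minimising
    # the cross-entropy pushes their similarity above every cross-sentence similarity.
    targets = torch.arange(sims.size(0), device=sims.device)
    return F.cross_entropy(sims, targets)

Under these assumptions, the two representations of the same sentence are drawn together while being pushed away from the feature representations of the other sentences in the batch, which matches the comparison stated in the claim.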
2. The method of claim 1, wherein the character sequence comprises all characters in the target sentence, and the word sequence comprises all words in the target sentence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into a preset model, so that the model obtains, based on the overall semantics of the target sentence, the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence.
3. The method according to claim 1, wherein the word sequence includes any one word in the target sentence, and the character sequence includes the characters corresponding to the word included in the word sequence;
inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into a preset model, so that the model obtains, based on the semantics of the word contained in the word sequence, the word feature representation corresponding to the word sequence and the character feature representation corresponding to the character sequence.
4. The method according to claim 1, wherein inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
inputting the character sequence and the word sequence into a preset model, so that the model obtains, based on the overall semantics of the target sentence, the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence; determines, for each word contained in the target sentence, the word feature representation corresponding to that word based on the semantics of the word; and determines, based on the semantics of that word, the character feature representations corresponding to the characters contained in that word.
5. The method of claim 4, wherein training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and the feature representations in the feature representation pairs of other sentences specifically comprises:
determining a first contrastive loss according to the similarity between the character feature representation corresponding to the character sequence and the word feature representation corresponding to the word sequence, and the similarity between these feature representations and the feature representations in the feature representation pairs of other sentences;
for each word in the target sentence, determining a second contrastive loss according to the similarity between the word feature representation corresponding to that word and the character feature representations corresponding to the characters contained in that word, and the similarity between those character feature representations and the character feature representations corresponding to characters contained in other sentences;
determining a total loss according to the first contrastive loss and the second contrastive loss;
and training the model with minimization of the total loss as the optimization goal.
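As a rough illustration of claim 5, the sketch below shows one way the word-level term and the total loss might be computed, again in PyTorch; the margin form of the word-level term, the mean pooling of character representations, and the weight alpha are assumptions, and the sentence-level term can be computed as in the sketch after claim 1.

import torch
import torch.nn.functional as F

def word_level_loss(word_reprs, own_char_reprs, other_char_reprs, margin=0.2):
    # word_reprs: one [dim] tensor per word in the target sentence;
    # own_char_reprs[i]: [n_i, dim] representations of the characters in that word;
    # other_char_reprs[i]: [m_i, dim] representations of characters from other sentences.
    loss = torch.zeros(())
    for w, own, other in zip(word_reprs, own_char_reprs, other_char_reprs):
        pos = F.cosine_similarity(w, own.mean(dim=0), dim=-1)    # word vs. its own characters
        neg = F.cosine_similarity(w, other.mean(dim=0), dim=-1)  # word vs. other sentences' characters
        loss = loss + F.relu(neg - pos + margin)                 # second contrastive loss term
    return loss

def total_loss(first_loss, second_loss, alpha=1.0):
    # Total loss per claim 5: weighted sum of the sentence-level and word-level terms;
    # the weighting factor alpha is an illustrative assumption.
    return first_loss + alpha * second_loss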
6. The method according to claim 1, wherein inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence specifically comprises:
querying the basic representations corresponding to the characters contained in the character sequence and the basic representations corresponding to the words contained in the word sequence;
inputting the basic representations corresponding to the characters contained in the character sequence and the basic representations corresponding to the words contained in the word sequence into an embedding layer of a preset model, so as to obtain, through the embedding layer, the character feature representation output by the model based on the character sequence and the word feature representation output by the model based on the word sequence;
training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and the feature representations in the feature representation pairs of other sentences specifically comprises:
training the embedding layer with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and the feature representations in the feature representation pairs of other sentences.
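A minimal sketch of the embedding layer described in claim 6, assuming the basic representations are integer vocabulary ids and that mean pooling yields the sequence-level feature representations; both the id-based lookup and the pooling choice are assumptions for illustration.

import torch.nn as nn

class GranularityEmbedding(nn.Module):
    # Maps basic representations (assumed to be vocabulary ids) of a character
    # sequence and a word sequence to the two sequence-level feature representations.
    def __init__(self, char_vocab_size, word_vocab_size, dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, dim)
        self.word_emb = nn.Embedding(word_vocab_size, dim)

    def forward(self, char_ids, word_ids):
        # char_ids: [batch, num_chars]; word_ids: [batch, num_words]
        char_feats = self.char_emb(char_ids).mean(dim=1)   # character feature representation
        word_feats = self.word_emb(word_ids).mean(dim=1)   # word feature representation
        return char_feats, word_feats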
7. The method of claim 6, wherein the method further comprises:
acquiring a specified model for processing a target service;
deploying the trained embedding layer into the specified model, and executing tasks according to the specified model after the embedding layer has been deployed.
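As an illustration of claim 7, a downstream model for the target service might reuse the trained embedding layer as follows; the classification head, the dimensions, and fusion by averaging are assumptions, not details from the claim.

import torch.nn as nn

class SpecifiedModel(nn.Module):
    # A downstream model for the target service that reuses the trained embedding layer.
    def __init__(self, trained_embedding, dim=128, num_labels=2):
        super().__init__()
        self.embedding = trained_embedding      # the deployed, already-trained embedding layer
        self.head = nn.Linear(dim, num_labels)  # assumed task-specific head

    def forward(self, char_ids, word_ids):
        char_feats, word_feats = self.embedding(char_ids, word_ids)
        return self.head((char_feats + word_feats) / 2)   # assumed fusion by averaging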
8. A method of task execution, comprising:
acquiring text data;
extracting an information sequence of a specified granularity from the text data, wherein the specified granularity comprises: character granularity or word granularity;
inputting the information sequence into a pre-trained model to obtain an output result for the information sequence, wherein the model is trained by the method of any one of claims 1 to 7;
and executing the task according to the output result.
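A sketch of the task-execution flow in claim 8, assuming a plain character split or an external word segmenter for the specified granularity and an encode method on the pre-trained model; all of these interfaces are hypothetical and stand in for whatever tokenisation and inference pipeline is actually used.

def execute_task(text, model, segmenter, granularity="character"):
    # Extract an information sequence at the specified granularity and feed it to
    # the pre-trained model; segmenter and model.encode are hypothetical interfaces.
    if granularity == "character":
        tokens = list(text)          # character-granularity sequence
    else:
        tokens = segmenter(text)     # word-granularity sequence via an assumed segmenter
    output = model.encode(tokens)    # output result for the information sequence
    return output                    # the task is then executed according to this result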
9. An apparatus for model training, comprising:
the acquisition module is used for acquiring a character sequence and a word sequence corresponding to a target sentence;
the feature extraction module is used for inputting the character sequence and the word sequence into a preset model to obtain a character feature representation output by the model based on the character sequence and a word feature representation output by the model based on the word sequence, and taking the character feature representation and the word feature representation as a feature representation pair of the target sentence;
and the training module is used for training the model with the optimization goal that the similarity between the two feature representations in the feature representation pair is greater than the similarity between a feature representation in the feature representation pair and a feature representation in the feature representation pair of another sentence, wherein the target sentence and the other sentence are different sentences.
10. An apparatus for task execution, comprising:
the text acquisition module is used for acquiring text data;
an extraction module, configured to extract an information sequence of a specified granularity from the text data, wherein the specified granularity comprises: character granularity or word granularity;
an output module, configured to input the information sequence into a pre-trained model to obtain an output result for the information sequence, wherein the model is trained by the method of any one of claims 1 to 7;
and the execution module is used for executing tasks according to the output result.
11. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 8.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 8 when executing the program.
CN202210605524.8A 2022-05-30 2022-05-30 Model training and task execution method and device Active CN115017915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210605524.8A CN115017915B (en) 2022-05-30 2022-05-30 Model training and task execution method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210605524.8A CN115017915B (en) 2022-05-30 2022-05-30 Model training and task execution method and device

Publications (2)

Publication Number Publication Date
CN115017915A true CN115017915A (en) 2022-09-06
CN115017915B CN115017915B (en) 2023-05-30

Family

ID=83071527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210605524.8A Active CN115017915B (en) 2022-05-30 2022-05-30 Model training and task execution method and device

Country Status (1)

Country Link
CN (1) CN115017915B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019105134A1 (en) * 2017-11-30 2019-06-06 阿里巴巴集团控股有限公司 Word vector processing method, apparatus and device
CN110162749A (en) * 2018-10-22 2019-08-23 哈尔滨工业大学(深圳) Information extracting method, device, computer equipment and computer readable storage medium
CN110287312A (en) * 2019-05-10 2019-09-27 平安科技(深圳)有限公司 Calculation method, device, computer equipment and the computer storage medium of text similarity
CN110377905A (en) * 2019-06-28 2019-10-25 北京百度网讯科技有限公司 Semantic expressiveness processing method and processing device, computer equipment and the readable medium of sentence
CN110489555A (en) * 2019-08-21 2019-11-22 创新工场(广州)人工智能研究有限公司 A kind of language model pre-training method of combination class word information
CN110795935A (en) * 2020-01-06 2020-02-14 广东博智林机器人有限公司 Training method and device for character word vector model, terminal and storage medium
CN110956033A (en) * 2019-12-04 2020-04-03 北京中电普华信息技术有限公司 Text similarity calculation method and device
US20200193217A1 (en) * 2017-02-27 2020-06-18 Yutou Technology (Hangzhou) Co., Ltd. Method for determining sentence similarity
CN111914551A (en) * 2020-07-29 2020-11-10 北京字节跳动网络技术有限公司 Language representation model system, pre-training method, device, equipment and medium
CN113673201A (en) * 2021-07-15 2021-11-19 北京三快在线科技有限公司 Text representation vector generation method and device, storage medium and electronic equipment
CN114077841A (en) * 2021-11-18 2022-02-22 平安普惠企业管理有限公司 Semantic extraction method and device based on artificial intelligence, electronic equipment and medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200193217A1 (en) * 2017-02-27 2020-06-18 Yutou Technology (Hangzhou) Co., Ltd. Method for determining sentence similarity
WO2019105134A1 (en) * 2017-11-30 2019-06-06 阿里巴巴集团控股有限公司 Word vector processing method, apparatus and device
CN110162749A (en) * 2018-10-22 2019-08-23 哈尔滨工业大学(深圳) Information extracting method, device, computer equipment and computer readable storage medium
CN110287312A (en) * 2019-05-10 2019-09-27 平安科技(深圳)有限公司 Calculation method, device, computer equipment and the computer storage medium of text similarity
CN110377905A (en) * 2019-06-28 2019-10-25 北京百度网讯科技有限公司 Semantic expressiveness processing method and processing device, computer equipment and the readable medium of sentence
CN110489555A (en) * 2019-08-21 2019-11-22 创新工场(广州)人工智能研究有限公司 A kind of language model pre-training method of combination class word information
CN110956033A (en) * 2019-12-04 2020-04-03 北京中电普华信息技术有限公司 Text similarity calculation method and device
CN110795935A (en) * 2020-01-06 2020-02-14 广东博智林机器人有限公司 Training method and device for character word vector model, terminal and storage medium
CN111914551A (en) * 2020-07-29 2020-11-10 北京字节跳动网络技术有限公司 Language representation model system, pre-training method, device, equipment and medium
CN113673201A (en) * 2021-07-15 2021-11-19 北京三快在线科技有限公司 Text representation vector generation method and device, storage medium and electronic equipment
CN114077841A (en) * 2021-11-18 2022-02-22 平安普惠企业管理有限公司 Semantic extraction method and device based on artificial intelligence, electronic equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NINGJIE LU et al.: "Chinese Clinical Named Entity Recognition with Word-Level Information Incorporating Dictionaries", 2019 International Joint Conference on Neural Networks (IJCNN) *
LIU XIAOMIN et al.: "A Comparative Study of the Role of Different Feature Granularities in Microblog Short Text Classification", Information Science *
WU WEI: "Research on Character Recognition Algorithms Based on Template Matching and Structural Features", China Master's Theses Full-text Database, Information Science and Technology (Monthly) *

Also Published As

Publication number Publication date
CN115017915B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN112308113A (en) Target identification method, device and medium based on semi-supervision
CN114332873A (en) Training method and device for recognition model
CN115545002A (en) Method, device, storage medium and equipment for model training and business processing
CN113887206B (en) Model training and keyword extraction method and device
CN112966577B (en) Method and device for model training and information providing
CN113887235A (en) Information recommendation method and device
CN107577660B (en) Category information identification method and device and server
CN115221523B (en) Data processing method, device and equipment
CN115238250B (en) Model processing method, device and equipment
CN113887234B (en) Model training and recommending method and device
CN115017915A (en) Model training and task executing method and device
CN114996570A (en) Information recommendation method and device
CN114926437A (en) Image quality evaluation method and device
CN113344197A (en) Training method of recognition model, service execution method and device
CN114116816A (en) Recommendation method and device
CN113344590A (en) Method and device for model training and complaint rate estimation
CN111539962A (en) Target image classification method, device and medium
CN112287130A (en) Searching method, device and equipment for graphic questions
CN113642603B (en) Data matching method and device, storage medium and electronic equipment
CN117369783B (en) Training method and device for security code generation model
CN114861665B (en) Method and device for training reinforcement learning model and determining data relation
CN116795972B (en) Model training method and device, storage medium and electronic equipment
CN115423485B (en) Data processing method, device and equipment
CN113011424A (en) Training sample generation method and device
CN115017899B (en) Abbreviation generation method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant