WO2021147404A1 - Dependency relationship classification method and related device - Google Patents

Dependency relationship classification method and related device (依存关系分类方法及相关设备)

Info

Publication number
WO2021147404A1
Authority
WO
WIPO (PCT)
Prior art keywords
word
sample
samples
vector
sentence
Prior art date
Application number
PCT/CN2020/122917
Other languages
English (en)
French (fr)
Inventor
马旭强
郝正鸿
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021147404A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to artificial intelligence technology, and specifically to a dependency relationship classification method, device, computer device, and computer-readable storage medium.
  • Dependency relationship classification is a key technology in natural language processing. The inventor has realized that the accuracy of dependency classification affects the accuracy of natural language processing, and that dependency classification often suffers from inaccurate results.
  • The first aspect of the present application provides a dependency relationship classification method, the method including:
  • acquiring sentence samples, a target sentence, and a classification model, where the classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • performing word dependency relationship classification on the target sentence through the trained classification model.
  • A second aspect of the present application provides a dependency relationship classification device, the device including:
  • an acquisition module used to acquire sentence samples, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • a generating module configured to generate the first character vector sequence of the sentence sample through the BERT layer;
  • a word segmentation module used to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
  • an encoding module configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
  • a calculation module for calculating the word vectors of the multiple word samples according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample through the word encoding layer;
  • a determining module configured to determine the core word vectors and the dependent word vectors of the plurality of word samples according to the word vectors of the plurality of word samples through the perception layer;
  • a first classification module configured to classify the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of the any two word samples through the affine classification layer;
  • a training module used to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain a trained classification model;
  • a second classification module used to perform word dependency relationship classification on the target sentence through the trained classification model.
  • A third aspect of the present application provides a computer device that includes a processor, the processor being configured to execute computer-readable instructions stored in a memory to implement the following steps:
  • acquiring sentence samples, a target sentence, and a classification model, where the classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • performing word dependency relationship classification on the target sentence through the trained classification model.
  • A fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor, implement the following steps:
  • acquiring sentence samples, a target sentence, and a classification model, where the classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • performing word dependency relationship classification on the target sentence through the trained classification model.
  • This application encodes sentence samples through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model.
  • The affine classification layer classifies the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
  • Fig. 1 is a flowchart of a dependency classification method provided by an embodiment of the present application.
  • Fig. 2 is a structural diagram of a dependency classification device provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • the dependency classification method of the present application is applied to one or more computer devices.
  • the computer device is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and so on.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • FIG. 1 is a flowchart of a dependency classification method provided in Embodiment 1 of the present application.
  • the dependency classification method is applied to computer equipment, and is used to classify the word dependency relationship of the target sentence to improve the accuracy of the classification.
  • the dependency classification method specifically includes the following steps. According to different requirements, the order of the steps in the flowchart can be changed, and some of the steps can be omitted.
  • The classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer.
  • The sentence samples, the target sentence, and the classification model input by the user can be received, or the sentence samples, the target sentence, and the classification model can be pulled from a cloud storage device.
  • The sentence samples are used to train the classification model.
  • the target sentence is a sentence to be classified.
  • The BERT layer and the character encoding layer may be pre-trained.
  • BERT stands for Bidirectional Encoder Representations from Transformers, that is, the encoder part of a bidirectional Transformer model.
  • The BERT layer can be pre-trained based on the Masked LM and Next Sentence Prediction tasks, so that the BERT layer captures word-level and sentence-level semantic representations.
  • The first character vector sequence of the sentence sample includes the semantic information of the sentence sample.
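As an illustration of this step, the following sketch shows one way a pre-trained BERT encoder could produce a per-character vector sequence for a sentence sample; the checkpoint name "bert-base-chinese" and the use of the transformers library are assumptions for illustration, not details taken from the patent.

```python
# Sketch: producing the "first character vector sequence" of a sentence sample
# with a pre-trained BERT encoder (checkpoint name is an assumption).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

sentence_sample = "百度第一季度总营收为241亿元"
inputs = tokenizer(sentence_sample, return_tensors="pt")
with torch.no_grad():
    outputs = bert(**inputs)

# last_hidden_state has shape (1, seq_len, 768); dropping the [CLS] and [SEP]
# positions leaves roughly one vector per character of the sentence sample.
first_char_vector_sequence = outputs.last_hidden_state[0, 1:-1]
print(first_char_vector_sequence.shape)
```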
  • The word segmentation layer may include a recurrent neural network (RNN); or the word segmentation layer may include a BiLSTM (bidirectional long short-term memory network) layer and a CRF (Conditional Random Field) layer.
  • The component type of each word sample includes entity, attribute, attribute value, description, relation, and so on.
  • In another embodiment, the dependency relationship classification method further includes: for each word sample, acquiring the component type of the word sample; and deleting the word sample when its component type is not entity, attribute, attribute value, description, or relation.
  • For example, for the sentence sample "百度第一季度总营收为241亿元" ("Baidu's total revenue for the first quarter was 24.1 billion yuan"), the word samples remaining after deletion are "百度" (Baidu), "第一季度" (the first quarter), "总营收" (total revenue), and "241亿元" (24.1 billion yuan).
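A minimal sketch of the filtering described above; the segmentation output and the component-type assignments below are hard-coded assumptions for illustration, whereas in the method they would come from the word segmentation layer.

```python
# Sketch: keeping only word samples whose component type is one of the five
# types named above; the segmentation and type assignments are illustrative.
KEEP_TYPES = {"entity", "attribute", "attribute_value", "description", "relation"}

segmented = [            # (word sample, component type) for the example sentence
    ("百度", "entity"),
    ("第一季度", "attribute"),
    ("总营收", "attribute"),
    ("为", "other"),       # not one of the five component types, so it is deleted
    ("241亿元", "attribute_value"),
]

word_samples = [(word, comp) for word, comp in segmented if comp in KEEP_TYPES]
print([word for word, _ in word_samples])  # ['百度', '第一季度', '总营收', '241亿元']
```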
  • The encoding of the sentence sample by the character encoding layer includes: for each character sample in the sentence sample, acquiring position information and type information of the character sample; encoding the position information through a first character encoding sublayer to obtain a second character vector; and encoding the type information through a second character encoding sublayer to obtain a third character vector.
  • The position information of a character sample is the ordinal number of the character sample in the sentence sample, or the reverse ordinal number of the character sample in the sentence sample.
  • The target word sample to which the character sample belongs is determined, the component type of the target word sample is determined as the type information of the character sample, and the type information of the character sample is encoded by the second character encoding sublayer into the third character vector of the character sample.
  • For example, the third character vectors of character samples whose component types are entity, attribute, attribute value, description, and relation are "001", "010", "011", "100", and "101", respectively.
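The following sketch illustrates, under simplifying assumptions, how the two character-encoding sublayers could turn position information and component-type information into second and third character vectors; in practice the position information would typically feed an embedding lookup rather than being used as a raw index.

```python
# Sketch: the first character-encoding sublayer turns each character's position
# into a "second character vector", and the second sublayer turns the component
# type of the word the character belongs to into a "third character vector"
# (codes follow the "001"..."101" example above).
TYPE_CODE = {
    "entity": [0, 0, 1],
    "attribute": [0, 1, 0],
    "attribute_value": [0, 1, 1],
    "description": [1, 0, 0],
    "relation": [1, 0, 1],
}

def encode_characters(word_samples):
    """word_samples: list of (word, component_type) covering the sentence.
    Returns the second and third character vector sequences."""
    second_seq, third_seq = [], []
    position = 0
    for word, comp_type in word_samples:
        for _ in word:                       # one entry per character sample
            second_seq.append([position])    # forward position index
            third_seq.append(TYPE_CODE[comp_type])
            position += 1
    return second_seq, third_seq

second_seq, third_seq = encode_characters([("百度", "entity"), ("总营收", "attribute")])
```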
  • The calculating, by the word encoding layer, of the word vectors of the plurality of word samples according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample includes:
  • for each word sample, determining the multiple target character samples that make up the word sample; for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
  • calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
  • The generating of the feature vector of the target character sample according to its first, second, and third character vectors includes: concatenating the three character vectors to obtain the feature vector of the target character sample; or calculating a first mean vector of the three character vectors and determining the first mean vector as the feature vector of the target character sample.
  • The calculating of the word vector of the word sample according to the feature vectors of the multiple target character samples includes: calculating a second mean vector of those feature vectors and determining the second mean vector as the word vector of the word sample.
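A hedged sketch of the word-vector computation described above, assuming the concatenation variant for character feature vectors and mean pooling over each word's characters; the dimensions are illustrative.

```python
# Sketch: each character's feature vector is the concatenation of its first,
# second, and third character vectors, and a word sample's word vector is the
# mean (the "second mean vector") of its characters' feature vectors.
import torch

def word_vectors(first_seq, second_seq, third_seq, word_spans):
    """first/second/third_seq: (num_chars, d1/d2/d3) tensors for one sentence;
    word_spans: (start, end) character index ranges, one per word sample."""
    char_features = torch.cat([first_seq, second_seq, third_seq], dim=-1)
    vectors = [char_features[start:end].mean(dim=0) for start, end in word_spans]
    return torch.stack(vectors)

# "百度" covers characters 0-1, "第一季度" 2-5, "总营收" 6-8, "241亿元" 10-14
# (character 9, "为", was deleted with its word sample); dimensions are made up.
w = word_vectors(torch.randn(15, 768), torch.randn(15, 16), torch.randn(15, 3),
                 [(0, 2), (2, 6), (6, 9), (10, 15)])
print(w.shape)  # torch.Size([4, 787])
```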
  • The perception layer includes two different perceptrons: a core word perceptron and a dependent word perceptron.
  • For each word sample, the word vector of the word sample is encoded by the core word perceptron to obtain the core word vector of the word sample;
  • the word vector of the word sample is encoded by the dependent word perceptron to obtain the dependent word vector of the word sample.
  • When the word sample corresponds to the core item in a dependency relationship, the core word vector of the word sample is the vector representation of the word sample; when the word sample corresponds to the dependent item in a dependency relationship, the dependent word vector of the word sample is the vector representation of the word sample.
  • The dependency relationship points from the core item to the dependent item.
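The sketch below shows one plausible realization of the perception layer as two separate single-layer perceptrons; the layer sizes and activation are assumptions.

```python
# Sketch: the perception layer as two separate perceptrons that map each word
# vector to a core word vector and a dependent word vector (sizes illustrative).
import torch
from torch import nn

class PerceptionLayer(nn.Module):
    def __init__(self, word_dim=787, out_dim=256):
        super().__init__()
        self.core_word_perceptron = nn.Sequential(nn.Linear(word_dim, out_dim), nn.ReLU())
        self.dependent_word_perceptron = nn.Sequential(nn.Linear(word_dim, out_dim), nn.ReLU())

    def forward(self, word_vectors):                     # (num_words, word_dim)
        core_word_vectors = self.core_word_perceptron(word_vectors)
        dependent_word_vectors = self.dependent_word_perceptron(word_vectors)
        return core_word_vectors, dependent_word_vectors

core_vecs, dep_vecs = PerceptionLayer()(torch.randn(4, 787))
```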
  • For the u-th word sample and the v-th word sample among the multiple word samples, when the u-th word sample corresponds to the core item in the dependency relationship and the v-th word sample corresponds to the dependent item, the core word vector of the u-th word sample and the dependent word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the core word vector of the u-th word sample and the dependent word vector of the v-th word sample and outputs a first score vector; the dependency relationship type corresponding to the highest-scoring dimension of the first score vector is determined as the target dependency relationship type pointing from the u-th word sample to the v-th word sample.
  • When the u-th word sample corresponds to the dependent item in the dependency relationship and the v-th word sample corresponds to the core item, the dependent word vector of the u-th word sample and the core word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on them and outputs a second score vector; the dependency relationship type corresponding to the highest-scoring dimension of the second score vector is determined as the target dependency relationship type pointing from the v-th word sample to the u-th word sample.
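As an illustration of the affine classification layer, the sketch below scores an ordered (core, dependent) pair of word vectors against a set of dependency relationship types; the bilinear scoring form and the example label set are assumptions, not the patent's exact formulation.

```python
# Sketch: the affine classification layer scores an ordered (core, dependent)
# pair of word samples over dependency relationship types; a bilinear form is
# used here as one plausible choice, and the label set is only an example.
import torch
from torch import nn

REL_TYPES = ["UNK", "root", "subj", "obj", "pred", "adv"]

class AffineClassifier(nn.Module):
    def __init__(self, dim=256, num_rels=len(REL_TYPES)):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, num_rels)   # one score per relation type

    def forward(self, core_u, dependent_v):
        return self.bilinear(core_u, dependent_v)         # the "first score vector"

classifier = AffineClassifier()
score_vector = classifier(torch.randn(1, 256), torch.randn(1, 256))
predicted_type = REL_TYPES[score_vector.argmax(dim=-1).item()]  # type from u to v
```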
  • the above classification results can also be stored in a node of a blockchain.
  • The training of the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample includes: determining a plurality of label weights according to the dependency relationship labels in the sentence sample; calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results, the dependency relationship labels, and the plurality of label weights;
  • and optimizing the parameters of the classification model according to the loss value based on a back-propagation algorithm.
  • The determining of a plurality of label weights according to the dependency relationship labels in the sentence sample includes:
  • acquiring the label type of each dependency relationship label, the label types including a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
  • acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
  • determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
  • For example, the first label type includes "UNK",
  • and the second label type includes "root", "subj", "obj", "pred", "adv", and so on.
  • the weight of the first label is determined to be 0.1
  • the weight of the second label is determined to be 1.
  • the weight of the first label is determined to be 0.2
  • the weight of the second label is determined to be 0.9.
  • The first label type indicates that there is no dependency relationship between the any two word samples (that is, there is no dependency arc pointing from the word sample corresponding to the core item to the word sample corresponding to the dependent item).
  • Any two word samples are recorded as a word pair, giving n word pairs. The dependency relationship classification result of the i-th word pair is y_i, the dependency relationship label of the i-th word pair is y′_i, and the label weight of the i-th word pair is w_i.
  • The loss value is ce, where y_i and y′_i are one-hot vectors; w_i takes the first label weight (for example 0.1) when the label type of the i-th word pair is the first label type, and the second label weight (for example 1) when it is the second label type.
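A minimal sketch of the weighted loss described above, assuming the per-pair cross-entropy is scaled by the label weights (0.1 for the no-relation label, 1 otherwise) and summed; the exact formula in the source may differ.

```python
# Sketch: per-pair cross-entropy scaled by label weights (0.1 for the "UNK"
# no-relation label, 1 for all other labels) and summed into the loss value ce.
import torch
import torch.nn.functional as F

REL_TYPES = ["UNK", "root", "subj", "obj", "pred", "adv"]
LABEL_WEIGHT = {"UNK": 0.1}                       # first label type; others default to 1

def weighted_pair_loss(score_vectors, gold_labels):
    """score_vectors: (n_pairs, num_rels) raw scores; gold_labels: label names."""
    targets = torch.tensor([REL_TYPES.index(label) for label in gold_labels])
    weights = torch.tensor([LABEL_WEIGHT.get(label, 1.0) for label in gold_labels])
    per_pair_ce = F.cross_entropy(score_vectors, targets, reduction="none")
    return (weights * per_pair_ce).sum()          # loss value ce

scores = torch.randn(3, len(REL_TYPES), requires_grad=True)
ce = weighted_pair_loss(scores, ["subj", "UNK", "obj"])
ce.backward()                                     # gradients for back-propagation
```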
  • In another embodiment, the dependency classification method further includes: evaluating the classification capability of the classification model with the Macro-F1 metric;
  • when the classification capability of the classification model is greater than a preset capability value, the training of the classification model is stopped, and the trained classification model is obtained.
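The following sketch shows one way the Macro-F1 stopping criterion could be implemented: per-type F1 scores are averaged and training stops once the average exceeds a preset capability value; the threshold is illustrative.

```python
# Sketch: Macro-F1 over the dependency relationship types, used as a stopping
# criterion once it exceeds a preset capability value (threshold illustrative).
from collections import defaultdict

def macro_f1(predicted, gold, types):
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for p, g in zip(predicted, gold):
        if p == g:
            tp[p] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f1_scores = []
    for t in types:
        precision = tp[t] / (tp[t] + fp[t]) if tp[t] + fp[t] else 0.0
        recall = tp[t] / (tp[t] + fn[t]) if tp[t] + fn[t] else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)

PRESET_CAPABILITY = 0.85
score = macro_f1(["subj", "UNK", "obj"], ["subj", "obj", "obj"],
                 ["UNK", "root", "subj", "obj", "pred", "adv"])
stop_training = score > PRESET_CAPABILITY
```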
  • Performing word dependency relationship classification on the target sentence through the trained classification model can obtain the bidirectional dependency relationship types of any two words in the target sentence. For the j-th word and the k-th word in the target sentence, when the j-th word corresponds to the core item in the dependency relationship and the k-th word corresponds to the dependent item, the trained classification model outputs a third score vector; the dependency relationship type corresponding to the highest-scoring dimension of the third score vector is determined as the target dependency relationship type pointing from the j-th word to the k-th word. When the k-th word corresponds to the core item and the j-th word corresponds to the dependent item, the trained classification model outputs a fourth score vector, whose highest-scoring dimension gives the target dependency relationship type pointing from the k-th word to the j-th word.
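As an illustration of this bidirectional classification, the sketch below reuses the hypothetical PerceptionLayer and AffineClassifier from the earlier sketches to enumerate every ordered pair of words in a target sentence and keep the highest-scoring relationship type for each direction.

```python
# Sketch: enumerate every ordered pair of words in the target sentence and keep
# the highest-scoring dependency relationship type for each direction, reusing
# the illustrative PerceptionLayer and AffineClassifier defined above.
import torch

def classify_dependencies(word_vectors, words, perception, classifier, rel_types):
    core, dependent = perception(word_vectors)
    results = []
    for j in range(len(words)):
        for k in range(len(words)):
            if j == k:
                continue
            # j-th word as core item (head), k-th word as dependent item
            scores = classifier(core[j:j + 1], dependent[k:k + 1])
            rel = rel_types[scores.argmax(dim=-1).item()]
            results.append((words[j], words[k], rel))     # arc j -> k of type rel
    return results
```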
  • The dependency classification method of the first embodiment encodes sentence samples through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model.
  • The affine classification layer classifies the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
  • FIG. 2 is a structural diagram of a dependency classification device provided in Embodiment 2 of the present application.
  • the dependency classification device 20 is applied to computer equipment.
  • the dependence relationship classification device 20 is used to classify the word dependence relationship of the target sentence to improve the accuracy of the classification.
  • The dependency classification device 20 may include an acquisition module 201, a generation module 202, a word segmentation module 203, an encoding module 204, a calculation module 205, a determination module 206, a first classification module 207, a training module 208, and a second classification module 209.
  • the obtaining module 201 is used to obtain sentence samples, target sentences, and classification models.
  • The classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer.
  • The sentence samples, the target sentence, and the classification model input by the user can be received, or the sentence samples, the target sentence, and the classification model can be pulled from a cloud storage device.
  • The sentence samples are used to train the classification model.
  • the target sentence is a sentence to be classified.
  • The generating module 202 is configured to generate the first character vector sequence of the sentence sample through the BERT layer.
  • BERT stands for Bidirectional Encoder Representations from Transformers, that is, the encoder part of a bidirectional Transformer model.
  • The BERT layer can be pre-trained based on the Masked LM and Next Sentence Prediction tasks, so that the BERT layer captures word-level and sentence-level semantic representations.
  • The first character vector sequence of the sentence sample includes the semantic information of the sentence sample.
  • the word segmentation module 203 is configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample.
  • The word segmentation layer may include a recurrent neural network (RNN); or the word segmentation layer may include a BiLSTM (bidirectional long short-term memory network) layer and a CRF (Conditional Random Field) layer.
  • The component type of each word sample includes entity, attribute, attribute value, description, relation, and so on.
  • The dependency classification device further includes a deletion module configured to acquire, for each word sample, the component type of the word sample, and to delete the word sample when its component type is not entity, attribute, attribute value, description, or relation.
  • For example, for the sentence sample "百度第一季度总营收为241亿元" ("Baidu's total revenue for the first quarter was 24.1 billion yuan"), the word samples remaining after deletion are "百度" (Baidu), "第一季度" (the first quarter), "总营收" (total revenue), and "241亿元" (24.1 billion yuan).
  • The encoding module 204 is configured to encode the sentence sample through the character encoding layer to obtain the second character vector sequence and the third character vector sequence of the sentence sample.
  • The encoding of the sentence sample by the character encoding layer includes: for each character sample in the sentence sample, acquiring position information and type information of the character sample; encoding the position information through a first character encoding sublayer to obtain a second character vector; and encoding the type information through a second character encoding sublayer to obtain a third character vector.
  • The position information of a character sample is the ordinal number of the character sample in the sentence sample, or the reverse ordinal number of the character sample in the sentence sample.
  • The target word sample to which the character sample belongs is determined, the component type of the target word sample is determined as the type information of the character sample, and the type information of the character sample is encoded by the second character encoding sublayer into the third character vector of the character sample.
  • For example, the third character vectors of character samples whose component types are entity, attribute, attribute value, description, and relation are "001", "010", "011", "100", and "101", respectively.
  • The calculation module 205 is configured to calculate the word vectors of the multiple word samples according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample through the word encoding layer.
  • The calculating, by the word encoding layer, of the word vectors of the plurality of word samples according to the first, second, and third character vector sequences of the sentence sample includes:
  • for each word sample, determining the multiple target character samples that make up the word sample; for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
  • calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
  • The generating of the feature vector of the target character sample according to its first, second, and third character vectors includes: concatenating the three character vectors to obtain the feature vector of the target character sample; or calculating a first mean vector of the three character vectors and determining the first mean vector as the feature vector of the target character sample.
  • The calculating of the word vector of the word sample according to the feature vectors of the multiple target character samples includes: calculating a second mean vector of those feature vectors and determining the second mean vector as the word vector of the word sample.
  • the determining module 206 is configured to determine the core word vector and the dependent word vector of the plurality of word samples according to the word vectors of the plurality of word samples through the perception layer.
  • The perception layer includes two different perceptrons: a core word perceptron and a dependent word perceptron.
  • For each word sample, the word vector of the word sample is encoded by the core word perceptron to obtain the core word vector of the word sample;
  • the word vector of the word sample is encoded by the dependent word perceptron to obtain the dependent word vector of the word sample.
  • When the word sample corresponds to the core item in a dependency relationship, the core word vector of the word sample is the vector representation of the word sample; when the word sample corresponds to the dependent item in a dependency relationship, the dependent word vector of the word sample is the vector representation of the word sample.
  • The dependency relationship points from the core item to the dependent item.
  • the first classification module 207 is configured to classify the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of any two word samples through the affine classification layer.
  • For the u-th word sample and the v-th word sample among the multiple word samples, when the u-th word sample corresponds to the core item in the dependency relationship and the v-th word sample corresponds to the dependent item, the core word vector of the u-th word sample and the dependent word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the core word vector of the u-th word sample and the dependent word vector of the v-th word sample and outputs a first score vector; the dependency relationship type corresponding to the highest-scoring dimension of the first score vector is determined as the target dependency relationship type pointing from the u-th word sample to the v-th word sample.
  • When the u-th word sample corresponds to the dependent item in the dependency relationship and the v-th word sample corresponds to the core item, the dependent word vector of the u-th word sample and the core word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on them and outputs a second score vector; the dependency relationship type corresponding to the highest-scoring dimension of the second score vector is determined as the target dependency relationship type pointing from the v-th word sample to the u-th word sample.
  • the above classification results can also be stored in a node of a blockchain.
  • the training module 208 is configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain the trained classification Model.
  • The training of the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample includes: determining a plurality of label weights according to the dependency relationship labels in the sentence sample; calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results, the dependency relationship labels, and the plurality of label weights;
  • and optimizing the parameters of the classification model according to the loss value based on a back-propagation algorithm.
  • The determining of a plurality of label weights according to the dependency relationship labels in the sentence sample includes:
  • acquiring the label type of each dependency relationship label, the label types including a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
  • acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
  • determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
  • For example, the first label type includes "UNK",
  • and the second label type includes "root", "subj", "obj", "pred", "adv", and so on.
  • the weight of the first label is determined to be 0.1
  • the weight of the second label is determined to be 1.
  • the weight of the first label is determined to be 0.2
  • the weight of the second label is determined to be 0.9.
  • The first label type indicates that there is no dependency relationship between the any two word samples (that is, there is no dependency arc pointing from the word sample corresponding to the core item to the word sample corresponding to the dependent item).
  • Any two word samples are recorded as a word pair, giving n word pairs. The dependency relationship classification result of the i-th word pair is y_i, the dependency relationship label of the i-th word pair is y′_i, and the label weight of the i-th word pair is w_i.
  • The loss value is ce, where y_i and y′_i are one-hot vectors; w_i takes the first label weight (for example 0.1) when the label type of the i-th word pair is the first label type, and the second label weight (for example 1) when it is the second label type.
  • The dependency classification device further includes a stop module configured to evaluate the classification capability of the classification model with the Macro-F1 metric,
  • and to stop the training of the classification model when the classification capability of the classification model is greater than a preset capability value, so as to obtain the trained classification model.
  • the second classification module 209 is configured to classify the target sentence by the word dependency relationship through the trained classification model.
  • Performing word dependency relationship classification on the target sentence through the trained classification model can obtain the bidirectional dependency relationship types of any two words in the target sentence. For the j-th word and the k-th word in the target sentence, when the j-th word corresponds to the core item in the dependency relationship and the k-th word corresponds to the dependent item, the trained classification model outputs a third score vector; the dependency relationship type corresponding to the highest-scoring dimension of the third score vector is determined as the target dependency relationship type pointing from the j-th word to the k-th word. When the k-th word corresponds to the core item and the j-th word corresponds to the dependent item, the trained classification model outputs a fourth score vector, whose highest-scoring dimension gives the target dependency relationship type pointing from the k-th word to the j-th word.
  • The dependency classification device 20 of the second embodiment encodes sentence samples through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model.
  • The affine classification layer classifies the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
  • This embodiment provides a computer-readable storage medium having computer-readable instructions stored on the computer-readable storage medium.
  • the computer-readable storage medium may be nonvolatile or volatile.
  • the classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of the modules in the above-mentioned device embodiment are implemented, for example, modules 201-209 in FIG. 2:
  • the obtaining module 201 is used to obtain sentence samples, target sentences, and classification models, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • a generating module 202 configured to generate the first character vector sequence of the sentence sample through the BERT layer;
  • the word segmentation module 203 is configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
  • the encoding module 204 is configured to encode the sentence sample through the character encoding layer to obtain the second character vector sequence and the third character vector sequence of the sentence sample;
  • the calculation module 205 is configured to calculate the word vectors of the multiple word samples according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample through the word encoding layer;
  • the determining module 206 is configured to determine the core word vector and the dependent word vector of the plurality of word samples according to the word vectors of the plurality of word samples through the perception layer;
  • the first classification module 207 is configured to classify the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of any two word samples through the affine classification layer;
  • the training module 208 is configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain the trained classification Model;
  • the second classification module 209 is configured to classify the target sentence by the word dependency relationship through the trained classification model.
  • FIG. 3 is a schematic diagram of the computer equipment provided in the fourth embodiment of the application.
  • the computer device 30 includes a memory 301, a processor 302, and computer-readable instructions 303 stored in the memory 301 and running on the processor 302, such as a dependency classification program.
  • the processor 302 executes the computer-readable instruction 303, the steps in the above embodiment of the dependency classification method are implemented, for example, steps 101-109 shown in FIG. 1:
  • the classification model includes a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of the modules in the above-mentioned device embodiment are implemented, for example, modules 201-209 in FIG. 2:
  • the obtaining module 201 is used to obtain sentence samples, target sentences, and classification models, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
  • a generating module 202 configured to generate the first character vector sequence of the sentence sample through the BERT layer;
  • the word segmentation module 203 is configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
  • the encoding module 204 is configured to encode the sentence sample through the character encoding layer to obtain the second character vector sequence and the third character vector sequence of the sentence sample;
  • the calculation module 205 is configured to calculate the word vectors of the multiple word samples according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample through the word encoding layer;
  • the determining module 206 is configured to determine the core word vector and the dependent word vector of the plurality of word samples according to the word vectors of the plurality of word samples through the perception layer;
  • the first classification module 207 is configured to classify the dependency relationship of any two word samples according to the core word vectors and dependent word vectors of any two word samples through the affine classification layer;
  • the training module 208 is configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence samples to obtain the trained classification Model;
  • the second classification module 209 is configured to classify the target sentence by the word dependency relationship through the trained classification model.
  • the computer-readable instruction 303 may be divided into one or more modules, and the one or more modules are stored in the memory 301 and executed by the processor 302 to complete the method.
  • the one or more modules may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instruction 303 in the computer device 30.
  • The computer-readable instruction 303 can be divided into the acquisition module 201, the generation module 202, the word segmentation module 203, the encoding module 204, the calculation module 205, the determination module 206, the first classification module 207, the training module 208, and the second classification module 209 in FIG. 2;
  • refer to the second embodiment for the specific functions of each module.
  • the computer device 30 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • Those skilled in the art can understand that FIG. 3 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30; the computer device may include more or fewer components than shown, combine certain components, or have different components.
  • the computer device 30 may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 302 may be a central processing unit (Central Processing Unit, CPU), other general processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • The general-purpose processor may be a microprocessor, or the processor 302 may be any conventional processor.
  • The processor 302 is the control center of the computer device 30 and uses various interfaces and lines to connect the various parts of the entire computer device 30.
  • The memory 301 can be used to store the computer-readable instructions 303, and the processor 302 implements the various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and calling the data stored in the memory 301.
  • The memory 301 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function), and the storage data area may store data created according to the use of the computer device 30, and the like.
  • The memory 301 may include a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, a read-only memory (ROM), a random access memory (RAM), or another non-volatile/volatile storage device.
  • If the integrated modules of the computer device 30 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be non-volatile or volatile. Based on this understanding, this application implements all or part of the processes in the above-mentioned embodiments and methods, and can also be completed by instructing relevant hardware through computer-readable instructions, and the computer-readable instructions can be stored in a computer-readable storage medium.
  • the computer-readable instruction when executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer-readable instructions may be in the form of source code, object code, executable file, or some intermediate forms, etc.
  • The computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instructions, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
  • the computer-readable storage medium may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program required by at least one function, etc.; the storage data area may store Data created by the use of nodes, etc.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database, a chain of data blocks generated in association with one another using cryptographic methods; each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
  • the above-mentioned integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium.
  • The above-mentioned software functional module is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute part of the dependency relationship classification method described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A dependency relationship classification method and a related device, relating to artificial intelligence technology. The method comprises: performing word segmentation on a sentence sample by means of a word segmentation layer to obtain multiple word samples of the sentence sample (103); calculating word vectors of the multiple word samples by means of a word encoding layer according to a first character vector sequence, a second character vector sequence, and a third character vector sequence of the sentence sample (105); determining core word vectors and dependent word vectors of the multiple word samples by means of a perception layer according to the word vectors of the multiple word samples (106); classifying, by means of an affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples (107); training a classification model according to the dependency relationship classification result of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample (108); and performing word dependency relationship classification on a target sentence by means of the trained classification model (109). The method can improve the accuracy of the classification.

Description

Dependency relationship classification method and related device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 30, 2020, with application number 202010753501.2 and entitled "依存关系分类方法及相关设备" (dependency relationship classification method and related device), the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to artificial intelligence technology, and in particular to a dependency relationship classification method, device, computer device, and computer-readable storage medium.
Background
Dependency relationship classification is a key technology in natural language processing. The inventor has realized that the accuracy of dependency relationship classification affects the accuracy of natural language processing, and that dependency relationship classification often suffers from inaccurate results.
How to improve the accuracy of dependency relationship classification has become a problem to be solved.
Summary
In view of the above, it is necessary to provide a dependency relationship classification method, device, computer device, and computer-readable storage medium that can perform word dependency relationship classification on a target sentence and improve the accuracy of the classification.
A first aspect of the present application provides a dependency relationship classification method, the method comprising:
acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
generating a first character vector sequence of the sentence sample through the BERT layer;
performing word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
classifying, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples;
training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
performing word dependency relationship classification on the target sentence through the trained classification model.
A second aspect of the present application provides a dependency relationship classification device, the device comprising:
an acquisition module configured to acquire a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
a generation module configured to generate a first character vector sequence of the sentence sample through the BERT layer;
a word segmentation module configured to perform word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
an encoding module configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
a calculation module configured to calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
a determination module configured to determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
a first classification module configured to classify, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples;
a training module configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
a second classification module configured to perform word dependency relationship classification on the target sentence through the trained classification model.
A third aspect of the present application provides a computer device, the computer device comprising a processor, the processor being configured to execute computer-readable instructions stored in a memory to implement the following steps:
acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
generating a first character vector sequence of the sentence sample through the BERT layer;
performing word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
classifying, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples;
training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
performing word dependency relationship classification on the target sentence through the trained classification model.
A fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions, when executed by a processor, implementing the following steps:
acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
generating a first character vector sequence of the sentence sample through the BERT layer;
performing word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
classifying, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples;
training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
performing word dependency relationship classification on the target sentence through the trained classification model.
In this application, the sentence sample is encoded through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model. The affine classification layer classifies the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
Brief Description of the Drawings
Fig. 1 is a flowchart of a dependency relationship classification method provided by an embodiment of the present application.
Fig. 2 is a structural diagram of a dependency relationship classification device provided by an embodiment of the present application.
Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present application.
Detailed Description
In order to understand the above objectives, features, and advantages of the present application more clearly, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments can be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present application; the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application.
Preferably, the dependency relationship classification method of the present application is applied in one or more computer devices. The computer device is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and so on.
The computer device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The computer device can perform human-computer interaction with a user through a keyboard, a mouse, a remote control, a touch panel, a voice control device, or the like.
Embodiment 1
Fig. 1 is a flowchart of the dependency relationship classification method provided in Embodiment 1 of the present application. The dependency relationship classification method is applied to a computer device and is used to perform word dependency relationship classification on a target sentence to improve the accuracy of the classification.
The dependency relationship classification method specifically includes the following steps. According to different requirements, the order of the steps in the flowchart can be changed, and some steps can be omitted.
101: Acquire a sentence sample, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer.
The sentence sample, the target sentence, and the classification model input by a user may be received, or the sentence sample, the target sentence, and the classification model may be pulled from a cloud storage device.
The sentence sample is used to train the classification model. The target sentence is a sentence to be classified.
The BERT layer and the character encoding layer may be pre-trained.
102: Generate a first character vector sequence of the sentence sample through the BERT layer.
BERT stands for Bidirectional Encoder Representations from Transformers, that is, the encoder part of a bidirectional Transformer model. The BERT layer can be pre-trained based on the Masked LM and Next Sentence Prediction tasks, so that the BERT layer captures word-level and sentence-level semantic representations.
The first character vector sequence of the sentence sample includes the semantic information of the sentence sample.
103: Perform word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample.
The word segmentation layer may include a recurrent neural network (RNN); or the word segmentation layer may include a BiLSTM (bidirectional long short-term memory network) layer and a CRF (Conditional Random Field) layer.
For example, word segmentation is performed on the sentence sample "百度第一季度总营收为241亿元" ("Baidu's total revenue for the first quarter was 24.1 billion yuan"), and the obtained word samples of the sentence sample are "百度", "第一季度", "总营收", "为", and "241亿元".
The component type of each word sample includes entity, attribute, attribute value, description, relation, and so on.
In another embodiment, the dependency relationship classification method further includes:
for each word sample, acquiring the component type of the word sample;
when the component type of the word sample is not entity, attribute, attribute value, description, or relation, deleting the word sample.
As in the above example, the word samples remaining after deletion are "百度", "第一季度", "总营收", and "241亿元".
104: Encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample.
In a specific embodiment, the encoding of the sentence sample through the character encoding layer includes:
for each character sample in the sentence sample, acquiring position information and type information of the character sample;
encoding the position information of the character sample through a first character encoding sublayer of the character encoding layer to obtain a second character vector of the character sample;
combining, in character order, the second character vectors of the multiple character samples in the sentence sample to obtain the second character vector sequence of the sentence sample;
encoding the type information of the character sample through a second character encoding sublayer of the character encoding layer to obtain a third character vector of the character sample;
combining, in character order, the third character vectors of the multiple character samples in the sentence sample to obtain the third character vector sequence of the sentence sample.
The position information of the character sample is the ordinal number of the character sample in the sentence sample, or the reverse ordinal number of the character sample in the sentence sample.
The target word sample to which the character sample belongs is determined, the component type of the target word sample is determined as the type information of the character sample, and the type information of the character sample is encoded by the second character encoding sublayer into the third character vector of the character sample. For example, the third character vectors of character samples whose component types are entity, attribute, attribute value, description, and relation are "001", "010", "011", "100", and "101", respectively.
105: Calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample.
In a specific embodiment, the calculating of the word vectors of the multiple word samples through the word encoding layer according to the first, second, and third character vector sequences of the sentence sample includes:
for each word sample of the multiple word samples, determining the multiple target character samples that make up the word sample;
for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
In a specific embodiment, the generating of the feature vector of the target character sample according to its first, second, and third character vectors includes:
concatenating the first character vector, the second character vector, and the third character vector of the target character sample to obtain the feature vector of the target character sample; or
calculating a first mean vector of the first, second, and third character vectors of the target character sample, and determining the first mean vector as the feature vector of the target character sample.
In a specific embodiment, the calculating of the word vector of the word sample according to the feature vectors of the multiple target character samples includes:
calculating a second mean vector of the feature vectors of the multiple target character samples, and determining the second mean vector as the word vector of the word sample.
106: Determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples.
The perception layer includes two different perceptrons: a core word perceptron and a dependent word perceptron.
For each word sample, the word vector of the word sample is encoded by the core word perceptron to obtain the core word vector of the word sample;
the word vector of the word sample is encoded by the dependent word perceptron to obtain the dependent word vector of the word sample.
When the word sample corresponds to the core item in a dependency relationship, the core word vector of the word sample is the vector representation of the word sample; when the word sample corresponds to the dependent item in a dependency relationship, the dependent word vector of the word sample is the vector representation of the word sample. The dependency relationship points from the core item to the dependent item.
107: Classify, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples.
For the u-th word sample and the v-th word sample among the multiple word samples, when the u-th word sample corresponds to the core item in the dependency relationship and the v-th word sample corresponds to the dependent item in the dependency relationship, the core word vector of the u-th word sample and the dependent word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the core word vector of the u-th word sample and the dependent word vector of the v-th word sample and outputs a first score vector; the dependency relationship type corresponding to the highest-scoring dimension of the first score vector is determined as the target dependency relationship type pointing from the u-th word sample to the v-th word sample.
When the u-th word sample corresponds to the dependent item in the dependency relationship and the v-th word sample corresponds to the core item in the dependency relationship, the dependent word vector of the u-th word sample and the core word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the dependent word vector of the u-th word sample and the core word vector of the v-th word sample and outputs a second score vector; the dependency relationship type corresponding to the highest-scoring dimension of the second score vector is determined as the target dependency relationship type pointing from the v-th word sample to the u-th word sample.
It should be emphasized that, to further ensure the privacy and security of the above classification results, the above classification results may also be stored in a node of a blockchain.
108: Train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model.
In a specific embodiment, the training of the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample includes:
determining multiple label weights according to the dependency relationship labels in the sentence sample;
calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results of the any two word samples, the dependency relationship labels of the any two word samples in the sentence sample, and the multiple label weights;
optimizing the parameters of the classification model according to the loss value based on a back-propagation algorithm.
In a specific embodiment, the determining of the multiple label weights according to the dependency relationship labels in the sentence sample includes:
acquiring the label type of each dependency relationship label, the label types of the dependency relationship labels including a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
For example, the first label type includes "UNK", and the second label type includes "root", "subj", "obj", "pred", "adv", and so on. The first label weight is determined to be 0.1 and the second label weight is determined to be 1; or the first label weight is determined to be 0.2 and the second label weight is determined to be 0.9. The first label type indicates that no dependency relationship exists between the any two word samples (that is, there is no dependency arc pointing from the word sample corresponding to the core item to the word sample corresponding to the dependent item).
Any two word samples are recorded as a word pair, giving n word pairs. The dependency relationship classification result of the i-th word pair among the n word pairs is y_i, the dependency relationship label of the i-th word pair is y′_i, and the label weight of the i-th word pair is w_i. The loss value is ce, computed as a weighted cross-entropy over the n word pairs:
ce = -∑_{i=1}^{n} w_i · y′_i · log(y_i)
where y_i and y′_i are one-hot vectors; when the label type of the i-th word pair is the first label type, w_i takes the value 0.1, and when the label type of the i-th word pair is the second label type, w_i takes the value 1.
In another embodiment, the dependency relationship classification method further includes:
evaluating the classification capability of the classification model with the Macro-F1 metric;
when the classification capability of the classification model is greater than a preset capability value, stopping the training of the classification model to obtain the trained classification model.
109: Perform word dependency relationship classification on the target sentence through the trained classification model.
Performing word dependency relationship classification on the target sentence through the trained classification model can obtain the bidirectional dependency relationship types between any two words in the target sentence. For the j-th word and the k-th word in the target sentence, when the j-th word corresponds to the core item in the dependency relationship and the k-th word corresponds to the dependent item in the dependency relationship, the trained classification model outputs a third score vector; the dependency relationship type corresponding to the highest-scoring dimension of the third score vector is determined as the target dependency relationship type pointing from the j-th word to the k-th word.
When the k-th word corresponds to the core item in the dependency relationship and the j-th word corresponds to the dependent item in the dependency relationship, the trained classification model outputs a fourth score vector; the dependency relationship type corresponding to the highest-scoring dimension of the fourth score vector is determined as the target dependency relationship type pointing from the k-th word to the j-th word.
The dependency relationship classification method of Embodiment 1 encodes the sentence sample through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model. The affine classification layer classifies the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
Embodiment 2
Fig. 2 is a structural diagram of the dependency relationship classification device provided in Embodiment 2 of the present application. The dependency relationship classification device 20 is applied to a computer device. The dependency relationship classification device 20 is used to perform word dependency relationship classification on a target sentence to improve the accuracy of the classification.
As shown in Fig. 2, the dependency relationship classification device 20 may include an acquisition module 201, a generation module 202, a word segmentation module 203, an encoding module 204, a calculation module 205, a determination module 206, a first classification module 207, a training module 208, and a second classification module 209.
The acquisition module 201 is configured to acquire a sentence sample, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer.
The sentence sample, the target sentence, and the classification model input by a user may be received, or the sentence sample, the target sentence, and the classification model may be pulled from a cloud storage device.
The sentence sample is used to train the classification model. The target sentence is a sentence to be classified.
The generation module 202 is configured to generate a first character vector sequence of the sentence sample through the BERT layer.
BERT stands for Bidirectional Encoder Representations from Transformers, that is, the encoder part of a bidirectional Transformer model. The BERT layer can be pre-trained based on the Masked LM and Next Sentence Prediction tasks, so that the BERT layer captures word-level and sentence-level semantic representations.
The first character vector sequence of the sentence sample includes the semantic information of the sentence sample.
The word segmentation module 203 is configured to perform word segmentation on the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample.
The word segmentation layer may include a recurrent neural network (RNN); or the word segmentation layer may include a BiLSTM (bidirectional long short-term memory network) layer and a CRF (Conditional Random Field) layer.
For example, word segmentation is performed on the sentence sample "百度第一季度总营收为241亿元", and the obtained word samples of the sentence sample are "百度", "第一季度", "总营收", "为", and "241亿元".
The component type of each word sample includes entity, attribute, attribute value, description, relation, and so on.
In another embodiment, the dependency relationship classification device further includes a deletion module configured to acquire, for each word sample, the component type of the word sample;
and to delete the word sample when the component type of the word sample is not entity, attribute, attribute value, description, or relation.
As in the above example, the word samples remaining after deletion are "百度", "第一季度", "总营收", and "241亿元".
The encoding module 204 is configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample.
In a specific embodiment, the encoding of the sentence sample through the character encoding layer includes:
for each character sample in the sentence sample, acquiring position information and type information of the character sample;
encoding the position information of the character sample through a first character encoding sublayer of the character encoding layer to obtain a second character vector of the character sample;
combining, in character order, the second character vectors of the multiple character samples in the sentence sample to obtain the second character vector sequence of the sentence sample;
encoding the type information of the character sample through a second character encoding sublayer of the character encoding layer to obtain a third character vector of the character sample;
combining, in character order, the third character vectors of the multiple character samples in the sentence sample to obtain the third character vector sequence of the sentence sample.
The position information of the character sample is the ordinal number of the character sample in the sentence sample, or the reverse ordinal number of the character sample in the sentence sample.
The target word sample to which the character sample belongs is determined, the component type of the target word sample is determined as the type information of the character sample, and the type information of the character sample is encoded by the second character encoding sublayer into the third character vector of the character sample. For example, the third character vectors of character samples whose component types are entity, attribute, attribute value, description, and relation are "001", "010", "011", "100", and "101", respectively.
The calculation module 205 is configured to calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample.
In a specific embodiment, the calculating of the word vectors of the multiple word samples through the word encoding layer according to the first, second, and third character vector sequences of the sentence sample includes:
for each word sample of the multiple word samples, determining the multiple target character samples that make up the word sample;
for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
In a specific embodiment, the generating of the feature vector of the target character sample according to its first, second, and third character vectors includes:
concatenating the first character vector, the second character vector, and the third character vector of the target character sample to obtain the feature vector of the target character sample; or
calculating a first mean vector of the first, second, and third character vectors of the target character sample, and determining the first mean vector as the feature vector of the target character sample.
In a specific embodiment, the calculating of the word vector of the word sample according to the feature vectors of the multiple target character samples includes:
calculating a second mean vector of the feature vectors of the multiple target character samples, and determining the second mean vector as the word vector of the word sample.
The determination module 206 is configured to determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples.
The perception layer includes two different perceptrons: a core word perceptron and a dependent word perceptron.
For each word sample, the word vector of the word sample is encoded by the core word perceptron to obtain the core word vector of the word sample;
the word vector of the word sample is encoded by the dependent word perceptron to obtain the dependent word vector of the word sample.
When the word sample corresponds to the core item in a dependency relationship, the core word vector of the word sample is the vector representation of the word sample; when the word sample corresponds to the dependent item in a dependency relationship, the dependent word vector of the word sample is the vector representation of the word sample. The dependency relationship points from the core item to the dependent item.
The first classification module 207 is configured to classify, through the affine classification layer, the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples.
For the u-th word sample and the v-th word sample among the multiple word samples, when the u-th word sample corresponds to the core item in the dependency relationship and the v-th word sample corresponds to the dependent item in the dependency relationship, the core word vector of the u-th word sample and the dependent word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the core word vector of the u-th word sample and the dependent word vector of the v-th word sample and outputs a first score vector; the dependency relationship type corresponding to the highest-scoring dimension of the first score vector is determined as the target dependency relationship type pointing from the u-th word sample to the v-th word sample.
When the u-th word sample corresponds to the dependent item in the dependency relationship and the v-th word sample corresponds to the core item in the dependency relationship, the dependent word vector of the u-th word sample and the core word vector of the v-th word sample are input into the affine classification layer; the affine classification layer performs a calculation on the dependent word vector of the u-th word sample and the core word vector of the v-th word sample and outputs a second score vector; the dependency relationship type corresponding to the highest-scoring dimension of the second score vector is determined as the target dependency relationship type pointing from the v-th word sample to the u-th word sample.
It should be emphasized that, to further ensure the privacy and security of the above classification results, the above classification results may also be stored in a node of a blockchain.
The training module 208 is configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model.
In a specific embodiment, the training of the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample includes:
determining multiple label weights according to the dependency relationship labels in the sentence sample;
calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results of the any two word samples, the dependency relationship labels of the any two word samples in the sentence sample, and the multiple label weights;
optimizing the parameters of the classification model according to the loss value based on a back-propagation algorithm.
In a specific embodiment, the determining of the multiple label weights according to the dependency relationship labels in the sentence sample includes:
acquiring the label type of each dependency relationship label, the label types of the dependency relationship labels including a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
For example, the first label type includes "UNK", and the second label type includes "root", "subj", "obj", "pred", "adv", and so on. The first label weight is determined to be 0.1 and the second label weight is determined to be 1; or the first label weight is determined to be 0.2 and the second label weight is determined to be 0.9. The first label type indicates that no dependency relationship exists between the any two word samples (that is, there is no dependency arc pointing from the word sample corresponding to the core item to the word sample corresponding to the dependent item).
Any two word samples are recorded as a word pair, giving n word pairs. The dependency relationship classification result of the i-th word pair among the n word pairs is y_i, the dependency relationship label of the i-th word pair is y′_i, and the label weight of the i-th word pair is w_i. The loss value is ce, computed as a weighted cross-entropy over the n word pairs:
ce = -∑_{i=1}^{n} w_i · y′_i · log(y_i)
where y_i and y′_i are one-hot vectors; when the label type of the i-th word pair is the first label type, w_i takes the value 0.1, and when the label type of the i-th word pair is the second label type, w_i takes the value 1.
In another embodiment, the dependency relationship classification device further includes a stop module configured to evaluate the classification capability of the classification model with the Macro-F1 metric;
and to stop the training of the classification model when the classification capability of the classification model is greater than a preset capability value, so as to obtain the trained classification model.
The second classification module 209 is configured to perform word dependency relationship classification on the target sentence through the trained classification model.
Performing word dependency relationship classification on the target sentence through the trained classification model can obtain the bidirectional dependency relationship types between any two words in the target sentence. For the j-th word and the k-th word in the target sentence, when the j-th word corresponds to the core item in the dependency relationship and the k-th word corresponds to the dependent item in the dependency relationship, the trained classification model outputs a third score vector; the dependency relationship type corresponding to the highest-scoring dimension of the third score vector is determined as the target dependency relationship type pointing from the j-th word to the k-th word.
When the k-th word corresponds to the core item in the dependency relationship and the j-th word corresponds to the dependent item in the dependency relationship, the trained classification model outputs a fourth score vector; the dependency relationship type corresponding to the highest-scoring dimension of the fourth score vector is determined as the target dependency relationship type pointing from the k-th word to the j-th word.
The dependency relationship classification device 20 of Embodiment 2 encodes the sentence sample through the BERT layer and the character encoding layer, which improves the efficiency of training the classification model. The affine classification layer classifies the dependency relationship between any two word samples according to the core word vectors and dependent word vectors of the any two word samples, which increases scenario adaptability and enables the classification model to classify the dependency relationship between any two words in the target sentence. The classification model is trained according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample to obtain a trained classification model; the trained classification model performs word dependency relationship classification on the target sentence, which improves the accuracy of the classification.
实施例三
本实施例提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机可读指令,所述计算机可读存储介质可以是非易失性,也可以是易失性。该计算机可读指令被处理器执行时实现上述依存关系分类方法实施例中的步骤,例如图1所示的步骤101-109:
101,获取语句样本、目标语句和分类模型,所述分类模型包括BERT层、字编码层、分词层、词编码层、感知层和仿射分类层;
102,通过所述BERT层生成所述语句样本的第一字向量序列;
103,通过所述分词层对所述语句样本进行分词,得到所述语句样本的多个词语样本;
104,通过所述字编码层对所述语句样本进行编码,得到所述语句样本的第二字向量序列和第三字向量序列;
105,通过所述词编码层根据所述语句样本的第一字向量序列、第二字向量序列、第三字向量序列计算所述多个词语样本的词向量;
106,通过所述感知层根据所述多个词语样本的词向量确定所述多个词语样本的核心词向量和依存词向量;
107,通过所述仿射分类层根据任意两个词语样本的核心词向量和依存词向量对所述任意两个词语样本的依存关系进行分类;
108,根据所述任意两个词语样本的依存关系分类结果和所述任意两个词语样本在所述语句样本中的依存关系标签对所述分类模型进行训练,得到训练后的分类模型;
109,通过所述训练后的分类模型对所述目标语句进行词语依存关系分类。
Alternatively, when executed by the processor, the computer-readable instructions implement the functions of the modules in the above apparatus embodiment, for example modules 201-209 in FIG. 2:
an acquisition module 201, configured to acquire a sentence sample, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
a generation module 202, configured to generate a first character vector sequence of the sentence sample through the BERT layer;
a word segmentation module 203, configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
an encoding module 204, configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
a calculation module 205, configured to calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
a determination module 206, configured to determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
a first classification module 207, configured to classify the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
a training module 208, configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
a second classification module 209, configured to perform word dependency relationship classification on the target sentence through the trained classification model.
Embodiment 4
FIG. 3 is a schematic diagram of the computer device provided in Embodiment 4 of the present application. The computer device 30 includes a memory 301, a processor 302, and computer-readable instructions 303 stored in the memory 301 and executable on the processor 302, for example a dependency relationship classification program. When the processor 302 executes the computer-readable instructions 303, the steps of the above dependency relationship classification method embodiment are implemented, for example steps 101-109 shown in FIG. 1:
101, acquiring a sentence sample, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
102, generating a first character vector sequence of the sentence sample through the BERT layer;
103, segmenting the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
104, encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
105, calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
106, determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
107, classifying the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
108, training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
109, performing word dependency relationship classification on the target sentence through the trained classification model.
Alternatively, when executed by the processor, the computer-readable instructions implement the functions of the modules in the above apparatus embodiment, for example modules 201-209 in FIG. 2:
an acquisition module 201, configured to acquire a sentence sample, a target sentence, and a classification model, the classification model including a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
a generation module 202, configured to generate a first character vector sequence of the sentence sample through the BERT layer;
a word segmentation module 203, configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
an encoding module 204, configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
a calculation module 205, configured to calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
a determination module 206, configured to determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
a first classification module 207, configured to classify the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
a training module 208, configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
a second classification module 209, configured to perform word dependency relationship classification on the target sentence through the trained classification model.
Exemplarily, the computer-readable instructions 303 may be divided into one or more modules, which are stored in the memory 301 and executed by the processor 302 to complete this method. The one or more modules may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 303 in the computer device 30. For example, the computer-readable instructions 303 may be divided into the acquisition module 201, the generation module 202, the word segmentation module 203, the encoding module 204, the calculation module 205, the determination module 206, the first classification module 207, the training module 208, and the second classification module 209 in FIG. 2; for the specific functions of the modules, refer to Embodiment 2.
The computer device 30 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. Those skilled in the art will understand that the schematic diagram in FIG. 3 is merely an example of the computer device 30 and does not constitute a limitation on the computer device 30; the computer device 30 may include more or fewer components than shown, or combine certain components, or use different components, and may, for example, further include input/output devices, network access devices, buses, and the like.
The processor 302 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 302 may be any conventional processor; the processor 302 is the control center of the computer device 30 and connects the parts of the entire computer device 30 through various interfaces and lines.
The memory 301 may be used to store the computer-readable instructions 303, and the processor 302 implements the various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and calling the data stored in the memory 301. The memory 301 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to the use of the computer device 30, and the like. In addition, the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, a read-only memory (ROM), a random access memory (RAM), or another non-volatile/volatile storage device.
If the modules integrated in the computer device 30 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through computer-readable instructions; the computer-readable instructions may be stored in a computer-readable storage medium and, when executed by a processor, can implement the steps of the above method embodiments. The computer-readable instructions may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instructions, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Further, the computer-readable storage medium may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the blockchain node, and the like.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database, a chain of data blocks generated in association with one another using cryptographic methods; each data block contains information on a batch of network transactions, used to verify the validity of its information (anti-counterfeiting) and to generate the next block. The blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and the like.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the modules is only a division by logical function, and there may be other division manners in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of this application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The above integrated module may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
The above integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The above software functional module is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute part of the steps of the dependency relationship classification method described in the embodiments of this application.
It is obvious to those skilled in the art that this application is not limited to the details of the above exemplary embodiments, and that this application can be implemented in other specific forms without departing from the spirit or essential characteristics of this application. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-limiting; the scope of this application is defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalents of the claims are therefore intended to be embraced in this application. No reference sign in the claims should be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. Multiple modules or apparatuses stated in a system claim may also be implemented by one module or apparatus through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application and not to limit them; although this application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. A dependency relationship classification method, wherein the dependency relationship classification method comprises:
    acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
    generating a first character vector sequence of the sentence sample through the BERT layer;
    segmenting the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
    encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
    calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
    determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
    classifying the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
    training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
    performing word dependency relationship classification on the target sentence through the trained classification model.
  2. The dependency relationship classification method of claim 1, wherein encoding the sentence sample through the character encoding layer comprises:
    for each character sample in the sentence sample, acquiring position information and type information of the character sample;
    encoding the position information of the character sample through a first character encoding sub-layer of the character encoding layer to obtain a second character vector of the character sample;
    combining the second character vectors of the multiple character samples in the sentence sample in character order to obtain the second character vector sequence of the sentence sample;
    encoding the type information of the character sample through a second character encoding sub-layer of the character encoding layer to obtain a third character vector of the character sample;
    combining the third character vectors of the multiple character samples in the sentence sample in character order to obtain the third character vector sequence of the sentence sample.
  3. The dependency relationship classification method of claim 1, wherein calculating the word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample comprises:
    for each word sample of the multiple word samples, determining multiple target character samples that constitute the word sample;
    for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
    calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
  4. The dependency relationship classification method of claim 3, wherein generating the feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample comprises:
    concatenating the first character vector, the second character vector, and the third character vector of the target character sample to obtain the feature vector of the target character sample; or
    calculating a first mean vector of the first character vector, the second character vector, and the third character vector of the target character sample, and determining the first mean vector as the feature vector of the target character sample.
  5. The dependency relationship classification method of claim 3, wherein calculating the word vector of the word sample according to the feature vectors of the multiple target character samples comprises:
    calculating a second mean vector of the feature vectors of the multiple target character samples, and determining the second mean vector as the word vector of the word sample.
  6. The dependency relationship classification method of claim 1, wherein training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample comprises:
    determining multiple label weights according to the dependency relationship labels in the sentence sample;
    calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results of the any two word samples, the dependency relationship labels of the any two word samples in the sentence sample, and the multiple label weights;
    optimizing parameters of the classification model according to the loss value based on a back-propagation algorithm.
  7. The dependency relationship classification method of claim 6, wherein determining multiple label weights according to the dependency relationship labels in the sentence sample comprises:
    acquiring label types of the dependency relationship labels, the label types of the dependency relationship labels comprising a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
    acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
    determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
  8. A dependency relationship classification apparatus, wherein the dependency relationship classification apparatus comprises:
    an acquisition module, configured to acquire a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
    a generation module, configured to generate a first character vector sequence of the sentence sample through the BERT layer;
    a word segmentation module, configured to segment the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
    an encoding module, configured to encode the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
    a calculation module, configured to calculate word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
    a determination module, configured to determine core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
    a first classification module, configured to classify the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
    a training module, configured to train the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
    a second classification module, configured to perform word dependency relationship classification on the target sentence through the trained classification model.
  9. A computer device, wherein the computer device comprises a processor, the processor being configured to execute computer-readable instructions stored in a memory to implement the following steps:
    acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
    generating a first character vector sequence of the sentence sample through the BERT layer;
    segmenting the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
    encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
    calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
    determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
    classifying the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
    training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
    performing word dependency relationship classification on the target sentence through the trained classification model.
  10. The computer device of claim 9, wherein, when the processor executes the computer-readable instructions stored in the memory to implement encoding the sentence sample through the character encoding layer, the steps comprise:
    for each character sample in the sentence sample, acquiring position information and type information of the character sample;
    encoding the position information of the character sample through a first character encoding sub-layer of the character encoding layer to obtain a second character vector of the character sample;
    combining the second character vectors of the multiple character samples in the sentence sample in character order to obtain the second character vector sequence of the sentence sample;
    encoding the type information of the character sample through a second character encoding sub-layer of the character encoding layer to obtain a third character vector of the character sample;
    combining the third character vectors of the multiple character samples in the sentence sample in character order to obtain the third character vector sequence of the sentence sample.
  11. The computer device of claim 9, wherein, when the processor executes the computer-readable instructions stored in the memory to implement calculating the word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample, the steps comprise:
    for each word sample of the multiple word samples, determining multiple target character samples that constitute the word sample;
    for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
    calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
  12. The computer device of claim 11, wherein, when the processor executes the computer-readable instructions stored in the memory to implement generating the feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample, the steps comprise:
    concatenating the first character vector, the second character vector, and the third character vector of the target character sample to obtain the feature vector of the target character sample; or
    calculating a first mean vector of the first character vector, the second character vector, and the third character vector of the target character sample, and determining the first mean vector as the feature vector of the target character sample.
  13. The computer device of claim 11, wherein, when the processor executes the computer-readable instructions stored in the memory to implement calculating the word vector of the word sample according to the feature vectors of the multiple target character samples, the steps comprise:
    calculating a second mean vector of the feature vectors of the multiple target character samples, and determining the second mean vector as the word vector of the word sample.
  14. The computer device of claim 9, wherein, when the processor executes the computer-readable instructions stored in the memory to implement training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, the steps comprise:
    determining multiple label weights according to the dependency relationship labels in the sentence sample;
    calculating a loss value based on a cross-entropy loss algorithm according to the dependency relationship classification results of the any two word samples, the dependency relationship labels of the any two word samples in the sentence sample, and the multiple label weights;
    optimizing parameters of the classification model according to the loss value based on a back-propagation algorithm.
  15. The computer device of claim 14, wherein, when the processor executes the computer-readable instructions stored in the memory to implement determining multiple label weights according to the dependency relationship labels in the sentence sample, the steps comprise:
    acquiring label types of the dependency relationship labels, the label types of the dependency relationship labels comprising a first label type indicating that no dependency relationship exists and a second label type indicating that a dependency relationship exists;
    acquiring a first label weight and a second label weight, the first label weight being smaller than the second label weight;
    determining the first label weight as the label weight of dependency relationship labels of the first label type, and determining the second label weight as the label weight of dependency relationship labels of the second label type.
  16. A computer-readable storage medium storing computer-readable instructions, wherein, when executed by a processor, the computer-readable instructions implement the following steps:
    acquiring a sentence sample, a target sentence, and a classification model, the classification model comprising a BERT layer, a character encoding layer, a word segmentation layer, a word encoding layer, a perception layer, and an affine classification layer;
    generating a first character vector sequence of the sentence sample through the BERT layer;
    segmenting the sentence sample through the word segmentation layer to obtain multiple word samples of the sentence sample;
    encoding the sentence sample through the character encoding layer to obtain a second character vector sequence and a third character vector sequence of the sentence sample;
    calculating word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample;
    determining core word vectors and dependent word vectors of the multiple word samples through the perception layer according to the word vectors of the multiple word samples;
    classifying the dependency relationship between any two word samples through the affine classification layer according to the core word vectors and the dependent word vectors of the any two word samples;
    training the classification model according to the dependency relationship classification results of the any two word samples and the dependency relationship labels of the any two word samples in the sentence sample, to obtain a trained classification model;
    performing word dependency relationship classification on the target sentence through the trained classification model.
  17. The storage medium of claim 16, wherein, when the computer-readable instructions are executed by the processor to implement encoding the sentence sample through the character encoding layer, the steps comprise:
    for each character sample in the sentence sample, acquiring position information and type information of the character sample;
    encoding the position information of the character sample through a first character encoding sub-layer of the character encoding layer to obtain a second character vector of the character sample;
    combining the second character vectors of the multiple character samples in the sentence sample in character order to obtain the second character vector sequence of the sentence sample;
    encoding the type information of the character sample through a second character encoding sub-layer of the character encoding layer to obtain a third character vector of the character sample;
    combining the third character vectors of the multiple character samples in the sentence sample in character order to obtain the third character vector sequence of the sentence sample.
  18. The storage medium of claim 16, wherein, when the computer-readable instructions are executed by the processor to implement calculating the word vectors of the multiple word samples through the word encoding layer according to the first character vector sequence, the second character vector sequence, and the third character vector sequence of the sentence sample, the steps comprise:
    for each word sample of the multiple word samples, determining multiple target character samples that constitute the word sample;
    for each target character sample, generating a feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample;
    calculating the word vector of the word sample according to the feature vectors of the multiple target character samples.
  19. The storage medium of claim 18, wherein, when the computer-readable instructions are executed by the processor to implement generating the feature vector of the target character sample according to the first character vector, the second character vector, and the third character vector of the target character sample, the steps comprise:
    concatenating the first character vector, the second character vector, and the third character vector of the target character sample to obtain the feature vector of the target character sample; or
    calculating a first mean vector of the first character vector, the second character vector, and the third character vector of the target character sample, and determining the first mean vector as the feature vector of the target character sample.
  20. The storage medium of claim 18, wherein, when the computer-readable instructions are executed by the processor to implement calculating the word vector of the word sample according to the feature vectors of the multiple target character samples, the steps comprise:
    calculating a second mean vector of the feature vectors of the multiple target character samples, and determining the second mean vector as the word vector of the word sample.
PCT/CN2020/122917 2020-07-30 2020-10-22 Dependency relationship classification method and related device WO2021147404A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010753501.2 2020-07-30
CN202010753501.2A CN112036439B (zh) 2020-07-30 2020-07-30 Dependency relationship classification method and related device

Publications (1)

Publication Number Publication Date
WO2021147404A1 true WO2021147404A1 (zh) 2021-07-29

Family

ID=73583628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122917 WO2021147404A1 (zh) 2020-07-30 2020-10-22 Dependency relationship classification method and related device

Country Status (2)

Country Link
CN (1) CN112036439B (zh)
WO (1) WO2021147404A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705216A (zh) * 2021-08-31 2021-11-26 新华三大数据技术有限公司 Dependency detection method, apparatus and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613311A (zh) * 2021-01-07 2021-04-06 北京捷通华声科技股份有限公司 Information processing method and apparatus


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108628974B (zh) * 2018-04-25 2023-04-18 平安科技(深圳)有限公司 Public opinion information classification method and apparatus, computer device and storage medium
CN109815333B (zh) * 2019-01-14 2021-05-28 金蝶软件(中国)有限公司 Information acquisition method and apparatus, computer device and storage medium
CN111274790B (zh) * 2020-02-13 2023-05-16 东南大学 Document-level event embedding method and apparatus based on syntactic dependency graph
CN111460812B (zh) * 2020-03-02 2024-05-31 平安科技(深圳)有限公司 Sentence sentiment classification method and related device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8914279B1 (en) * 2011-09-23 2014-12-16 Google Inc. Efficient parsing with structured prediction cascades
CN105335348A (zh) * 2014-08-07 2016-02-17 阿里巴巴集团控股有限公司 Dependency syntax analysis method and apparatus based on target sentence, and server
CN106250367A (zh) * 2016-07-27 2016-12-21 昆明理工大学 Method for constructing a Vietnamese dependency treebank based on an improved Nivre algorithm
CN110705253A (zh) * 2019-08-29 2020-01-17 昆明理工大学 Burmese dependency syntax analysis method and apparatus based on transfer learning
CN111221976A (zh) * 2019-11-14 2020-06-02 北京京航计算通讯研究所 Knowledge graph construction method based on BERT algorithm model
CN111414749A (zh) * 2020-03-18 2020-07-14 哈尔滨理工大学 Social text dependency syntax analysis system based on deep neural network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705216A (zh) * 2021-08-31 2021-11-26 新华三大数据技术有限公司 Dependency detection method, apparatus and device
CN113705216B (zh) * 2021-08-31 2024-04-19 新华三大数据技术有限公司 Dependency detection method, apparatus and device

Also Published As

Publication number Publication date
CN112036439B (zh) 2023-09-01
CN112036439A (zh) 2020-12-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915383

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915383

Country of ref document: EP

Kind code of ref document: A1