CN109271522A - Comment emotion classification method and system based on deep hybrid model transfer learning - Google Patents

Comment emotion classification method and system based on deep hybrid model transfer learning

Info

Publication number
CN109271522A
CN109271522A
Authority
CN
China
Prior art keywords
comment
hybrid model
transfer learning
training
deep hybrid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811383793.4A
Other languages
Chinese (zh)
Other versions
CN109271522B (en)
Inventor
代明军 (Dai Mingjun)
谢立 (Xie Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201811383793.4A priority Critical patent/CN109271522B/en
Publication of CN109271522A publication Critical patent/CN109271522A/en
Application granted granted Critical
Publication of CN109271522B publication Critical patent/CN109271522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201: Market modelling; Market analysis; Collecting market data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a comment emotion classification method and system based on deep hybrid model transfer learning. The classification method comprises the following steps: step S1, collecting product reviews and preprocessing a source-domain data sample set of product reviews; step S2, mapping the preprocessed data to word vectors; step S3, pre-training a deep hybrid model on the source-domain data sample set of product reviews; step S4, fine-tuning the deep hybrid model on a target-domain data sample set of product reviews; step S5, performing sentiment classification on the product reviews of the target domain. The training of the present invention is fast and easy: only a few training epochs are needed to reach high classification accuracy, and good classification results are still obtained when training on noisy or small data sets, so dependence on the data set is small and robustness is good. The present invention also effectively improves transferability, achieving the goal of higher classification accuracy after transfer learning.

Description

Comment emotion classification method and system based on deep hybrid model transfer learning
Technical field
The present invention relates to review sentiment classification methods, and more particularly to a comment emotion classification method based on deep hybrid model transfer learning, as well as to a comment emotion classification system that uses the comment emotion classification method based on deep hybrid model transfer learning.
Background art
In the prior art, sentiment classification of product reviews mainly relies on the following two methods. The first is a clustering-based cross-domain transfer learning method for product-review sentiment classification. Its principle is as follows: domain-independent words shared by the source and target domains serve as a bridge, and spectral clustering, guided by the similarity between source-domain and target-domain words, aligns the domain-specific words of the different domains into unified clusters. This reduces the difference between the domain-specific words of the source-domain data set and those of the target-domain data set. As the difference between the domain-specific words shrinks, a classifier trained on the source domain classifies better in the target domain. Because this method relies on the similarity of source-domain and target-domain words, it only migrates well when the data features of source domain A are highly similar to those of target domain B, for example clothes and quilts, or mobile phones and tablets. Since it is a migration strategy operating purely at the data level, it places high demands on the natural similarity of the data features of the source and target domains; it is unsuitable when the two domains differ greatly (such as hotels and clothes), its transferability is limited, and its migration effect is poor.
The second is a review sentiment classification method based on multi-source instance transfer. Its principle: using multi-source learning, samples are migrated from several different source domains, and combining multiple source domains improves the stability and effectiveness of transfer learning. First, larger initial weights are assigned to the target-domain samples, and the data are resampled at each step to mitigate reference imbalance. A dynamic mechanism is then added to improve the TrAdaBoost transfer algorithm, slowing the decay of the weights of source-domain samples that are weakly correlated with the target domain, so that source-domain and target-domain samples jointly help learn the target-domain task. By this principle, the knowledge in all source domains that has a positive transfer effect is fully exploited, improving the model's classification in the target domain. In essence, the multi-source instance transfer method mixes the existing multi-source data sets with part of the target domain for joint training, and uses iterative optimization to increase the weights of all source-domain samples with a positive transfer effect on target-domain classification, thereby reducing dependence on the target-domain data set. It follows that the classification performance of the TrAdaBoost algorithm depends to a significant extent on the original similarity between the source and target data and on the quality and quantity of the source-domain data sets. The method is also hard to train: if the samples at the start of training (especially the target-domain data) contain much noise and the number of iterations is poorly controlled, classifier accuracy drops sharply. In practice, the robustness of this transfer method is poor and its accuracy generally low.
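For orientation, the following is a schematic sketch (our own, not part of the original disclosure) of the standard TrAdaBoost weight update described above; the function name and the simplifications are illustrative, and the dynamic mechanism mentioned in the text is not shown.

```python
import numpy as np

def tradaboost_round(w_src, w_tgt, err_src, err_tgt, t_rounds):
    """One TrAdaBoost weight update (Dai et al., 2007), schematically.
    err_src / err_tgt are 0/1 arrays marking samples the current learner
    misclassified. Misclassified source samples are assumed less relevant
    to the target task, so their weights shrink; misclassified target
    samples are boosted as in AdaBoost."""
    n = len(w_src)
    eps = np.sum(w_tgt * err_tgt) / np.sum(w_tgt)    # weighted target-domain error
    eps = np.clip(eps, 1e-10, 0.499)                 # keep beta_t well defined
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / t_rounds))  # fixed source decay
    beta_t = eps / (1.0 - eps)                       # AdaBoost-style target factor
    w_src = w_src * beta ** err_src                  # shrink bad source samples
    w_tgt = w_tgt * beta_t ** (-err_tgt)             # boost bad target samples
    return w_src, w_tgt
```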
Summary of the invention
The technical problem to be solved by the present invention is to provide a comment emotion classification method based on deep hybrid model transfer learning that reduces the model's dependence on the data set, improves its transferability, and further improves classification accuracy after transfer learning, and further to provide a comment emotion classification system that uses the comment emotion classification method based on deep hybrid model transfer learning.
To this end, the present invention provides a comment emotion classification method based on deep hybrid model transfer learning, comprising the following steps:
Step S1: collect product reviews and preprocess a source-domain data sample set of product reviews;
Step S2: map the preprocessed data to word vectors;
Step S3: pre-train the deep hybrid model on the source-domain data sample set of product reviews;
Step S4: fine-tune the deep hybrid model on a target-domain data sample set of product reviews;
Step S5: perform sentiment classification on the product reviews of the target domain;
wherein step S3 comprises the following sub-steps:
Step S301: perform a convolution operation on the source-domain word-vector data obtained in step S2;
Step S302: input the nodes after the convolution operation into a gated recurrent unit (GRU) network;
Step S303: input the nodes of the GRU network into the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S304: through the activation function, set values in the activated two-dimensional vector that exceed a preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample;
Step S305: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S306: backpropagate using the batch gradient descent algorithm and iterate repeatedly, completing the pre-training of the deep hybrid model.
A further improvement of the present invention is that, in step S304, the softmax activation function
$$\sigma(Z)_j = \frac{e^{Z_j}}{\sum_{k=1}^{K} e^{Z_k}}, \quad j = 1, \dots, K,$$
is used as the output-layer activation function to compute the value σ(Z)_j, where Z_j is the j-th element of the two-dimensional vector Z output by the fully connected network, j is the element index of Z, and K = 2.
A further improvement of the present invention is that, in step S305, the loss L between the sentiment polarity label y and the predicted label ŷ is computed by the cross-entropy cost function
$$L = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \ln \hat{y}_i + (1 - y_i)\ln(1 - \hat{y}_i)\right],$$
where n is the batch size.
A further improvement of the present invention is that step S4 comprises the following sub-steps:
Step S401: repeat step S1 and step S2 for a preset quantity of target-domain labelled data P, while adding a pooling layer after the convolutional layer of the deep hybrid model;
Step S402: import the weights W_S obtained by the pre-training of step S3 into the deep hybrid model;
Step S403: fine-tune with the preprocessed target-domain data P_S obtained through steps S1 and S2.
A further improvement of the present invention is that, in step S403, the weight parameters W_G of the GRU network are frozen and backpropagation then iterates j times; the iteratively updated weight parameters of the convolutional layer, the attention model, the (frozen) GRU network and the fully connected layer are then combined to obtain the overall weight parameters W_Y of the deep hybrid model after fine-tuning on the target domain, completing the fine-tuning.
A further improvement of the present invention is that step S403 comprises the following sub-steps:
Step S4031: perform a convolution operation on the input target-domain data P_S;
Step S4032: pass the output of the convolutional layer through the pooling layer;
Step S4033: input the nodes of the pooling layer into the GRU network;
Step S4034: pass the output of the GRU network through the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S4035: through the activation function, set values in the activated two-dimensional vector that exceed the preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample;
Step S4036: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S4037: taking the reduction of the loss L as the optimization direction, backpropagate using the batch gradient descent algorithm and iterate repeatedly, completing the fine-tuning of the deep hybrid model.
A further improvement of the present invention is that, in the backpropagation of step S4037, the weight parameters W_G of the GRU network are frozen.
A further improvement of the present invention is that, in step S3, the source-domain data sample set skips the pooling layer after being input to the convolutional layer during pre-training and is input directly into the GRU network.
The present invention also provides a comment emotion classification system based on deep hybrid model transfer learning, which uses the comment emotion classification method based on deep hybrid model transfer learning described above and comprises, in sequence, a convolutional layer, a pooling layer, a GRU network, an attention mechanism model, a random-dropout layer, a fully connected layer and an output layer.
Compared with the prior art, the beneficial effects of the present invention are as follows. First, the powerful feature-recognition capability of the convolutional structure captures low-level features, and the weight transformation matrix of the attention mechanism model re-weights the output that has passed through the GRU network, strengthening the model's ability to identify the keywords in a review. Second, to address the convolutional model's insensitivity to the word-order structure of text, the GRU network reinforces the model's ability to learn word-order structure and to bind context, improving the classification of longer product-review texts after transfer learning. Moreover, a small-probability dropout layer is connected ahead of the fully connected layer, so that a small fraction of nodes are randomly disabled during training (their outputs set to zero), preventing overfitting and strengthening the generalization ability of the model.
The present invention trains quickly and with low difficulty: only a few training epochs are needed to reach high classification accuracy, and good classification results are still obtained when training on noisy or small data sets, so dependence on the data set is small and robustness is good. On this basis, the present invention also effectively improves the transferability of the model, achieving the goal of higher classification accuracy after transfer learning.
Description of the drawings
Fig. 1 is a schematic workflow diagram of an embodiment of the present invention;
Fig. 2 is the overall flowchart of an embodiment of the present invention;
Fig. 3 is a schematic diagram of the principle of the transfer learning method of an embodiment of the present invention;
Fig. 4 is a structural diagram of the system model of an embodiment of the present invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
This embodiment applies to platforms that analyze large numbers of product reviews of different types, such as e-commerce platforms. The application scenario is as follows: the classification performance of a sentiment classification model based on supervised learning depends on the quantity and quality of its labelled data, so a sufficiently large labelled data set must be provided for each product domain for the model to learn and fit. This embodiment uses a transfer learning strategy combined with a deep hybrid model to strengthen the generalization ability of the model and reduce its dependence on the data set, effectively remedying the shortcomings of the prior art such as limited transferability, strong dependence on the data set, poor transfer learning effect and high training difficulty.
To this end, as shown in Fig. 1 to Fig. 3, this embodiment provides a comment emotion classification method based on deep hybrid model transfer learning, comprising the following steps:
Step S1: collect product reviews and preprocess the source-domain data sample set of product reviews;
Step S2: map the preprocessed data to word vectors;
Step S3: pre-train the deep hybrid model on the source-domain data sample set of product reviews;
Step S4: fine-tune the deep hybrid model on the target-domain data sample set of product reviews;
Step S5: perform sentiment classification on the product reviews of the target domain.
In step S1 of this embodiment, the collected labelled source-domain data sample set T is segmented with a word-segmentation tool, and symbols and predetermined stop words are removed from the source-domain data sample set T through a loop function, yielding preprocessed text data. The source domain is also called the source field, and the target domain the target field. The stop-word list can be preset and customized according to the user's needs.
Specifically, step S1 preferably segments the collected labelled source-domain data set sample T with a Python word-segmentation tool, and removes separator symbols, the specified stop words and the like through a loop function of the Python tool, obtaining the preprocessed data T_pre.
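As a minimal sketch of this preprocessing (not part of the original disclosure): the patent names only a Python word-segmentation tool, so the choice of jieba, the regular expression and the tiny stop-word list below are illustrative assumptions.

```python
import re
import jieba  # assumed segmenter; the patent names only "a Python word-segmentation tool"

STOP_WORDS = {"的", "了", "是"}  # user-defined stop-word list (illustrative)

def preprocess(reviews):
    """Step S1 sketch: segment each labelled review, strip symbols and stop words."""
    cleaned = []
    for text in reviews:
        text = re.sub(r"[^\w\u4e00-\u9fff]+", " ", text)  # remove punctuation/symbols
        tokens = [w for w in jieba.lcut(text) if w.strip() and w not in STOP_WORDS]
        cleaned.append(tokens)
    return cleaned

T_pre = preprocess(["这件衣服质量很好", "手机电池太差了"])
```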
Step S2 of this embodiment comprises the following sub-steps:
Step S201: convert the preprocessed text data obtained in step S1 into word vectors;
Step S202: set the mapping dimension of the word vectors and the maximum length of an input review, obtaining the distributed representation of the training samples in the source domain;
Step S203: attach the sentiment polarity label corresponding to each review sample, obtaining the input data.
In step S201 of this embodiment, the preprocessed text data T_pre obtained in step S1 is converted to word vectors through a word-vector model built on it. In step S202, the mapping dimension of the word vectors is set to l and the maximum length of an input review to h, giving T_S' = [T_s1, T_s2, ..., T_sh]^T with T_S' ∈ R^(h×l), the distributed representation of a training sample in the source domain. In step S203, y denotes the sentiment polarity label of each review sample: setting y = [1, 0] marks a review sample of positive polarity and y = [0, 1] one of negative polarity, yielding the input data [T_S', y] of the deep hybrid model.
More specifically, step S2 of this embodiment obtains the word vectors word2vec(T_pre) = T_s from the text data T_pre of step S1 through the Python tool gensim (a tool that converts words to vectors based on the word2vec model). The mapping dimension of the word vectors is then set to c (equal to l above) and the input length to h (understood as the maximum sentence length of each input review; h can be customized and adjusted according to actual needs), giving T_S = [T_s1, T_s2, ..., T_sh]^T with T_S ∈ R^(h×c), the distributed representation of the training sample in the source domain; T_S ∈ R^(h×c) specifies the size of the T_s data, R^(h×c) denoting the set of all h × c matrices. Meanwhile y denotes the sentiment polarity label of each sample (y = [1, 0] or y = [0, 1], the former marking positive polarity and the latter negative polarity), yielding the model input [T_S, y]. Here c is the mapping dimension of the word vectors.
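Continuing the sketch above, step S2 can be illustrated with gensim's word2vec implementation (gensim and word2vec are named in the text; the window and min_count settings and the zero-padding are our assumptions):

```python
import numpy as np
from gensim.models import Word2Vec

c, h = 128, 60  # word-vector dimension and maximum review length from the patent

w2v = Word2Vec(sentences=T_pre, vector_size=c, window=5, min_count=1)

def to_matrix(tokens):
    """Step S2 sketch: map one segmented review to an h x c matrix T_S,
    truncating long reviews and zero-padding short ones."""
    vecs = [w2v.wv[t] for t in tokens[:h] if t in w2v.wv]
    mat = np.zeros((h, c), dtype=np.float32)
    if vecs:
        mat[:len(vecs)] = np.stack(vecs)
    return mat

# polarity labels: y = [1, 0] for a positive review, y = [0, 1] for a negative one
X = np.stack([to_matrix(t) for t in T_pre])
y = np.array([[1, 0], [0, 1]], dtype=np.float32)
```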
Step S3 of this embodiment comprises the following sub-steps:
Step S301: perform a convolution operation on the source-domain word-vector data obtained in step S2;
Step S302: input the nodes after the convolution operation into a GRU network;
Step S303: input the nodes of the GRU network into the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S304: through the activation function, set values in the activated two-dimensional vector that exceed a preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample. The preset threshold is a preset decision value for the two-dimensional vector and can be adjusted and set according to actual needs.
Step S305: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S306: backpropagate and iterate repeatedly, completing the pre-training of the deep hybrid model; preferably, the backpropagation uses the batch gradient descent algorithm.
In step S305, the loss L between the sentiment polarity label y and the predicted label ŷ is computed by the cross-entropy cost function
$$L = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \ln \hat{y}_i + (1 - y_i)\ln(1 - \hat{y}_i)\right].$$
More specifically, step S3 of this embodiment is the pre-training step and proceeds as follows. In step S301, after the source-domain sample data [T_S, y] enter the model through the input layer, they are convolved by a convolution module whose kernel size is a × b × c. Here a, b and c are the size parameters of the convolution kernel, understood as the height, length and width of a cuboid; note that the kernel width is numerically identical to the word-vector mapping dimension c in T_S ∈ R^(h×c). In the usual terminology, c and b are the length and width of the kernel, and a is the number of kernels (also called the number of kernel channels).
In step S302 of this embodiment, the post-convolution node output T_C is input into a GRU network (gated recurrent unit network) of dimension d. In step S303, the node output T_G of the GRU network is passed through the weighting transformation matrix of the attention mechanism to obtain the attention-mechanism node output T_A; at this point T_A is a d-dimensional vector. T_A is then passed through a subsequent d × 2 fully connected network and becomes a two-dimensional vector T_F, the output vector of the fully connected network.
In step S304 of this embodiment, the softmax function $\sigma(Z)_j = e^{Z_j} / \sum_{k=1}^{K} e^{Z_k}$ is used as the output-layer activation function, the larger value in the activated two-dimensional vector is set to 1 and the smaller to 0, the criterion for large and small being the preset threshold, a numerical threshold that the user sets and adjusts according to actual requirements; this yields the predicted label ŷ of the corresponding sample.
More specifically, the input of the softmax function is the output two-dimensional vector T_F of the fully connected network from the preceding paragraph: Z in the formula is T_F, j = 1, ..., K indexes the elements of Z, and the constant K is always 2 in this embodiment. Supposing for example that the two-dimensional vector is Z = [1, 2], then Z_j is the j-th element, i.e. Z_1 = 1 and Z_2 = 2, and passing Z through the softmax function yields the result described by the formula.
The result of passing the output two-dimensional vector Z = [1, 2] of the fully connected network through softmax is therefore [0.2689, 0.7311]. Preferably, this embodiment sets the preset threshold of step S304 to 0.5: values above it are set to 1 and values below it to 0, giving the prediction [0, 1]. The role of the softmax function is thus normalization: it converts the elements of the original vector into decimals between 0 and 1 that reflect probability, which is also convenient for further processing.
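The arithmetic can be checked in a few lines of Python (a sketch; the max-subtraction is a standard numerical-stability trick, not part of the patent):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([1.0, 2.0]))
print(p)                        # [0.26894142 0.73105858], as in the text
print((p > 0.5).astype(int))    # threshold 0.5 -> prediction [0 1]
```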
In step S305 of this embodiment, the cross-entropy cost function
$$L = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \ln \hat{y}_i + (1 - y_i)\ln(1 - \hat{y}_i)\right]$$
is used to measure the error between the true labels y and the predictions ŷ over all samples.
In step S306 of this embodiment, with the reduction of the loss L as the goal, the batch gradient descent (BGD) algorithm with the batch-size parameter set to n backpropagates and iterates according to
$$W^{+} = W^{-} - \eta \cdot \delta \cdot \beta, \qquad \frac{\partial L}{\partial W} = \frac{\partial L}{\partial \mathrm{out}} \cdot \frac{\partial\, \mathrm{out}}{\partial W} = \delta \cdot \beta,$$
where L is the loss value, i.e. the error function; W is a weight parameter of the deep hybrid model; out denotes the output layer, i.e. the softmax function; ∂L/∂W is the partial derivative of the loss L with respect to the weight parameter W; δ = ∂L/∂out is the result of differentiating through softmax; β = ∂out/∂W stands for the layer-by-layer weight derivatives of the upper layers (by the chain rule, taking partial derivatives with respect to the weights proceeds backwards layer by layer from the output layer; the notation is simplified here); W⁺ and W⁻ are the weight parameters at the later and the earlier moment respectively, indicating that W is being iterated; and η is the learning rate. In fact, the essence of prediction by the model is to operate on the input values with the model's weight parameters to obtain a result, and backpropagation is the modification and optimization of those weight parameters so that they better fit our sentiment classification task. The pre-training of step S3 takes the loss L as the optimization target: through reverse gradient differentiation (this embodiment preferably uses the BGD algorithm), the model's weight parameters are updated with learning rate η in the direction that reduces the loss L, achieving the effect of fitting the training set, and the iterations repeat to lower the loss value and improve the accuracy.
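As a toy illustration of this update rule on a single softmax layer (invented placeholder data; not the patent's full model):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))            # one batch of n = 256 inputs
y = np.eye(2)[rng.integers(0, 2, 256)]   # one-hot polarity labels

W = rng.normal(scale=0.1, size=(8, 2))   # weight parameter W^-
eta = 1e-3                               # learning rate

Z = X @ W
P = np.exp(Z - Z.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)        # softmax output ("out")

delta = (P - y) / len(X)                 # dL/dZ for softmax + cross-entropy
grad = X.T @ delta                       # beta-part of the chain rule: dL/dW
W = W - eta * grad                       # W^+ = W^- - eta * dL/dW
```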
Suppose all weight parameters of the deep hybrid model in its original state are W_0. To simplify the notation, the deep hybrid model's weight parameters are modularized: W_C, W_G, W_A and W_F denote the initial weight parameters belonging to the convolutional layer, the GRU network, the attention model and the fully connected layer respectively. After the deep hybrid model has iterated m times,
$$W_X = W_C^{m} + W_G^{m} + W_A^{m} + W_F^{m},$$
giving the full set of model weight parameters W_X fixed on the source domain, where W_C^m is the weight parameter of the convolutional layer after m iterations, W_G^m that of the GRU network, W_A^m that of the attention model and W_F^m that of the fully connected layer, and m is a natural number.
Step S3 of this embodiment is thus the pre-training process on the source domain: the model is trained with the comparatively plentiful source-domain data set, yielding the model's weight parameters on the source domain.
It should be noted that the model of this embodiment does not use a pooling layer after the convolutional layer during pre-training; that is, in step S3 the source-domain data sample set skips the pooling layer after being input to the convolutional layer and goes directly into the GRU network. The reason for this arrangement is that, during pre-training, the de-redundancy effect of the pooling layer would make the model lose the word-order structure information obtained in the convolutional layer, harming the pre-training of the downstream GRU network, while also reducing the feature information with a positive transfer effect that the model acquires in the source domain; therefore no pooling layer is used in source-domain pre-training. GRU stands for gated recurrent unit, i.e. the gated recurrent unit network.
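Putting the layers together, a minimal sketch of the deep hybrid model follows. The patent names no software framework, so PyTorch, the ReLU nonlinearity, the padding, the pooling width and the dropout probability are assumptions; the layer order, the dimensions (c = 128, 128 kernels of height 5, GRU dimension d = 256, a d × 2 fully connected layer) and the use_pool switch (off for source-domain pre-training, on for target-domain fine-tuning) follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepHybrid(nn.Module):
    """Minimal sketch of the deep hybrid model, under the assumptions above."""

    def __init__(self, c=128, d=256, use_pool=False, p_drop=0.1):
        super().__init__()
        self.use_pool = use_pool            # False in pre-training (step S3)
        self.conv = nn.Conv1d(c, 128, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)         # added for fine-tuning (step S4)
        self.gru = nn.GRU(128, d, batch_first=True)
        self.att = nn.Linear(d, 1)          # attention weighting transformation
        self.drop = nn.Dropout(p_drop)      # small-probability dropout layer
        self.fc = nn.Linear(d, 2)           # d x 2 fully connected "classifier"

    def forward(self, x):                   # x: (batch, h, c) word-vector matrices
        z = F.relu(self.conv(x.transpose(1, 2)))   # T_C: (batch, 128, steps)
        if self.use_pool:
            z = self.pool(z)                # skipped in source-domain pre-training
        g, _ = self.gru(z.transpose(1, 2))  # T_G: (batch, steps, d)
        a = torch.softmax(self.att(g), dim=1)      # attention weights per step
        v = (a * g).sum(dim=1)              # T_A: weighted d-dimensional vector
        return self.fc(self.drop(v))        # T_F: two-dimensional logits
```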
Step S4 of this embodiment comprises the following sub-steps:
Step S401: repeat step S1 and step S2 for the preset quantity of target-domain labelled data P, while adding a pooling layer after the convolutional layer of the deep hybrid model;
Step S402: import the deep hybrid model weights W_X (written W_S in the claims) obtained after the pre-training of step S3 into the deep hybrid model;
Step S403: fine-tune with the preprocessed target-domain data P_S obtained through steps S1 and S2. P_S is the data obtained after the target-domain labelled data set P has passed through data preprocessing (which includes word segmentation, word mapping, removal of stop words, removal of symbols, etc.). The reason for proceeding this way is that the raw target-domain labelled data set P cannot be used directly; its role is to be fine-tuned together with the weight-parameter model obtained by pre-training. The aim is to combine the target-domain labelled data set P with the specific fine-tuning steps to modify the weight parameters of the pre-trained model, so that the model with modified weights applies better to the target domain. In this way a model that trains well on a source domain with a larger data set can be migrated to a target domain with only a small data set while still obtaining a good classification result.
In step S403, the weight parameters W_G^m of the GRU network are frozen; backpropagation then iterates j times, and the iteratively updated weight parameters of the convolutional layer, the attention model, the GRU network (unchanged, being frozen) and the fully connected layer are combined to obtain the overall weight parameters W_Y of the deep hybrid model after fine-tuning on the target domain, completing the fine-tuning.
That is, step S4 of this embodiment is the fine-tuning step and proceeds as follows. In step S401, steps S1 and S2 are repeated for the preset (small) quantity of target-domain labelled data P, a pooling layer is added to the model as shown in Fig. 4, and the deep hybrid weights W_X obtained by the pre-training of step S3 are imported into the deep hybrid model. The model is then fine-tuned with the preprocessed data P_S; the fine-tuning method is similar to step S3, except that the weight parameters of the GRU network are frozen, i.e. not reversely updated. Supposing the deep hybrid model is reversely iterated a further j times during fine-tuning, then
$$W_G^{j} = W_G^{m} \quad \text{and} \quad W_Y = W_C^{j} + W_G^{j} + W_A^{j} + W_F^{j},$$
giving the overall weight parameters W_Y of the deep hybrid model after target-domain fine-tuning, where W_C^j is the weight parameter of the convolutional layer after the j iterations, W_G^j that of the (frozen) GRU network, W_A^j that of the attention model and W_F^j that of the fully connected layer, and j is a natural number.
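Continuing the sketch above, the fine-tuning of step S4 might look as follows; the optimizer choice and the placeholder tensors x_target and y_target (class indices) are assumptions, while the imported weights, the frozen GRU, the added pooling layer, η = 1e-3 and j = 15 follow the text.

```python
import torch
import torch.nn.functional as F

pretrained = DeepHybrid(use_pool=False)
# ... pre-train `pretrained` on the source domain for m = 25 iterations ...

model = DeepHybrid(use_pool=True)               # pooling layer added for fine-tuning
model.load_state_dict(pretrained.state_dict())  # import W_X (pooling has no weights)

for p in model.gru.parameters():                # freeze W_G: no reverse update
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

x_target = torch.randn(256, 60, 128)            # placeholder target-domain batch
y_target = torch.randint(0, 2, (256,))          # placeholder polarity class indices

for _ in range(15):                             # j = 15 fine-tuning iterations
    loss = F.cross_entropy(model(x_target), y_target)  # cross-entropy cost L
    optimizer.zero_grad()
    loss.backward()                             # backpropagation; GRU untouched
    optimizer.step()
```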
The basic fine-tuning procedure of step S403 resembles the pre-training of step S3, except that a pooling layer is added and the training data change from the source-domain data set to the target-domain data set. More specifically, step S403 comprises the following sub-steps:
Step S4031: perform a convolution operation on the input target-domain data P_S;
Step S4032: pass the output of the convolutional layer through the pooling layer; the pooling layer can be regarded as a step that simply extracts the more important data and discards the less informative data, and it contains no weight parameters;
Step S4033: input the nodes of the pooling layer into the GRU network;
Step S4034: pass the output of the GRU network through the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S4035: through the activation function, set values in the activated two-dimensional vector that exceed the preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample;
Step S4036: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S4037: taking the reduction of the loss L as the optimization direction, backpropagate using the batch gradient descent algorithm and iterate repeatedly, completing the fine-tuning of the deep hybrid model. Preferably, in the backpropagation of step S4037 the weight parameters W_G of the GRU network are frozen.
That is, essentially as in step S306, step S4037 also takes the reduction of the loss L as the optimization direction and modifies the weight parameters of the deep hybrid model by backpropagation, but the weight parameters W_G of the GRU network are frozen, i.e. not reversely updated; this completes the fine-tuning of the deep hybrid model.
In step S5 of this embodiment, the overall parameters of the weighted model obtained in step S4 are tuned; tuning the hyperparameters after transfer learning helps improve classification in the target domain, and the sentiment classification task of the target domain is then carried out.
The preferred values of all hyperparameters used in the above steps are: word-vector mapping dimension c = 128; maximum input length h = 60; convolution kernel size 128 × 5 × 128; GRU output dimension d = 256; batch-size parameter n = 256; learning rate η = 1e-3; pre-training iterations m = 25; fine-tuning iterations j = 15. These are of course only preferred values and can be adjusted and set according to actual needs in practical applications.
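For reference, the same preferred values gathered into a plain Python mapping (the key names are illustrative):

```python
HPARAMS = {
    "c": 128,                      # word-vector mapping dimension
    "h": 60,                       # maximum input review length
    "conv_kernel": (128, 5, 128),  # number x height x width of kernels
    "d": 256,                      # GRU output dimension
    "n": 256,                      # batch-size parameter
    "eta": 1e-3,                   # learning rate
    "m": 25,                       # pre-training iterations
    "j": 15,                       # fine-tuning iterations
}
```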
As shown in Fig. 4, this embodiment also provides a comment emotion classification system based on deep hybrid model transfer learning, which adopts the comment emotion classification method based on deep hybrid model transfer learning described above and comprises, in sequence, a convolutional layer, a pooling layer, a GRU network, an attention mechanism model, a dropout layer, a fully connected layer and an output layer.
That is, this embodiment is in fact a deep hybrid model that combines a convolutional neural network, a recurrent neural network (using a GRU network) and an attention mechanism; the convolutional layer, pooling layer, GRU network, attention mechanism model, dropout layer, fully connected layer and output layer are all network layers found in convolutional neural networks, recurrent neural networks (using GRU networks) and attention mechanisms.
First, this embodiment captures low-level features with the powerful feature-recognition capability of the convolutional structure, and the weight transfer matrix of the AM (Attention Model, i.e. attention mechanism) structure re-weights the output that has passed through the GRU, strengthening the model's ability to identify the keywords in a review. It is worth mentioning here that, in a sentiment classification task, keywords have a great influence on classification performance.
Second, to address the convolutional model's insensitivity to the word-order structure of text, the GRU network reinforces the model's ability to learn word-order structure and to bind context, improving the classification of longer product-review texts after transfer learning.
Finally, a small-probability dropout layer is connected ahead of the fully connected layer (in the training process of a deep learning network, a data layer that temporarily drops neural network units from the network with a certain probability), so that a small fraction of nodes are randomly disabled during training (their outputs set to zero), preventing overfitting and strengthening the generalization ability of the model.
Dropout can be regarded as a random-drop layer: during the training of a deep learning network it temporarily drops neural network units from the network with a certain probability, which prevents the model from overfitting (overfitting weakens the model's generalization, i.e. transferable, ability).
This embodiment trains quickly and with low difficulty: only a few training epochs are needed to reach high classification accuracy. The model also obtains good classification results when trained on noisy or small data sets; its dependence on the data set is small and its robustness good. Meanwhile, based on its transfer learning strategy, the combined model achieves a good transfer learning effect even with only a small target-domain data set, attaining the goals of reducing the model's dependence on the data set, improving its transferability and raising classification accuracy after transfer learning.
Step S4 of this embodiment is the model fine-tuning: the model obtained in step S3 and its weight parameters are transferred to the target domain and trained with the small target-domain data set. It should be noted that, when fine-tuning in step S4, the deep hybrid model preferably connects a pooling layer (a max-pooling layer) directly after the convolutional layer while freezing the weight parameters of the GRU network, i.e. the GRU weight parameters W_G remain unchanged and only the weight parameters of the convolutional layer, the attention mechanism and the fully connected layer (W_C, W_A and W_F) are reversely updated and optimized.
The reasons this embodiment adopts this transfer learning scheme are as follows. In the pre-training of step S3, the de-redundancy effect of the pooling layer would make the deep hybrid model lose the word-order structure information obtained in the convolutional layer, harming the pre-training of the downstream GRU network while reducing the features with a positive transfer effect learned in the source domain, so no pooling layer is used in source-domain pre-training. When fine-tuning in step S4, because we want the convolutional and AM structures to have better transferability, usable even between domains of lower feature similarity, their weight parameters are reversely updated and optimized, increasing the model's ability to capture target-domain keywords and raising the weight of the features with a positive transfer effect.
As for the GRU network, it is a sequential model: each update task would require restarting its training from the beginning, making its backpropagation slow and the whole transfer learning process sluggish. Considering that the GRU network has already been trained many times on a fairly large data set in the source domain and has learned enough language features, this embodiment freezes the GRU network weights to speed up the whole transfer learning process, i.e. it performs no reverse update and optimization on them, so the GRU weight parameters of step S3 remain unchanged. Meanwhile, a pooling layer is used in the fine-tuning of step S4 to strengthen the convolutional layer's ability to capture target-domain keywords and to remove feature information without a positive transfer effect; and since the fully connected layer serves as the model's final "classifier", reversely updating it greatly improves the model's classification after transfer learning.
The foregoing further describes the present invention in detail in conjunction with specific preferred embodiments, but it cannot be concluded that the specific implementation of the invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may also be made without departing from the concept of the invention, and all of these shall be deemed to fall within the protection scope of the present invention.

Claims (9)

1. A comment emotion classification method based on deep hybrid model transfer learning, characterized by comprising the following steps:
Step S1: collect product reviews and preprocess a source-domain data sample set of product reviews;
Step S2: map the preprocessed data to word vectors;
Step S3: pre-train the deep hybrid model on the source-domain data sample set of product reviews;
Step S4: fine-tune the deep hybrid model on a target-domain data sample set of product reviews;
Step S5: perform sentiment classification on the product reviews of the target domain;
wherein step S3 comprises the following sub-steps:
Step S301: perform a convolution operation on the source-domain word-vector data obtained in step S2;
Step S302: input the nodes after the convolution operation into a gated recurrent unit (GRU) network;
Step S303: input the nodes of the GRU network into the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S304: through the activation function, set values in the activated two-dimensional vector that exceed a preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample;
Step S305: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S306: backpropagate using the batch gradient descent algorithm and iterate repeatedly, completing the pre-training of the deep hybrid model.
2. The comment emotion classification method based on deep hybrid model transfer learning according to claim 1, characterized in that, in step S304, the softmax activation function $\sigma(Z)_j = e^{Z_j} / \sum_{k=1}^{K} e^{Z_k}$ is used as the output-layer activation function to compute the value σ(Z)_j, where Z_j is the j-th element of the two-dimensional vector Z output by the fully connected network, j is the element index of Z, and K = 2.
3. The comment emotion classification method based on deep hybrid model transfer learning according to claim 1, characterized in that, in step S305, the loss L between the sentiment polarity label y and the predicted label ŷ is computed by the cross-entropy cost function $L = -\frac{1}{n}\sum_{i=1}^{n}[y_i \ln \hat{y}_i + (1 - y_i)\ln(1 - \hat{y}_i)]$.
4. The comment emotion classification method based on deep hybrid model transfer learning according to any one of claims 1 to 3, characterized in that step S4 comprises the following sub-steps:
Step S401: repeat step S1 and step S2 for a preset quantity of target-domain labelled data P, while adding a pooling layer after the convolutional layer of the deep hybrid model;
Step S402: import the weights W_S obtained by the pre-training of step S3 into the deep hybrid model;
Step S403: fine-tune with the preprocessed target-domain data P_S obtained through steps S1 and S2.
5. The comment emotion classification method based on deep hybrid model transfer learning according to claim 4, characterized in that, in step S403, the weight parameters W_G of the GRU network are frozen and backpropagation then iterates j times; the iteratively updated weight parameters of the convolutional layer, the attention model, the (frozen) GRU network and the fully connected layer are then combined to obtain the overall weight parameters W_Y of the deep hybrid model after fine-tuning on the target domain, completing the fine-tuning.
6. The comment emotion classification method based on deep hybrid model transfer learning according to claim 4, characterized in that step S403 comprises the following sub-steps:
Step S4031: perform a convolution operation on the input target-domain data P_S;
Step S4032: pass the output of the convolutional layer through the pooling layer;
Step S4033: input the nodes of the pooling layer into the GRU network;
Step S4034: pass the output of the GRU network through the weighting transformation matrix of the attention mechanism to obtain a vector of a preset dimension, then transform it into a two-dimensional vector;
Step S4035: through the activation function, set values in the activated two-dimensional vector that exceed the preset threshold to 1 and values below it to 0, obtaining the predicted label ŷ of the corresponding sample;
Step S4036: use the cross-entropy cost function to compute the loss L between the sentiment polarity label y and the predicted label ŷ;
Step S4037: taking the reduction of the loss L as the optimization direction, backpropagate using the batch gradient descent algorithm and iterate repeatedly, completing the fine-tuning of the deep hybrid model.
7. The comment emotion classification method based on deep hybrid model transfer learning according to claim 6, characterized in that, in the backpropagation of step S4037, the weight parameters W_G of the GRU network are frozen.
8. The comment emotion classification method based on deep hybrid model transfer learning according to any one of claims 1 to 3, characterized in that, in step S3, the source-domain data sample set skips the pooling layer after being input to the convolutional layer during pre-training and is input directly into the GRU network.
9. A comment emotion classification system based on deep hybrid model transfer learning, characterized in that it uses the comment emotion classification method based on deep hybrid model transfer learning according to any one of claims 1 to 8, and comprises, in sequence, a convolutional layer, a pooling layer, a GRU network, an attention mechanism model, a random-dropout layer, a fully connected layer and an output layer.
CN201811383793.4A 2018-11-20 2018-11-20 Comment emotion classification method and system based on deep hybrid model transfer learning Active CN109271522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811383793.4A CN109271522B (en) 2018-11-20 2018-11-20 Comment emotion classification method and system based on deep hybrid model transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811383793.4A CN109271522B (en) 2018-11-20 2018-11-20 Comment emotion classification method and system based on deep hybrid model transfer learning

Publications (2)

Publication Number Publication Date
CN109271522A true CN109271522A (en) 2019-01-25
CN109271522B CN109271522B (en) 2021-07-30

Family

ID=65190306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811383793.4A Active CN109271522B (en) 2018-11-20 2018-11-20 Comment emotion classification method and system based on deep hybrid model transfer learning

Country Status (1)

Country Link
CN (1) CN109271522B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829500A (en) * 2019-01-31 2019-05-31 华南理工大学 A kind of position composition and automatic clustering method
CN109885670A (en) * 2019-02-13 2019-06-14 北京航空航天大学 A kind of interaction attention coding sentiment analysis method towards topic text
CN110163368A (en) * 2019-04-18 2019-08-23 腾讯科技(深圳)有限公司 Deep learning model training method, apparatus and system based on mixed-precision
CN110188822A (en) * 2019-05-30 2019-08-30 盐城工学院 A kind of domain is to the one-dimensional convolutional neural networks intelligent failure diagnosis method of anti-adaptive
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN110472115A (en) * 2019-08-08 2019-11-19 东北大学 A kind of social networks text emotion fine grit classification method based on deep learning
CN110489753A (en) * 2019-08-15 2019-11-22 昆明理工大学 Improve the corresponding cross-cutting sensibility classification method of study of neuromechanism of feature selecting
CN110489567A (en) * 2019-08-26 2019-11-22 重庆邮电大学 A kind of node information acquisition method and its device based on across a network Feature Mapping
CN110584654A (en) * 2019-10-09 2019-12-20 中山大学 Multi-mode convolutional neural network-based electrocardiosignal classification method
CN110728294A (en) * 2019-08-30 2020-01-24 北京影谱科技股份有限公司 Cross-domain image classification model construction method and device based on transfer learning
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
CN111209964A (en) * 2020-01-06 2020-05-29 武汉市盛隽科技有限公司 Model training method, metal fracture analysis method based on deep learning and application
CN111488972A (en) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Data migration method and device, electronic equipment and storage medium
CN111680160A (en) * 2020-06-16 2020-09-18 西北师范大学 Deep migration learning method for text emotion classification
CN111708864A (en) * 2020-06-11 2020-09-25 兰州理工大学 User comment text emotion analysis method and device
CN111767987A (en) * 2020-06-28 2020-10-13 北京百度网讯科技有限公司 Data processing method, device and equipment based on recurrent neural network
CN111782802A (en) * 2020-05-15 2020-10-16 北京极兆技术有限公司 Method and system for obtaining national economy manufacturing industry corresponding to commodity based on machine learning
CN112015896A (en) * 2020-08-27 2020-12-01 腾讯科技(深圳)有限公司 Emotion classification method and device based on artificial intelligence
CN112317957A (en) * 2020-10-09 2021-02-05 五邑大学 Laser welding method, laser welding apparatus, and storage medium therefor
CN112861984A (en) * 2021-02-25 2021-05-28 西华大学 Speech emotion classification method based on feature fusion and ensemble learning
CN116468959A (en) * 2023-06-15 2023-07-21 清软微视(杭州)科技有限公司 Industrial defect classification method, device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778407A (en) * 2012-10-23 2014-05-07 南开大学 Gesture recognition algorithm based on conditional random fields under transfer learning framework
CN106485251A (en) * 2016-10-08 2017-03-08 天津工业大学 Egg embryo classification based on deep learning
CN107025284A (en) * 2017-04-06 2017-08-08 中南大学 The recognition methods of network comment text emotion tendency and convolutional neural networks model
US20180060652A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition
CN107967257A (en) * 2017-11-20 2018-04-27 哈尔滨工业大学 A kind of tandem type composition generation method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778407A (en) * 2012-10-23 2014-05-07 南开大学 Gesture recognition algorithm based on conditional random fields under transfer learning framework
US20180060652A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition
CN106485251A (en) * 2016-10-08 2017-03-08 天津工业大学 Egg embryo classification based on deep learning
CN107025284A (en) * 2017-04-06 2017-08-08 中南大学 The recognition methods of network comment text emotion tendency and convolutional neural networks model
CN107967257A (en) * 2017-11-20 2018-04-27 哈尔滨工业大学 A kind of tandem type composition generation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUANJUN ZHAO et al.: "Deep Transfer Learning for Social Media Cross-Domain Sentiment Classification", Chinese National Conference on Social Media Processing *
ZHAO Chuanjun et al.: "Multi-source cross-domain sentiment classification based on ensemble deep transfer learning", Journal of Shanxi University (Natural Science Edition) *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829500B (en) * 2019-01-31 2023-05-02 华南理工大学 Position composition and automatic clustering method
CN109829500A (en) * 2019-01-31 2019-05-31 华南理工大学 A kind of position composition and automatic clustering method
CN109885670A (en) * 2019-02-13 2019-06-14 北京航空航天大学 A kind of interaction attention coding sentiment analysis method towards topic text
CN110163368B (en) * 2019-04-18 2023-10-20 腾讯科技(深圳)有限公司 Deep learning model training method, device and system based on mixed precision
CN110163368A (en) * 2019-04-18 2019-08-23 腾讯科技(深圳)有限公司 Deep learning model training method, apparatus and system based on mixed-precision
CN110188822A (en) * 2019-05-30 2019-08-30 盐城工学院 A kind of domain is to the one-dimensional convolutional neural networks intelligent failure diagnosis method of anti-adaptive
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN110210381B (en) * 2019-05-30 2023-08-25 盐城工学院 Domain separation self-adaptive one-dimensional convolutional neural network intelligent fault diagnosis method
CN110472115A (en) * 2019-08-08 2019-11-19 东北大学 A kind of social networks text emotion fine grit classification method based on deep learning
CN110472115B (en) * 2019-08-08 2022-08-02 东北大学 Social network text emotion fine-grained classification method based on deep learning
CN110489753A (en) * 2019-08-15 2019-11-22 昆明理工大学 Improve the corresponding cross-cutting sensibility classification method of study of neuromechanism of feature selecting
CN110489753B (en) * 2019-08-15 2022-06-14 昆明理工大学 Neural structure corresponding learning cross-domain emotion classification method for improving feature selection
CN110489567A (en) * 2019-08-26 2019-11-22 重庆邮电大学 A kind of node information acquisition method and its device based on across a network Feature Mapping
CN110489567B (en) * 2019-08-26 2022-03-22 重庆邮电大学 Node information acquisition method and device based on cross-network feature mapping
CN110728294A (en) * 2019-08-30 2020-01-24 北京影谱科技股份有限公司 Cross-domain image classification model construction method and device based on transfer learning
CN110584654A (en) * 2019-10-09 2019-12-20 中山大学 Multi-mode convolutional neural network-based electrocardiosignal classification method
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
CN111209964A (en) * 2020-01-06 2020-05-29 武汉市盛隽科技有限公司 Model training method, metal fracture analysis method based on deep learning and application
CN111488972A (en) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Data migration method and device, electronic equipment and storage medium
CN111488972B (en) * 2020-04-09 2023-08-08 北京百度网讯科技有限公司 Data migration method, device, electronic equipment and storage medium
CN111782802B (en) * 2020-05-15 2023-11-24 北京极兆技术有限公司 Method and system for obtaining commodity corresponding to national economy manufacturing industry based on machine learning
CN111782802A (en) * 2020-05-15 2020-10-16 北京极兆技术有限公司 Method and system for obtaining national economy manufacturing industry corresponding to commodity based on machine learning
CN111708864A (en) * 2020-06-11 2020-09-25 兰州理工大学 User comment text emotion analysis method and device
CN111680160A (en) * 2020-06-16 2020-09-18 西北师范大学 Deep migration learning method for text emotion classification
CN111767987A (en) * 2020-06-28 2020-10-13 北京百度网讯科技有限公司 Data processing method, device and equipment based on recurrent neural network
CN111767987B (en) * 2020-06-28 2024-02-20 北京百度网讯科技有限公司 Data processing method, device and equipment based on cyclic neural network
CN112015896A (en) * 2020-08-27 2020-12-01 腾讯科技(深圳)有限公司 Emotion classification method and device based on artificial intelligence
CN112015896B (en) * 2020-08-27 2024-02-06 腾讯科技(深圳)有限公司 Emotion classification method and device based on artificial intelligence
CN112317957A (en) * 2020-10-09 2021-02-05 五邑大学 Laser welding method, laser welding apparatus, and storage medium therefor
CN112861984B (en) * 2021-02-25 2022-07-01 西华大学 Speech emotion classification method based on feature fusion and ensemble learning
CN112861984A (en) * 2021-02-25 2021-05-28 西华大学 Speech emotion classification method based on feature fusion and ensemble learning
CN116468959A (en) * 2023-06-15 2023-07-21 清软微视(杭州)科技有限公司 Industrial defect classification method, device, electronic equipment and storage medium
CN116468959B (en) * 2023-06-15 2023-09-08 清软微视(杭州)科技有限公司 Industrial defect classification method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109271522B (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN109271522A (en) Comment sensibility classification method and system based on depth mixed model transfer learning
CN109376242B (en) Text classification method based on cyclic neural network variant and convolutional neural network
Cheng et al. Image recognition technology based on deep learning
CN109918671A Electronic health record entity relation extraction method based on convolutional recurrent neural network
CN106897371B (en) Chinese text classification system and method
CN110765260A (en) Information recommendation method based on convolutional neural network and joint attention mechanism
CN111680225B (en) WeChat financial message analysis method and system based on machine learning
CN110580287A Emotion classification method based on transfer learning and ON-LSTM
CN107480723B Texture recognition method based on local binary threshold learning network
CN111460157A (en) Cyclic convolution multitask learning method for multi-field text classification
CN110263174A Topic category analysis method based on points of focus
CN113673254A (en) Knowledge distillation position detection method based on similarity maintenance
CN115952292B (en) Multi-label classification method, apparatus and computer readable medium
CN111666752A (en) Circuit teaching material entity relation extraction method based on keyword attention mechanism
CN112883931A (en) Real-time true and false motion judgment method based on long and short term memory network
Li et al. First-order sensitivity analysis for hidden neuron selection in layer-wise training of networks
CN113934835B (en) Retrieval type reply dialogue method and system combining keywords and semantic understanding representation
CN111930981A (en) Data processing method for sketch retrieval
CN115062727A (en) Graph node classification method and system based on multi-order hypergraph convolutional network
CN112989803B (en) Entity link prediction method based on topic vector learning
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN117216265A News topic classification method based on an improved graph attention network
Huang et al. A Model for Legal Judgment Prediction Based on Multi-model Fusion
CN116562274A (en) Target theme determining method and device
CN116433909A (en) Similarity weighted multi-teacher network model-based semi-supervised image semantic segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant