CN110197284A - Fake address recognition method, apparatus and device - Google Patents
Fake address recognition method, apparatus and device
- Publication number
- CN110197284A CN110197284A CN201910362906.0A CN201910362906A CN110197284A CN 110197284 A CN110197284 A CN 110197284A CN 201910362906 A CN201910362906 A CN 201910362906A CN 110197284 A CN110197284 A CN 110197284A
- Authority
- CN
- China
- Prior art keywords
- address
- model
- word vector
- to be identified
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0609—Buyer or seller confidence or verification
Abstract
The present invention relates to a fake address recognition method, apparatus and device. The method comprises: constructing an address recognition model in advance, the address recognition model comprising a language model and a classification model; in response to an address recognition request containing the name of an address to be identified, generating the word vector sequence of that address; inputting the word vector sequence into the language model for feature extraction, obtaining feature information and correction information corresponding to the address to be identified; and inputting the feature information and the correction information into the classification model to obtain the recognition result of the address to be identified. By sequentially preprocessing the address to be identified, extracting features, and outputting a classification, the present invention obtains the recognition result of the address, improves the efficiency and accuracy of address recognition, and reduces recognition cost.
Description
Technical field
The present invention relates to the field of machine learning, and in particular to a fake address recognition method, apparatus and device.
Background art
As e-commerce scalping grows increasingly rampant, the losses suffered by e-commerce platforms and brand vendors keep increasing. The tactic used by most scalpers is to fill in, when placing an order, a fake shipping address to which a parcel cannot normally be delivered. Because the fake address is undeliverable, when the courier attempts delivery the scalper communicates with the courier by phone and has the goods delivered to a real address, thereby achieving the goal of hoarding goods. Scalpers do this because a real address receiving a large volume of similar goods would be suspected of hoarding; to prevent the e-commerce platform from blacklisting the address actually used for hoarding, scalpers design series of fake addresses to evade the risk-control detection of the major e-commerce platforms.
Fake addresses share certain common characteristics, so with a large amount of manual labeling, supervised machine learning can recognize them. However, address labeling requires substantial manpower and material resources, and scalpers continually devise new kinds of fake addresses, so supervised learning based on large-scale manual labeling is ill-suited to the fake address recognition scenario of e-commerce platforms.
Summary of the invention
The technical problem to be solved by the present invention is to provide a fake address recognition method, apparatus and device that can train a fake address recognition model by combining unsupervised training with supervised training, and use that recognition model to identify addresses, improving the efficiency and accuracy of fake address recognition while reducing recognition cost.
To solve the above technical problem, in a first aspect, the present invention provides a fake address recognition method, the method comprising:
constructing an address recognition model in advance, the address recognition model comprising a language model and a classification model;
in response to an address recognition request containing the name of an address to be identified, segmenting the name of the address word by word, generating a word vector corresponding to each word, and generating the word vector sequence of the address from the word vectors of the words in the address;
inputting the word vector sequence into the language model for feature extraction, obtaining feature information corresponding to the address to be identified and, during feature extraction, correction information for that address;
inputting the feature information and the correction information into the classification model to obtain the recognition result of the address to be identified.
In a second aspect, the present invention provides a fake address recognition apparatus, the apparatus comprising:
a recognition model construction module, configured to construct an address recognition model in advance, the address recognition model comprising a language model and a classification model;
a word vector generation module, configured to respond to an address recognition request containing the name of an address to be identified by segmenting the name of the address word by word, generating a word vector corresponding to each word, and generating the word vector sequence of the address from the word vectors of the words in the address;
a feature extraction module, configured to input the word vector sequence into the language model for feature extraction, obtaining feature information corresponding to the address to be identified and correction information produced during feature extraction;
a classification module, configured to input the feature information and the correction information into the classification model to obtain the recognition result of the address to be identified.
In a third aspect, an embodiment of the invention provides a device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the fake address recognition method of the first aspect.
In a fourth aspect, the present invention provides a computer storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to perform the fake address recognition method of the first aspect.
The implementation of the embodiments of the present invention has the following beneficial effects:
An address recognition model is constructed in advance, comprising a language model obtained by unsupervised learning and a classification model obtained by supervised learning. In response to an address recognition request, the address to be identified is preprocessed, the preprocessing here mainly consisting of generating the word vector sequence corresponding to the address. The word vector sequence is input into the language model, obtaining feature information corresponding to the address to be identified and, during feature extraction, correction information for that address. The feature information and the correction information are input into the classification model to obtain the recognition result of the address to be identified. By sequentially preprocessing the address to be identified, extracting features, and outputting a classification, the present invention obtains the recognition result of the address, improves the efficiency and accuracy of address recognition, and reduces recognition cost.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a fake address recognition method provided by an embodiment of the present invention;
Fig. 3 is a flow chart of a training method for the language model provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of the autoencoder (Autoencoder) provided by an embodiment of the present invention;
Fig. 5 is a flow chart of a construction method for the classification model provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a user interface provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a fake address recognition apparatus provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the recognition model construction module provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the language model construction module provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of the classification model construction module provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of a device provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Terms involved in the embodiments of the present invention are first explained below:
Fake address: invalid information present in a logistics address, such that the address cannot be resolved to a specific location.
Semi-supervised learning: a learning method combining supervised learning with unsupervised learning; semi-supervised learning uses a large amount of unlabeled data together with labeled data to perform pattern recognition.
RNN: Recurrent Neural Network, an artificial neural network whose node connections form directed cycles. Its internal state can exhibit dynamic temporal behavior, and it can use its internal memory to process input sequences of arbitrary length.
LSTM: Long Short-Term Memory, a recurrent neural network suited to processing and predicting important events with relatively long intervals and delays in a time series.
Autoencoder: a kind of feedforward neural network, once mainly used for dimensionality reduction of data or feature extraction, and now also extended to generative models. Unlike other feedforward networks, which focus on the output layer and the error rate, an autoencoder focuses on the hidden layer; furthermore, ordinary feedforward networks are generally deep, whereas an autoencoder usually has only one hidden layer.
Referring to Fig. 1, which shows a schematic diagram of an application scenario provided by an embodiment of the present invention: the scenario includes a number of user terminals 110 and a server 120, where the user terminals 110 include, but are not limited to, smartphones, tablet computers, laptops, and desktop computers. A user can log into a related application (APP) or website through a user terminal 110 to carry out network activities. When the user sends a network service request containing address information to the server 120 through the user terminal 110, the server 120 responds to the request by identifying the address information submitted by the user terminal 110. If the address information is identified as a legitimate address, the server 120 issues the corresponding business information to the user's terminal 110 so that the user can complete the related network activity; if the address is identified as fake, the server 120 refuses to provide the network business service to the user's terminal 110, so that the user cannot carry out the related network activity.
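The grant-or-refuse flow above can be sketched as follows. This is a minimal illustration, not part of the patent: the function name, the 0.5 threshold, and the stub recognition model are all assumptions.

```python
def handle_service_request(address: str, recognize) -> str:
    """Route a network service request based on the address recognition result.

    `recognize` stands in for the address recognition model: it maps an
    address string to a fake-address score in [0, 1] (1 = certainly fake).
    """
    THRESHOLD = 0.5  # illustrative cut-off, not specified in the patent
    if recognize(address) >= THRESHOLD:
        return "refused"   # fake address: deny the business service
    return "granted"       # legitimate address: issue the business information


# Stub model for demonstration only.
def stub_model(address: str) -> float:
    return 0.9 if "nonexistent" in address else 0.1


print(handle_service_request("Room 101, Rongjing Garden", stub_model))  # → granted
print(handle_service_request("nonexistent lane 42", stub_model))        # → refused
```

In a deployment, `recognize` would be the trained address recognition model described in the following sections.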
Referring to Fig. 2, which shows a fake address recognition method that can be applied on the server side, the method comprises:
S210. An address recognition model is constructed in advance, the address recognition model comprising a language model and a classification model.
In this embodiment, fake address recognition can be performed by constructing the address recognition model in advance. The address recognition model consists of two parts: a language model obtained by unsupervised training on a large amount of address corpus data, and a classification model obtained by supervised training on a small amount of labeled address data. Supervised training of the classification model is carried out on the basis of the unsupervised-trained language model, finally yielding the address recognition model; that is, the address recognition model is obtained by fusing the two models.
S220. In response to an address recognition request containing the name of an address to be identified, the name of the address is segmented word by word, a word vector corresponding to each word is generated, and the word vector sequence of the address is generated from the word vectors of the words in the address.
In this embodiment, fastText is used to generate the word vector of each word in the address to be identified, thereby obtaining the address's word vector sequence. fastText achieves very good performance in word vector training and sentence classification, and in particular handles rare words at character granularity. The common representation is bag-of-words, but bag-of-words cannot account for the order between words, so fastText also adds N-gram features: besides the word itself, each word is also represented by multiple character-level N-grams.
When obtaining word vectors through fastText, the segmented documents are used to build a vocabulary. Each word in the vocabulary is replaced by an integer (its index), with an index reserved for the "unknown word", assumed here to be 0. The categories are one-hot encoded: assuming the text data has 3 categories in total, labeled 1, 2, and 3 respectively, the one-hot vectors corresponding to these three categories are [1,0,0], [0,1,0], and [0,0,1]. For a batch of texts, each text is converted into a sequence of vocabulary indices and each category into a one-hot vector. As a concrete example, "I came to Shenzhen yesterday" might be converted into [10, 30, 80, 1000]; if it belongs to category 1, its label is [1,0,0]. If the maximum number of words a document may use is set to 500, this short text's vector needs 496 zeros appended, i.e. [10, 30, 80, 1000, 0, 0, 0, ..., 0].
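The index-and-pad preprocessing above can be sketched in plain Python. A minimal sketch under assumed inputs: the MAX_LEN of 10 (the patent's example uses 500) and the toy tokens are illustrative.

```python
MAX_LEN = 10          # the patent's example uses 500; shortened for readability
UNK = 0               # index reserved for the "unknown word"


def build_vocab(tokenized_docs):
    """Assign each distinct word an integer index, starting at 1 (0 = unknown)."""
    vocab = {}
    for doc in tokenized_docs:
        for word in doc:
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab


def to_padded_indices(doc, vocab, max_len=MAX_LEN):
    """Convert one tokenized document to a fixed-length index sequence."""
    idx = [vocab.get(w, UNK) for w in doc[:max_len]]
    return idx + [0] * (max_len - len(idx))   # pad with zeros on the right


def one_hot(label, n_classes):
    """One-hot encode a 1-based category label."""
    vec = [0] * n_classes
    vec[label - 1] = 1
    return vec


docs = [["I", "came", "to", "Shenzhen"]]
vocab = build_vocab(docs)
seq = to_padded_indices(docs[0], vocab)
print(seq)            # → [1, 2, 3, 4, 0, 0, 0, 0, 0, 0]
print(one_hot(1, 3))  # → [1, 0, 0]
```

The padded index sequences are then the inputs from which the word vector sequences are looked up.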
S230. The word vector sequence is input into the language model for feature extraction, obtaining feature information corresponding to the address to be identified and, during feature extraction, correction information for that address.
Inputting the word vector sequence of the address to be identified into the language model yields the feature information and correction information corresponding to the address. The feature information is the most informative representation selected by the language model, best able to characterize the address to be identified. The correction information is additional information produced by the model's own behavior: when an address is input into the language model, the model automatically revises it so that it is biased toward a normal address, and the larger the revision, the greater the address's deviation from a normal address. The correction information may include related quantities such as the degree of revision of the address, and it carries a certain weight in the subsequent recognition result for the address.
S240. The feature information and the correction information are input into the classification model to obtain the recognition result of the address to be identified.
The feature information and correction information obtained in the preceding steps are input into the classification model to obtain the recognition result. In this embodiment, 0/1 classification is used; that is, for an address to be identified, the final recognition result is 0 or 1. Of course, in a concrete implementation, the final output can also be adjusted to be a number between 0 and 1; this embodiment imposes no specific limitation.
For the specific training process of the language model in this embodiment, refer to Fig. 3. The training method of the language model comprises:
S310. Unlabeled address corpus data is obtained; the name of each address in the corpus is segmented word by word, a word vector corresponding to each word is generated, and the word vector sequence of each address is generated from the word vectors of its words.
Unlabeled address corpus data is relatively easy to obtain. After these unlabeled corpus data are acquired, each address is preprocessed, and the word vector sequence of each address is obtained after processing.
S320. The word vector sequences of the addresses in the corpus are input in turn into an autoencoder model, and the autoencoder model is trained.
Here, an Autoencoder is used for unsupervised training. An autoencoder uses a neural network to generate a low-dimensional representation of a high-dimensional input. Traditional dimensionality reduction relies on linear methods, such as principal component analysis (PCA), which finds the directions of maximum variance in high-dimensional data. By selecting these directions, PCA essentially characterizes the directions containing the most information, so a smaller number of dimensions can be found as the result of dimensionality reduction. However, the linearity of the PCA method greatly limits the kinds of features it can extract, whereas the autoencoder overcomes these limitations through the inherent nonlinearity introduced by the neural network.
Referring to Fig. 4, which shows the structural schematic diagram of the Autoencoder: the autoencoder comprises two main parts, an encoder network and a decoder network. The encoder network is used in both training and deployment, while the decoder network is used only during training. The role of the encoder network is to find a compressed representation of the given data; for example, the encoder can generate a 30-dimensional representation from a 300-dimensional input. The role of the decoder network is the mirror image of the encoder network's: to reconstruct a representation as close as possible to the original input. During training, the decoder forces the autoencoder to select the most informative features, which are ultimately stored in the compressed representation; the closer the reconstructed input is to the original input, the better the final representation.
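A minimal sketch of such an autoencoder with NumPy, assuming toy dimensions (8-dimensional inputs compressed to a 3-dimensional code, in place of the 300-to-30 example) and plain batch gradient descent; nothing here is taken from the patent's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions lying near a 2-D subspace,
# standing in for word-vector representations of addresses.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(200, 8))

D, H = 8, 3                                                  # input dim, code dim
W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)    # encoder
W2 = rng.normal(scale=0.1, size=(H, D)); b2 = np.zeros(D)    # decoder


def encode(x):
    return np.tanh(x @ W1 + b1)    # compressed representation (the "code")


def decode(c):
    return c @ W2 + b2             # reconstruction of the input


def recon_error(x):
    return float(np.mean((decode(encode(x)) - x) ** 2))


lr, initial = 0.05, recon_error(X)
for _ in range(1000):
    C = np.tanh(X @ W1 + b1)
    R = C @ W2 + b2
    E = 2.0 * (R - X) / len(X)               # error gradient, averaged over samples
    dC = (E @ W2.T) * (1.0 - C ** 2)         # backprop through tanh
    W2 -= lr * (C.T @ E); b2 -= lr * E.sum(0)
    W1 -= lr * (X.T @ dC); b1 -= lr * dC.sum(0)

final = recon_error(X)
print(round(initial, 3), round(final, 3))    # reconstruction error before / after
```

Minimizing the reconstruction error forces the 3-dimensional code to keep the most informative features of the input, exactly as described above.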
The specific training process of the Autoencoder may specifically include:
1. Given unlabeled data, learn features by unsupervised learning.
An input is fed into the encoder, which yields a corresponding code; this code is a representation of the input, and we then need to know whether this code actually represents the input. By adding a decoder, the decoder outputs a piece of information; if this output closely resembles the original input signal (ideally, is exactly the same), there is clearly good reason to believe the code is reliable. Thus, by adjusting the parameters of the encoder and decoder so that the reconstruction error is minimized, a first representation of the input signal, namely the code, is obtained. Because the data carries no labels, the source of the error is simply a direct comparison between the reconstruction and the original input.
2. Generate features through the encoder, then train the next layer, layer by layer.
The first layer's code is obtained according to the above steps; a minimal reconstruction error indicates that this code is a good representation of the original input signal, or, put forcefully, that it is the same as the original signal (expressed differently, but reflecting the same essential information). Training the second layer is then no different from training the first: the code output by the first layer serves as the second layer's input signal, the reconstruction error is likewise minimized, the parameters of the second layer are obtained, and so is the code of the second layer's input, that is, a second representation of the original input information. The other layers are trained in the same way; when training a given layer, the parameters of the preceding layers are kept fixed, and their decoders are no longer needed at all.
Through the above steps, a number of trained layers is obtained; the number of layers actually needed can be set according to the specific implementation. A trained Autoencoder model is thereby obtained.
Since the model's training objective always tends toward minimizing the global error, during Autoencoder training the input data may be revised: the relevant neurons of the model record the revision through their parameters, and inputs that differ from normal information exhibit a high error tendency between the model's input and output. During Autoencoder training, when the decoder produces its output, the address information to be output by the autoencoder is obtained and identified in the form of a one-hot efficient encoding. After the Autoencoder model is trained, it possesses both the feature extraction function and the correction information acquisition function.
S330. The encoder module is extracted from the trained autoencoder model as the language model.
From the above it can be seen that the part of the Autoencoder model usable for subsequent operations is the encoder part, so the encoder needs to be pulled out of the trained Autoencoder model as an independent model, namely the language model of this embodiment. The main function of the language model is to fully extract the condensed address information from the address information input into the model, while also obtaining the correction information corresponding to the address.
The classification model of this embodiment is constructed on the basis of the language model. Referring to Fig. 5, which shows a construction method for the classification model, the method comprises:
S510. Labeled address samples are obtained, and the word vector sequence corresponding to each address in the labeled samples is generated; the labeled address samples include fake address samples and normal address samples.
Compared with the large amount of unlabeled address corpus data obtained above, only a small number of labeled address samples needs to be obtained here for supervised training. Similarly, the acquired address sample information is preprocessed to obtain the word vector sequence of each address.
S520. The word vector sequence of each address is input in turn into the language model, generating the feature information and correction information of each address.
Since the classification model is trained on the basis of the language model, the relevant information must first be obtained through the language model.
S530. The feature information and the correction information of each address are used as the input of the classification model, and the label of the address as the output of the classification model, to train the classification model.
For each address, its feature information and correction information serve as the input of the classification model, and its label information as the output of the classification model, to train the classification model. During model training, the correction information can influence certain neuron parameters, so as to train a classification model that better fits reality.
It should be noted that when the classification model is constructed on the basis of the language model, updating of the language model's parameters is stopped, because at this point the language model is, for the classification model, a fully trained and fixed model whose parameters need not be updated.
An LSTM plus a fully connected classification layer is spliced onto the extracted encoder layer, finally constituting the address recognition model of this embodiment. Through the concept of a "gate", the LSTM controls the influence of earlier outputs on later ones; it can capture well the connections between words in a sentence and extract the gist of long text, improving the accuracy of the classifier.
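A structural sketch, with NumPy, of the classifier head just described: an LSTM followed by a fully connected sigmoid layer, run over features standing in for the frozen encoder's output. The weights are random and untrained, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

T, D_IN, H = 5, 8, 4        # sequence length, encoder feature dim, LSTM size
                            # (all illustrative, not taken from the patent)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def lstm_last_hidden(X, Wx, Wh, b):
    """Run a single-layer LSTM over a (T, D_IN) sequence and return the final
    hidden state. Gate order in the stacked weights: input, forget, candidate
    cell, output."""
    h = np.zeros(H); c = np.zeros(H)
    for x in X:
        i, f, g, o = np.split(x @ Wx + h @ Wh + b, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)    # cell state: gated long-term memory
        h = o * np.tanh(c)            # hidden state
    return h


# Randomly initialized, untrained parameters (a structural sketch only).
Wx = rng.normal(scale=0.1, size=(D_IN, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)
Wfc = rng.normal(scale=0.1, size=(H,)); bfc = 0.0   # fully connected layer

features = rng.normal(size=(T, D_IN))   # stand-in for frozen-encoder output
h_last = lstm_last_hidden(features, Wx, Wh, b)
score = sigmoid(h_last @ Wfc + bfc)     # fake-address probability in (0, 1)
print(float(score))
```

During actual training only the LSTM and fully connected parameters would be updated; the encoder supplying `features` stays frozen, as the note above specifies.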
The classification model in this embodiment mainly consists of a long short-term memory network (LSTM) and a fully connected classification layer. An LSTM is used here rather than a CNN because a CNN extracts contextual information spatially, whereas an LSTM is better at extracting contextual information along a sequence. Compared with a CNN, which can only extract context through a static sliding window, the LSTM, through the continuous state held in its memory over time, extracts contextual information with maximum efficiency. The mathematical foundation of the LSTM can be viewed as a Markov chain: each subsequent value is determined probabilistically by the preceding values and a set of parameters.
For example, with data like stock prices, multiple y values (prices) may occur at the same x, and the whole space contains only a single curve; such data density is inherently ill-suited to a CNN. Although both models can perform sequence modelling, they differ fundamentally. An RNN has an ordering along the time dimension, so the order of the inputs affects the output, whereas a CNN aggregates local information into global information, extracting information from the input hierarchically. Moreover, on long text sequences a CNN can only process the information within its window; information from adjacent windows can only be fused by a later convolutional layer, which makes the network heavily dependent on parameters such as the convolution window and stride, and harder to train. Consequently, in language-model training the LSTM usually outperforms the CNN. Experimental comparison shows that the semi-supervised model of the present invention using an LSTM outperforms the corresponding CNN-based model by roughly 2%.
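The contrast drawn above, a recurrent state that carries the whole history versus a fixed window that only sees its neighbourhood, can be illustrated with a toy example. This is purely illustrative and is not the patent's model.

```python
# Toy illustration: a recurrent model accumulates state across the whole
# sequence, while a fixed convolution-style window only sees local context.

def recurrent_summary(seq, decay=0.5):
    """Accumulate a running state over the sequence (LSTM-like memory)."""
    state = 0.0
    for x in seq:
        state = decay * state + x  # earlier inputs still influence the state
    return state

def window_summary(seq, width=2):
    """A width-limited view: only the last `width` items are visible."""
    return sum(seq[-width:])

seq = [1.0, 0.0, 0.0, 0.0, 2.0]
print(recurrent_summary(seq))  # the leading 1.0 still contributes
print(window_summary(seq))     # the leading 1.0 is invisible to the window
```

The recurrent summary changes if the distant first element changes, while the window summary does not, which is the long-range dependency argument made above.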
A concrete example illustrates the specific implementation of the invention. Suppose there is a large amount of address data, such as "Room 101, No. 50, Area A, Rongjing Garden, Yueyang Street, Songjiang District, Shanghai", "Xinxin Kindergarten, Room 101, Building 503, Xusheng Estate, Xusheng Street, Xiangfang District, Harbin, Heilongjiang Province", and "No. 10 Fucui Street, Huangdao District (formerly Jiaonan), Qingdao, Shandong Province". The address data also contains some fake addresses, such as "No. 26504, Cambridge Estate, Hongyu New Town, at the intersection of Tianming Road and Yuhua West Road, Hecheng District, Huaihua City, Hunan Province" and "No. 20, Wanjiasheng Zhiye 3rd Road, Longhua Street, Longhua New District, Shenzhen, Guangdong Province, 100 Datou Chain Store Lavatory 12345". Taking the latter address as an example, the address is first cut character by character, yielding the characters for "Guang", "dong", "sheng" (province), ..., "ce", "suo" (lavatory), "1", "2", "3", "4", "5". We then train character-level word embeddings with fastText; once this embedding data is available, model training can be carried out with an autoencoder. Following the RNN input pattern, the embedding vector of each character in the address is input into the model, and the output is the one-hot representation of the corresponding legitimate address. Because the model's training objective always tends towards the global-error minimum, after training the address "No. 20, Wanjiasheng Zhiye 3rd Road, Longhua Street, Longhua New District, Shenzhen, Guangdong Province, 100 Datou Chain Store Lavatory 12345" is corrected to "No. 20, Wanjiasheng Zhiye 3rd Road, Longhua Street, Longhua New District, Shenzhen, Guangdong Province, chain store", while the input and output of a fake address exhibit a pronounced error trend in the model. In the trained autoencoder, the encoder fully extracts and condenses the address information; the encoder is therefore intercepted, and a new LSTM layer and fully connected classification layer are spliced onto it to form the new classifier. When training this new classifier model, the encoder layers are fixed so that their neurons are not updated, and a smaller learning rate is used to train the new LSTM layer and fully connected layer until the model converges.
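The "high error trend" mentioned above, legitimate addresses reconstructing well while fabricated ones do not, can be sketched with a simple reconstruction-error check. The vectors and threshold below are toy values, not the model's actual outputs.

```python
# Sketch of how an autoencoder's reconstruction error separates legitimate
# addresses from fake ones: legitimate inputs reconstruct with low error,
# fabricated ones with high error. All values are toy assumptions.

def reconstruction_error(original, reconstructed):
    """Mean squared error between the input and the autoencoder output."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def looks_fake(original, reconstructed, threshold=0.1):
    """Flag the address when the reconstruction error exceeds a threshold."""
    return reconstruction_error(original, reconstructed) > threshold

legit = looks_fake([0.2, 0.4, 0.6], [0.21, 0.39, 0.6])   # near-perfect reconstruction
fake = looks_fake([0.2, 0.4, 0.6], [0.9, 0.0, 0.1])      # large error trend
print(legit, fake)
```

In this embodiment the error signal is not used directly as the decision; instead the encoder is reused inside a trained classifier, but the intuition is the same.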
In conclusion completed the training of Address Recognition model, behind the step of be exactly how to access address dummy to sentence
Not Fu Wu in, service is externally provided.In the service of address dummy identification, service passes through SaaS (Software-as-a-
Service, software service) mode is supplied to caller, and caller provides address information, and SaaS service can return corresponding
Address dummy fraud point is false degree to assess address.It is specifically as follows first by Address Recognition model encapsulation at function
Be deployed in micro services framework, the input of function is an address, then address is segmented in function, splicing word be embedded in
Address Recognition model is measured and is input to, whether address dummy identifies and mark is finally encapsulated in return model by counting output
Output completes the primary calling of requestor to requestor inside function.
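The service function described above can be sketched as follows. `segment`, `embed`, and `model_predict` are hypothetical stand-ins for the real word segmentation, embedding lookup, and recognition model; only the call structure reflects the text.

```python
# Minimal sketch of wrapping the recognition model as a callable service
# function: segment -> embed -> predict -> package the result.
# All three inner components are stubbed stand-ins, not the real model.

def segment(address):
    """Word-by-word cut (stub)."""
    return list(address)

def embed(chars):
    """Word-embedding lookup (stub): one toy vector per character."""
    return [[float(len(c))] for c in chars]

def model_predict(vectors):
    """Recognition model (stub): returns a toy fraud score."""
    return 0.9 if len(vectors) > 20 else 0.1

def recognize_address(address, threshold=0.5):
    """Service entry point: returns the fraud score and a fake/legit flag."""
    score = model_predict(embed(segment(address)))
    return {"address": address, "score": score, "is_fake": score > threshold}

result = recognize_address("Yueyang Street")
print(result["is_fake"])
```

In a real deployment this function would be registered behind an HTTP endpoint of the microservice framework, and the returned dictionary serialized as the SaaS response.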
Take the order-placing scenario on an e-commerce platform as an example; refer to Fig. 6. When a user chooses to buy a product, the order is confirmed, including confirmation of the user's basic information and purchase information. After verifying that the information is correct, the user clicks the "submit order" button, and the information submitted by the user is sent to the backend. In this scenario, the backend server extracts the address information and calls the corresponding fake-address recognition service module. If the address is recognized as a legitimate address, the user's order request is approved and an order-success message is returned; if the address is recognized as fake or suspected to be fake, the order request is rejected and an order-failure message is returned. The fake-address recognition method of this embodiment can thus defeat scalpers' attacks on goods at order time and reduce the e-commerce platform's losses.
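The backend gating logic just described can be sketched as follows. `recognize` is a hypothetical stand-in for the recognition service; the ending-digits heuristic is purely illustrative, not the invention's method.

```python
# Sketch of the order-submission flow: the backend extracts the delivery
# address, calls the recognition service, and accepts or rejects the order.
# `recognize` is a toy stand-in for the real SaaS recognition service.

def recognize(address):
    """Stand-in service: flags an obviously padded trailing room number."""
    return {"is_fake": address.endswith("12345")}

def handle_order(order):
    """Backend handler: gate the order on the fake-address check."""
    address = order["address"]
    if recognize(address)["is_fake"]:
        return {"status": "rejected", "reason": "suspected fake address"}
    return {"status": "accepted"}

ok = handle_order({"address": "No. 20, Wanjiasheng Zhiye 3rd Road"})
bad = handle_order({"address": "Chain Store Lavatory 12345"})
print(ok["status"], bad["status"])
```

The accept/reject decision maps directly to the order-success and order-failure messages returned to the user.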
Existing fake-address recognition schemes have several shortcomings. First, the recall rate is low: the large number of expert-crafted rules can only cover part of the specific fake addresses, and fake addresses such as fabricated building floors or fabricated room numbers cannot be identified by such algorithms. Second, the labour cost is enormous: because of the adversarial behaviour of scalpers, fake addresses appear in endlessly varying forms, and rule-based identification requires hiring many professionals to formulate different recognition rules through case analysis. These rules also age quickly, failing within a short time, so identifying fake addresses consumes large amounts of manpower and resources at high cost. Third, flexibility is low: once a recognition rule has been formulated, the system is hard to change, so when a new form of fake address appears, the recognition system cannot respond quickly. The present invention instead identifies fake addresses through semi-supervised learning: an address language model trained on a large address corpus learns the contextual information that constitutes a legitimate address, and thereby identifies illegitimate fake addresses.
The fake-address recognition method based on semi-supervised learning of the present invention has the following advantages.
The recall rate is high. Compared with conventional recognition methods, the address recognition model incorporating deep learning can effectively learn address context: whether the address exists, which city it is in, whether the building exists, and so on. This is equivalent to using a huge neural network to memorize all of China's legitimate address buildings and map information; whenever an input address is inconsistent with the geographic information or address context relations the network has memorized, the model judges that the address contains unreliable components. As a result, however fake addresses change, their legitimacy can be judged effectively.
The advantage of semi-supervised learning is that it does not require a large amount of labelled data. Given today's high labelling costs, it suffices to collect a large number of legitimate addresses to train the address language model, and then a small amount of labelled data to train an address recognition model better than traditional models. The trained fake-address recognition model achieves roughly 15% higher discrimination than the rule-based model, improving address recognition efficiency, reducing cost, and providing sufficient generalization ability.
This embodiment further provides a fake-address recognition apparatus; refer to Fig. 7. The apparatus comprises:
A recognition-model construction module 710, configured to construct an address recognition model in advance, the address recognition model comprising a language model and a classification model.
A word-vector generation module 720, configured to respond to an address recognition request, the address recognition request comprising the name of an address to be recognized, split the name of the address to be recognized character by character, generate a word vector corresponding to each character, and generate the word vector sequence of the address to be recognized according to the word vectors of the characters in the address to be recognized.
A feature extraction module 730, configured to input the word vector sequence into the language model for feature extraction, obtaining feature information corresponding to the address to be recognized, together with correction information for the address to be recognized produced during feature extraction.
A classification recognition module 740, configured to input the feature information and the correction information into the classification model to obtain the recognition result for the address to be recognized.
Referring to Fig. 8, the recognition-model construction module 710 comprises:
A language-model construction module 810 and a classification-model construction module 820.
Referring to Fig. 9, the language-model construction module 810 comprises:
An address-corpus acquisition module 910, configured to acquire unlabelled address corpus information, split the name of each address in the address corpus information character by character, generate a word vector corresponding to each character, and generate the word vector sequence of the address according to the word vectors of the characters in the address.
An autoencoder training module 920, configured to sequentially input the word vector sequence of each address in the address corpus information into an autoencoder and train the autoencoder.
A model extraction module 930, configured to extract the trained encoder of the autoencoder as the language model.
The autoencoder training module 920 further comprises an encoded-representation module, configured to obtain the address information to be output by the autoencoder and represent that address information in the form of an efficient encoding.
Referring to Fig. 10, the classification-model construction module 820 comprises:
A sample acquisition module 1010, configured to acquire labelled address samples and generate a word vector sequence corresponding to each address in the labelled address samples, the labelled address samples comprising fake-address samples and normal-address samples;
An information generation module 1020, configured to sequentially input the word vector sequence of each address into the language model and generate the feature information and correction information of each address;
A classification-model training module 1030, configured to train the classification model with the feature information and correction information of each address as the input of the classification model and the label of the address as the output of the classification model.
The apparatus provided in the above embodiment can perform the method provided by any embodiment of the present invention, and possesses the functional modules and beneficial effects corresponding to performing that method. For technical details not described in detail in the above embodiment, refer to the method provided by any embodiment of the present invention.
This embodiment further provides a device; refer to Fig. 11. The device may vary considerably depending on configuration and performance, and may include one or more central processing units (CPUs) 1122 (e.g. one or more processors), memory 1132, and one or more storage media 1130 (e.g. one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and storage medium 1130 may provide transient or persistent storage. The program stored in the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations on the device. Further, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and execute, on the device 1100, the series of instruction operations in the storage medium 1130. The device 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc. Any of the methods described above in this embodiment may be implemented based on the device shown in Fig. 11.
This embodiment further provides a computer-readable storage medium storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by a processor to perform any of the methods described above in this embodiment.
This specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on routine or non-inventive labour. The order of steps enumerated in the embodiments is only one of many possible execution orders and does not represent the only execution order. When an actual system or product executes, the steps may be executed sequentially according to the embodiments or the order shown in the drawings, or executed in parallel (for example, in a parallel-processor or multi-threaded environment).
The structures shown in this embodiment are only the parts relevant to the present solution and do not constitute a limitation on the device to which the solution is applied; a specific device may include more or fewer components than shown, combine certain components, or have a different arrangement of components. It should be understood that the methods, apparatus, and the like disclosed in this embodiment may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary: the division into modules is only a division of logical functions, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through interfaces, devices, or unit modules.
Based on this understanding, the technical solution of the present invention, in essence the part that contributes over the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The software product is stored on a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), and magnetic or optical disks.
Those skilled in the art will further appreciate that the example units and algorithm steps described in connection with the embodiments disclosed in this specification can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the invention has been explained in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the various embodiments of the present invention.
Claims (10)
1. A fake-address recognition method, characterized by comprising:
constructing an address recognition model in advance, the address recognition model comprising a language model and a classification model;
in response to an address recognition request, the address recognition request comprising the name of an address to be recognized, splitting the name of the address to be recognized character by character, generating a word vector corresponding to each character, and generating a word vector sequence of the address to be recognized according to the word vectors of the characters in the address to be recognized;
inputting the word vector sequence into the language model for feature extraction, obtaining feature information corresponding to the address to be recognized, together with correction information for the address to be recognized produced during feature extraction;
inputting the feature information and the correction information into the classification model to obtain a recognition result for the address to be recognized.
2. The fake-address recognition method according to claim 1, characterized in that the method further comprises the step of constructing the language model, the step of constructing the language model comprising:
acquiring unlabelled address corpus information, splitting the name of each address in the address corpus information character by character, generating a word vector corresponding to each character, and generating a word vector sequence of the address according to the word vectors of the characters in the address;
sequentially inputting the word vector sequence of each address in the address corpus information into an autoencoder and training the autoencoder;
extracting the trained encoder of the autoencoder as the language model.
3. The fake-address recognition method according to claim 2, characterized in that sequentially inputting the word vector sequence of each address in the address corpus information into the autoencoder and training the autoencoder comprises:
obtaining the address information to be output by the autoencoder, and representing the address information to be output in the form of an efficient encoding.
4. The fake-address recognition method according to claim 1, characterized in that the method further comprises the step of constructing the classification model based on the language model, the step of constructing the classification model based on the language model comprising:
acquiring labelled address samples and generating a word vector sequence corresponding to each address in the labelled address samples, the labelled address samples comprising fake-address samples and normal-address samples;
sequentially inputting the word vector sequence of each address into the language model to generate feature information and correction information of each address;
training the classification model with the feature information and the correction information of each address as the input of the classification model and the label of the address as the output of the classification model.
5. The fake-address recognition method according to claim 4, characterized in that updating of the model parameters of the language model is stopped when the classification model is constructed based on the language model.
6. The fake-address recognition method according to claim 1, characterized in that the classification model comprises a long short-term memory network and a fully connected classification layer.
7. A fake-address recognition apparatus, characterized by comprising:
a recognition-model construction module, configured to construct an address recognition model in advance, the address recognition model comprising a language model and a classification model;
a word-vector generation module, configured to respond to an address recognition request, the address recognition request comprising the name of an address to be recognized, split the name of the address to be recognized character by character, generate a word vector corresponding to each character, and generate a word vector sequence of the address to be recognized according to the word vectors of the characters in the address to be recognized;
a feature extraction module, configured to input the word vector sequence into the language model for feature extraction and obtain feature information corresponding to the address to be recognized, together with correction information for the address to be recognized produced during feature extraction;
a classification recognition module, configured to input the feature information and the correction information into the classification model to obtain a recognition result for the address to be recognized.
8. The fake-address recognition apparatus according to claim 7, characterized in that the recognition-model construction module comprises a language-model construction module, the language-model construction module comprising:
an address-corpus acquisition module, configured to acquire unlabelled address corpus information, split the name of each address in the address corpus information character by character, generate a word vector corresponding to each character, and generate a word vector sequence of the address according to the word vectors of the characters in the address;
an autoencoder training module, configured to sequentially input the word vector sequence of each address in the address corpus information into an autoencoder and train the autoencoder;
a model extraction module, configured to extract the trained encoder of the autoencoder as the language model.
9. The fake-address recognition apparatus according to claim 7, characterized in that the recognition-model construction module comprises a classification-model construction module, the classification-model construction module comprising:
a sample acquisition module, configured to acquire labelled address samples and generate a word vector sequence corresponding to each address in the labelled address samples, the labelled address samples comprising fake-address samples and normal-address samples;
an information generation module, configured to sequentially input the word vector sequence of each address into the language model and generate feature information and correction information of each address;
a classification-model training module, configured to train the classification model with the feature information and correction information of each address as the input of the classification model and the label of the address as the output of the classification model.
10. A device, characterized in that the device comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by the processor to implement the fake-address recognition method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910362906.0A CN110197284B (en) | 2019-04-30 | 2019-04-30 | False address identification method, false address identification device and false address identification equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110197284A true CN110197284A (en) | 2019-09-03 |
CN110197284B CN110197284B (en) | 2024-05-14 |
Family
ID=67752205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910362906.0A Active CN110197284B (en) | 2019-04-30 | 2019-04-30 | False address identification method, false address identification device and false address identification equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197284B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807685A (en) * | 2019-10-22 | 2020-02-18 | 上海钧正网络科技有限公司 | Information processing method, device, terminal and readable storage medium |
CN111695355A (en) * | 2020-05-26 | 2020-09-22 | 平安银行股份有限公司 | Address text recognition method, device, medium and electronic equipment |
CN111859956A (en) * | 2020-07-09 | 2020-10-30 | 睿智合创(北京)科技有限公司 | Address word segmentation method for financial industry |
CN112487120A (en) * | 2020-11-30 | 2021-03-12 | 上海寻梦信息技术有限公司 | Method, device and equipment for classifying recipient addresses and storage medium |
CN112749560A (en) * | 2019-10-30 | 2021-05-04 | 阿里巴巴集团控股有限公司 | Address text processing method, device and equipment and computer storage medium |
CN112818666A (en) * | 2021-01-29 | 2021-05-18 | 上海寻梦信息技术有限公司 | Address recognition method and device, electronic equipment and storage medium |
CN112818667A (en) * | 2021-01-29 | 2021-05-18 | 上海寻梦信息技术有限公司 | Address correction method, system, device and storage medium |
CN113111164A (en) * | 2020-02-13 | 2021-07-13 | 北京明亿科技有限公司 | Method and device for extracting information of alarm receiving and processing text residence based on deep learning model |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104965819A (en) * | 2015-07-12 | 2015-10-07 | 大连理工大学 | Biomedical event trigger word identification method based on syntactic word vector |
CN108509539A (en) * | 2018-03-16 | 2018-09-07 | 联想(北京)有限公司 | Information processing method electronic equipment |
CN108805583A (en) * | 2018-05-18 | 2018-11-13 | 连连银通电子支付有限公司 | Electric business fraud detection method, device, equipment and medium based on address of cache |
CN108876545A (en) * | 2018-06-22 | 2018-11-23 | 北京小米移动软件有限公司 | Order recognition methods, device and readable storage medium storing program for executing |
CN108920457A (en) * | 2018-06-15 | 2018-11-30 | 腾讯大地通途(北京)科技有限公司 | Address Recognition method and apparatus and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110197284B (en) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110197284A (en) | Fake address identification method, apparatus and device | |
CN109960800B (en) | Weakly supervised text classification method and device based on active learning | |
CN106202010B (en) | Method and apparatus for constructing legal-text syntax trees based on deep neural networks | |
US11449537B2 (en) | Detecting affective characteristics of text with gated convolutional encoder-decoder framework | |
CN106383816B (en) | Deep-learning-based recognition method for place names in Chinese ethnic-minority areas | |
US20200104409A1 (en) | Method and system for extracting information from graphs | |
JP2023539532A (en) | Text classification model training method, text classification method, device, equipment, storage medium and computer program | |
CN110083700A (en) | Enterprise public-opinion sentiment classification method and system based on convolutional neural networks | |
CN109359297A (en) | Relation extraction method and system | |
CN110598070B (en) | Application type identification method and device, server and storage medium | |
CN109189862A (en) | Knowledge base construction method for science-and-technology information analysis | |
CN109416695A (en) | Providing local service information in automated chatting | |
CN109271513B (en) | Text classification method, computer readable storage medium and system | |
CN109919175A (en) | Entity multi-classification method combining attribute information | |
CN116664719A (en) | Image redrawing model training method, image redrawing method and device | |
CN110084323A (en) | End-to-end semantic parsing system and training method | |
CN112347245A (en) | Opinion mining method and device for institutions in the investment and financing field, and electronic equipment | |
CN113011126A (en) | Text processing method and device, electronic equipment and computer readable storage medium | |
CN115905538A (en) | Event multi-label classification method, device, equipment and medium based on knowledge graph | |
CN110334340B (en) | Semantic analysis method and device based on rule fusion and readable storage medium | |
CN115017879A (en) | Text comparison method, computer device and computer storage medium | |
CN114330704A (en) | Statement generation model updating method and device, computer equipment and storage medium | |
CN113761188A (en) | Text label determination method and device, computer equipment and storage medium | |
CN113127604A (en) | Comment text-based fine-grained item recommendation method and system | |
CN112989182A (en) | Information processing method, information processing apparatus, information processing device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||