CN108124065A - Method for identifying and handling spam call content - Google Patents

Method for identifying and handling spam call content

Info

Publication number
CN108124065A
CN108124065A CN201711266910.4A
Authority
CN
China
Prior art keywords
disposal
identified
call
word
junk call
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711266910.4A
Other languages
Chinese (zh)
Inventor
陈晓莉
刘亭
丁帆
丁一帆
徐菁
林建洪
徐佳丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Ponshine Information Technology Co Ltd
Original Assignee
Zhejiang Ponshine Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Ponshine Information Technology Co Ltd filed Critical Zhejiang Ponshine Information Technology Co Ltd
Priority to CN201711266910.4A priority Critical patent/CN108124065A/en
Publication of CN108124065A publication Critical patent/CN108124065A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/436 Arrangements for screening incoming calls, i.e. evaluating the characteristics of a call before deciding whether to answer it
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L 25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Abstract

The invention discloses a method for identifying and handling spam call content. The specific steps of the invention are: S1, collecting call content; S2, converting the call content into text; S3, segmenting the text with the jieba word-segmentation tool; S4, building a call screening model from the segmented words using the LSTM algorithm and the DNN algorithm; S5, obtaining, through a softmax classifier, the call category output by the call screening model; S6, interrupting a call that the softmax classifier identifies as a spam call. The method of the invention for identifying and handling spam calls can analyze call content in real time and block the call immediately, achieving real-time blocking of spam calls.

Description

Method for identifying and handling spam call content
Technical field
The invention belongs to the technical field of telecommunications, and more particularly to a method for identifying and handling spam call content.
Background technology
With the continuous development of communication technology worldwide, people depend more and more on mobile communication. While the rapid development of mobile communication brings convenience, mobile communication is also used for commercial advertising, sales promotion and telecommunication fraud, which should be discovered and controlled in time through effective monitoring and filtering interception.
Automatically identifying spam calls and interrupting them in time, so as to protect people's lives and property, is therefore the purpose of the present invention.
For example, the invention patent with publication number CN103731832A discloses an anti-fraud system and method for phone calls and short messages. Its speech recognition module converts uploaded audio files into text files; an intelligent analysis module then analyzes the converted text files, first processing the above information through natural language processing and extracting key information such as telephone numbers, bank account numbers and prize-winning content, which is matched against a fraud information database that mainly stores textual fraud information of various kinds; an audio fingerprint database collects the audio fingerprints of various fraud recordings. That scheme focuses on matching call content against database information and regards a call as fraud if the match succeeds. However, fraudsters nowadays operate in many styles, while the database in that scheme only stores fraud information that fraudsters have already used, so the information it preserves is relatively narrow and cannot defend comprehensively against various harassing calls.
In view of the above defects of existing anti-fraud schemes, the present inventors, drawing on many years of practical experience and professional knowledge in designing and manufacturing such products, and applying scientific principles, actively pursued research and innovation in order to create a method for identifying and handling spam call content that improves on the existing methods of identifying and handling spam call content and is more practical. After continuous study and design, and after repeated trials and improvements, the present invention of practical value was finally created.
The content of the invention
In view of the above technical problems in the prior art, the present invention provides a method for identifying and handling spam call content. The invention analyzes the voice content of a call on the basis of deep learning technology so as to identify spam calls, interrupts spam calls in time through the call tear-down (disconnection) capability of the gateway exchange, and builds a real-time analysis and interception system, thereby solving the prior-art problem of failing to discover the changing characteristics of spam calls in time and improving the ability to identify spam calls accurately.
To achieve the above technical purpose, the present invention adopts the following technical scheme:
A method for identifying and handling spam call content, comprising (a minimal end-to-end sketch of these steps is given after step S6):
S1, collecting call content;
S2, converting the call content into text;
S3, segmenting the text with the jieba word-segmentation tool;
S4, building a call screening model from the segmented words using the LSTM algorithm and the DNN algorithm;
S5, obtaining, through a softmax classifier, the call category output by the call screening model;
S6, interrupting a call that the softmax classifier identifies as a spam call.
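As a non-limiting illustration, the following minimal Python sketch wires steps S2-S6 together. The regular-expression cleanup, the STOP_WORDS set and the SPAM_KEYWORDS rule are assumptions introduced only for the sketch; the keyword test merely stands in for the trained LSTM+DNN screening model of steps S4-S5, and the print statement stands in for the gateway tear-down of step S6.

```python
# Minimal sketch of steps S2-S6; the keyword rule is only a stand-in for the
# trained LSTM+DNN screening model, so the example stays runnable on its own.
import re
import jieba  # the word-segmentation tool named in step S3

STOP_WORDS = {"的", "了", "是", "请"}          # hypothetical preset "invalid words"
SPAM_KEYWORDS = {"洗钱", "中奖", "转账"}        # hypothetical keywords standing in for the model

def screen_call(transcript: str) -> str:
    """Return 'spam' or 'normal' for one call transcript (already converted to text, step S2)."""
    text = re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]", "", transcript)     # drop non-text characters
    words = [w for w in jieba.cut(text) if w not in STOP_WORDS]     # step S3 + stop-word removal
    # Steps S4-S5 would feed Word2Vec vectors of `words` into the LSTM+DNN model and
    # read the softmax output; a simple keyword test replaces that here.
    return "spam" if any(w in SPAM_KEYWORDS for w in words) else "normal"

if screen_call("您的账户涉嫌洗钱，请配合调查") == "spam":
    print("Step S6: tear down the call at the gateway exchange")    # placeholder action
```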
As a preference of the present invention, the method further includes, between steps S2 and S3: removing non-text portions from the text using regular-expression (regularization) matching.
As a preference of the present invention, invalid (stop) words are preset; if a word matches an invalid word, the word is deleted.
As a preference of the present invention, the words are converted into word vectors using the Word2Vec method, the word vectors are converted into a sentence vector using the LSTM algorithm, and the sentence vector is used as the input vector of the DNN algorithm to obtain the call screening model.
As a preference of the present invention, the text is divided into training samples and test samples; the call screening model is fitted with the training samples and verified with the test samples.
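A minimal sketch of this preference, assuming scikit-learn is available and that the toy `texts` and `labels` lists stand in for segmented call texts and their spam/normal labels; the 3:1 ratio of Embodiment 1 corresponds to test_size=0.25.

```python
# Hypothetical 3:1 split of labelled call texts into training and test samples.
from sklearn.model_selection import train_test_split

texts = ["您 的 账户 涉嫌 洗钱", "今晚 一起 吃饭 吗", "恭喜 您 中奖 了", "会议 改到 下午"]
labels = [1, 0, 1, 0]  # 1 = spam call, 0 = normal call (toy labels)

train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.25, random_state=42)  # 3:1 training-to-test ratio
```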
As a preference of the present invention, the call screening model is tuned using the self-learning capability of the neural network.
As a preference of the present invention, an activation function is added to the neural network, and the weights are adjusted until they become stable or reach a specified threshold.
As a preference of the present invention, the softmax classifier classifies the call type, specifically into spam calls and normal calls.
As a preference of the present invention, the call content is the voice data generated during the call and collected by a recording device.
The technical solution provided by the invention can include the following benefits:
1. The invention presets invalid words and removes ordinary vocabulary, so that only key vocabulary is screened, which shortens the recognition time while improving the accuracy of spam-call identification. 2. The jieba word-segmentation tool and the Word2Vec method are combined to process the call text, preparing the words for judgment by the call screening model. 3. The invention builds the call screening model from the LSTM algorithm and the DNN algorithm, so that calls are screened automatically. 4. The preliminary call screening model is tuned through the self-learning capability of the neural network; if the required precision is not reached after one round of training, training can be repeated until the required precision is met, so the invention judges spam calls with high accuracy. 5. If the softmax classifier outputs that a call is a spam call, the call can be interrupted in real time; the invention can judge whether a call is spam within a short time during the call itself, rather than having to judge the call category from a recording after the call has ended, thereby achieving the purpose of automatically identifying spam calls and interrupting them in time.
Description of the drawings
Fig. 1 is a working diagram of the present invention;
Fig. 2 is a schematic diagram of the deep learning algorithm of the present invention;
Fig. 3 is a schematic diagram of the Word2Vec algorithm structure of the present invention;
Fig. 4 is a schematic diagram of the LSTM algorithm structure of the present invention;
Fig. 5 is a working diagram of Embodiment 5 of the present invention;
Fig. 6 is a working diagram of Embodiment 6 of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Embodiment 1:
Step A1: Call content is collected by a recording device; a speech recognition device recognizes the recording data and converts it into text data;
Step A2: Non-text portions of the text data are removed using regular expressions;
Step A3: The samples are divided into training samples and test samples at a ratio of 3:1;
Step A4: Text segmentation: word segmentation is performed on the call text using the jieba word-segmentation tool;
Step A5: Invalid (stop) words are preset, and words in the text that match the invalid words are removed;
Step A6: The segmented words are converted into word vectors using the Word2Vec technique, i.e. the words are vectorized;
Step A7: The word vectors are converted into a sentence vector using the LSTM algorithm;
Step A8: The sentence vector is used as the input vector of the DNN classification model;
Step A9: The class with the largest probability value is chosen as the output category;
Step A10: The loss function is computed and back-propagated;
Step A11: The weights are adjusted until they become stable or reach a specified threshold;
Step A12: Model prediction and evaluation: the test samples are fed into the model, and the accuracy, recall and F-score of the model are computed (see the sketch at the end of this embodiment);
Step A13: Model tuning: the model is tuned using the self-learning capability of the neural network to reduce the gap between the actual value and the ideal value;
Step A14: Call category output: spam calls are finally identified according to the output of the softmax classifier.
The softmax classifier outputs either "spam call" or "normal call"; if the output of the softmax classifier is "spam call", the call is interrupted in time.
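The accuracy, recall and F-score of step A12 can be computed as in the following sketch; scikit-learn is an assumed dependency, and `test_y` / `pred_y` are hypothetical label vectors for the test samples (1 = spam call, 0 = normal call).

```python
# Hypothetical model evaluation for step A12: accuracy rate, recall rate and F value.
from sklearn.metrics import accuracy_score, recall_score, f1_score

test_y = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth labels of the test samples
pred_y = [1, 0, 1, 0, 0, 0, 1, 1]   # labels predicted by the call screening model

print("accuracy:", accuracy_score(test_y, pred_y))
print("recall:  ", recall_score(test_y, pred_y))
print("F value: ", f1_score(test_y, pred_y))
```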
Embodiment 2
As shown in Fig. 2, this embodiment takes the sentence "This is the Public Security Bureau; your account is suspected of money laundering; please cooperate with the investigation" as an example. The design of the invention is a hybrid network model based on LSTM and DNN.
In Fig. 2, the model is divided into three layers. The first layer uses Word2Vec to convert the words in the text into word vectors. The second layer is the LSTM layer: the word vectors produced by the first layer are fed into the LSTM layer, whose structure computes the influence of the preceding and following words on the current word and finally converts the individual word vectors into a sentence vector. The third layer is the DNN layer: the sentence vector produced by the second layer is used as the input layer, passed through the hidden layers, and the call category is output at the output layer using the softmax activation function.
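A minimal sketch of the three-layer structure of Fig. 2, written with the Keras API as an assumed implementation choice (the patent itself does not name a framework). The input is taken to be a sequence of 50 Word2Vec word vectors of dimension 100 per call; both numbers are hypothetical.

```python
# Hypothetical Keras realisation of the Word2Vec -> LSTM -> DNN -> softmax structure of Fig. 2.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 100)),          # 50 word vectors of dimension 100 (Word2Vec layer output)
    layers.LSTM(64),                        # LSTM layer: word vectors -> sentence vector
    layers.Dense(32, activation="relu"),    # DNN hidden layer
    layers.Dense(2, activation="softmax"),  # output layer: spam call vs. normal call
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```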
Embodiment 3
As shown in Fig. 3, this embodiment specifically introduces the role of the Word2Vec algorithm in the present invention.
To turn a natural-language-understanding problem into a problem a machine can handle, the first step is to digitize the symbols: the text is mapped, word by word, into a k-dimensional vector space. The Word2Vec algorithm converts the words in the segmented corpus into word vectors; a word vector trained by Word2Vec has the following form:
v_i = (a_0, a_1, ..., a_d)    (1)
In formula (1), d is the dimension of the word vector.
The implementation process of the Word2Vec algorithm is as follows (a minimal sketch using the gensim library is given after step A65):
Step A61: The keywords in the call-text feature library are counted; suppose there are m keywords;
Step A62: Each word is first converted into an n-dimensional one-hot vector x; taking the word "arrearage" as an example:
"arrearage" → [0, 0, 0, 0, 1, ..., 0, 0]
Step A63: The hidden layer has m neurons; since the input layer is an n-dimensional vector fully connected to the hidden layer, a weight matrix w of size n×m is needed to map the n-dimensional vector to the 1×m hidden-layer vector;
Step A64: The hidden layer is also fully connected to the output layer; a softmax classifier is added when the output units are computed, and the final weight matrix w is obtained by back-propagation. Multiplying the initial word vector by it, i.e. x*w, gives the final word vector W(i), a 1×m vector:
x*w = W(i) = [W_i1  W_i2 ...  W_im]
Step A65: The word vectors corresponding to the spam keywords appearing in each call are summed to obtain the text vector d of that call.
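A minimal sketch of steps A61-A65 using the gensim library (an assumed dependency, gensim ≥ 4); the two toy sentences, the keyword list and the 100-dimensional vector size are illustrative only. Gensim trains the projection internally, so the explicit one-hot and weight-matrix steps A62-A64 happen inside the Word2Vec call.

```python
# Hypothetical Word2Vec training and keyword-vector summation (steps A61-A65).
import numpy as np
from gensim.models import Word2Vec

corpus = [["您", "账户", "涉嫌", "洗钱"], ["恭喜", "您", "中奖"]]      # segmented call texts
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

keywords = ["洗钱", "中奖"]                                          # spam keywords found in one call
d = np.sum([w2v.wv[w] for w in keywords if w in w2v.wv], axis=0)     # step A65: text vector of the call
print(d.shape)   # (100,)
```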
Embodiment 4
As shown in Fig. 4, this embodiment specifically introduces the role of the LSTM algorithm in the present invention.
LSTM is a special kind of RNN (Recurrent Neural Network). An LSTM unit is composed of a cell and three gates (forget gate, input gate, output gate). It is precisely this special structure that allows an LSTM to select which information is forgotten and which information is remembered. At each time step t, the components of the LSTM unit are updated as follows:
f_t = σ(W_f·h_{t-1} + U_f·x_t + b_f)
i_t = σ(W_i·h_{t-1} + U_i·x_t + b_i)
a_t = tanh(W_a·h_{t-1} + U_a·x_t + b_a)
C_t = C_{t-1} ⊙ f_t + i_t ⊙ a_t
o_t = σ(W_o·h_{t-1} + U_o·x_t + b_o)
h_t = o_t ⊙ tanh(C_t)
Here σ denotes the sigmoid activation function and ⊙ denotes the Hadamard (element-wise) product; x_t is the input vector at time t and h_t is the hidden state; U_f, U_i, U_a, U_o are the weight matrices applied to x_t in the different gates, W_f, W_i, W_a, W_o are the weight matrices applied to h_{t-1} in the different gates, and b_f, b_i, b_a, b_o are the biases of the gates; f_t, i_t, C_t and o_t denote the forget gate, the input gate, the memory-cell state and the output gate, respectively.
The input to the LSTM layer is the word vectors output by the Word2Vec layer; each output of the Word2Vec layer corresponds to one LSTM input at a time step t. The output of the LSTM layer is fed into the input layer of the DNN classification model; the softmax activation function in the output layer computes the probability that the call belongs to each category, and the category with the largest probability is output as the call category.
The steps by which the LSTM algorithm converts the word vectors of a call text into a sentence vector are as follows (a minimal numeric sketch of one update step is given after step A77):
Step A71: The word vectors of a call text are arranged in order; suppose a call consists of m word vectors, i.e. x_1, x_2, ..., x_m;
Step A72: The model parameters W_f, U_f, b_f, W_a, U_a, b_a, W_i, U_i, b_i, W_o, U_o, b_o are initialized;
Step A73: x_1 is passed into the forget gate, f_t = σ(W_f·h_{t-1} + U_f·x_t + b_f), and the forget-gate weights W_f, U_f, b_f are updated;
Step A74: The input-gate parameters are updated: i_t = σ(W_i·h_{t-1} + U_i·x_t + b_i), a_t = tanh(W_a·h_{t-1} + U_a·x_t + b_a),
where W_i, U_i, b_i, W_a, U_a, b_a are the coefficients and biases of the linear relations and σ is the sigmoid activation function;
Step A75: The cell state of the model is updated: C_t = C_{t-1} ⊙ f_t + i_t ⊙ a_t, where ⊙ is the Hadamard product;
Step A76: The output-gate parameters are updated: o_t = σ(W_o·h_{t-1} + U_o·x_t + b_o), h_t = o_t ⊙ tanh(C_t), and the predicted value for the current sequence index is output;
Step A77: Steps A72 to A76 are repeated, and the final predicted value (the sentence vector) is output.
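A minimal numeric sketch of one LSTM update following the gate equations above; the parameter-dictionary layout, the toy dimensions (4-dimensional word vectors, 3-dimensional hidden state) and the random initialisation are all assumptions for illustration.

```python
# Hypothetical numpy implementation of one LSTM step (gate equations of Embodiment 4).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, p):
    """One LSTM update; p holds the W*, U*, b* parameters of the four gates."""
    f_t = sigmoid(p["Wf"] @ h_prev + p["Uf"] @ x_t + p["bf"])   # forget gate
    i_t = sigmoid(p["Wi"] @ h_prev + p["Ui"] @ x_t + p["bi"])   # input gate
    a_t = np.tanh(p["Wa"] @ h_prev + p["Ua"] @ x_t + p["ba"])   # candidate state
    C_t = C_prev * f_t + i_t * a_t                              # cell state (Hadamard products)
    o_t = sigmoid(p["Wo"] @ h_prev + p["Uo"] @ x_t + p["bo"])   # output gate
    h_t = o_t * np.tanh(C_t)                                    # hidden state
    return h_t, C_t

# Toy dimensions: 4-dimensional word vectors, 3-dimensional hidden state.
rng = np.random.default_rng(0)
dims = {"W": (3, 3), "U": (3, 4), "b": (3,)}
p = {k + g: rng.normal(size=dims[k]) for g in "fiao" for k in ("W", "U", "b")}
h, C = np.zeros(3), np.zeros(3)
for x in rng.normal(size=(5, 4)):      # 5 word vectors -> final h is the sentence vector
    h, C = lstm_step(x, h, C, p)
print(h)
```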
Embodiment 5
As shown in Fig. 5, this embodiment specifically introduces the role of the DNN classification model in the present invention.
The basic structure of the DNN model includes an input layer, several hidden layers and an output layer.
The DNN model uses a softmax classifier in the output layer to classify the call type into spam call or normal call. The (standard) softmax formula is:
P(y = i | x, θ) = exp(θ_i^T x) / Σ_j exp(θ_j^T x)
where P(y = i | x, θ) is the probability that sample x belongs to class i.
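A minimal numerically stable softmax matching the formula above; the two example scores are hypothetical output-layer values for the classes [spam call, normal call].

```python
# Hypothetical softmax computation for the output layer.
import numpy as np

def softmax(z):
    """P(y=i|x,theta) = exp(z_i) / sum_j exp(z_j), computed in a numerically stable way."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

scores = np.array([1.2, -0.3])                       # hypothetical output-layer scores
probs = softmax(scores)
label = ["spam call", "normal call"][int(np.argmax(probs))]
print(probs, label)
```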
The DNN neural-network algorithm proceeds in two stages. In the first stage, the inputs and outputs of the neurons of each layer are computed layer by layer, starting from the input layer and continuing up to the output layer. In the second stage, the output errors of the neurons of each layer are computed layer by layer starting from the output layer, and the connection weights and node thresholds of each layer are adjusted according to the gradient-descent principle so that the final output of the modified network approaches the desired value. If the required precision is not reached after one round of training, training can be repeated until the required precision is met.
Embodiment 6
As shown in Fig. 6, the network weight adjustment mechanism used in this application is briefly introduced.
Let the input vector be X = (x_1, x_2, ..., x_m)^T, whose components are the values of the evaluation indicators; let the hidden-layer output vector be h = (h_1, h_2, ..., h_L)^T; and let y be the actual output of the network, i.e. the effect evaluation value. The weight from input-layer node i to hidden-layer node j is W_ij, the weight from hidden-layer node j to the output-layer node is V_j, and θ_j and φ denote the thresholds of the hidden layer and the output layer, respectively. Then
h_j = f(Σ_i W_ij·x_i − θ_j),    y = f(Σ_j V_j·h_j − φ)
where f(x) is the activation function, chosen here to be the sigmoid function, i.e. f(x) = 1 / (1 + e^(−x)), which maps a variable into the interval (0, 1).
(1) Computing the error between the actual output of the network and the desired output
At time t, the actual output y_i(t) of the network is compared with the target output d_i(t) provided by the sample; the resulting output error ε_i(t) is defined as follows:
ε_i(t) = d_i(t) − y_i(t)
The generated error signal drives the control of the learning algorithm, whose purpose is to apply a series of corrective adjustments to the input weights of the neurons. The purpose of these corrective adjustments is to make the output signal y_i(t) approach the target output d_i(t) through step-by-step iteration, which can be achieved by minimizing the cost function E(t).
(2) Computing the adjustment amounts of the network weights
The adjustment amplitudes of the weights are
ΔW_ij(t) = η·ε_i(t)·x_i(t)
ΔV_j(t) = η·ε_i(t)·h_j(t)
where η is a positive constant representing the learning rate.
The weights after adjustment are
W_ij(t+1) = α·W_ij(t) + ΔW_ij(t)
V_j(t+1) = α·V_j(t) + ΔV_j(t)
where α is the momentum term, ΔW_ij(t) is the weight adjustment amplitude from the input layer to the hidden layer, and ΔV_j(t) is the weight adjustment amplitude from the hidden layer to the output layer.
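The adjustment amplitudes and the momentum update above can be written directly as in the following sketch; the array shapes (4 input indicators, 3 hidden nodes, one output), the values of η and α, and the random initial weights are assumptions for illustration, and the rule is applied exactly as stated rather than as a full gradient back-propagation.

```python
# Hypothetical weight adjustment: dW_ij = eta*err*x_i, dV_j = eta*err*h_j,
# then W(t+1) = alpha*W(t) + dW and V(t+1) = alpha*V(t) + dV (alpha = momentum term).
import numpy as np

def adjust(W, V, x, h, err, eta=0.1, alpha=0.9):
    dW = eta * err * x[:, None]     # broadcast over the hidden-node dimension
    dV = eta * err * h
    return alpha * W + dW, alpha * V + dV

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))         # input layer (4 indicators) -> hidden layer (3 nodes)
V = rng.normal(size=3)              # hidden layer -> single output node
x, h, err = rng.normal(size=4), rng.normal(size=3), 0.2
W, V = adjust(W, V, x, h, err)
```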
In the present invention, the overall implementation flow of the DNN neural-network algorithm is as follows (a minimal code sketch of this loop is given after step B8):
Step B1: The network connection weights and node thresholds are initialized, including the connection weights and biases from the input layer to the hidden layer and from the hidden layer to the output layer;
Step B2: A sample is taken as the input; each indicator value of the sample is an input value of the network;
Step B3: The outputs of the hidden-layer nodes are computed; each output is the weighted sum of the input-layer values and the connection weights, passed through the activation function;
Step B4: The outputs of the output-layer nodes are computed; each output is the weighted sum of the hidden-layer node outputs and the connection weights, passed through the activation function;
Step B5: The hidden-layer and output-layer errors are computed; the error is the difference between the actual output value of the model and the desired output;
Step B6: The connection weights and node thresholds are updated; the error is back-propagated and each connection weight is adjusted;
Step B7: Whether all samples have been taken is checked; if not, return to step B2 and continue the loop; if so, perform the following step;
Step B8: Whether the error is below the specified threshold is checked; if not, return to step B3 and continue the loop; if so, end the loop.
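The steps B1-B8 can be arranged as in the following loop skeleton; the network sizes, learning rate, momentum, stopping threshold and toy data are all assumptions, and the weight update reuses the simplified adjustment amplitudes of Embodiment 6 (node thresholds are left fixed in this sketch) rather than a full gradient derivation.

```python
# Hypothetical skeleton of the B1-B8 flow for a 1-hidden-layer network with a sigmoid output.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, d, hidden=3, eta=0.1, alpha=0.9, threshold=1e-3, max_epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=(X.shape[1], hidden))   # B1: input -> hidden weights
    V = rng.normal(scale=0.5, size=hidden)                 #     hidden -> output weights
    theta, phi = np.zeros(hidden), 0.0                     #     node thresholds (kept fixed here)
    for _ in range(max_epochs):
        sq_err = 0.0
        for x, target in zip(X, d):                        # B2: take one sample as input
            h = sigmoid(x @ W - theta)                     # B3: hidden-layer outputs
            y = sigmoid(h @ V - phi)                       # B4: output-layer output
            err = target - y                               # B5: error = desired - actual
            W = alpha * W + eta * err * x[:, None]         # B6: adjust weights (simplified rule)
            V = alpha * V + eta * err * h
            sq_err += err ** 2                             # B7: loop until all samples are used
        if sq_err < threshold:                             # B8: stop when the error is small enough
            break
    return W, V

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.9, 0.8], [0.1, 0.2]])
d = np.array([1.0, 0.0, 1.0, 0.0])
train(X, d)
```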
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the disclosure. The specification and embodiments are to be regarded as illustrative only, and the true scope and spirit of the disclosure are pointed out by the following claims.

Claims (9)

1. A method for identifying and handling spam call content, characterized by comprising:
S1, collecting call content;
S2, converting the call content into text;
S3, segmenting the text with the jieba word-segmentation tool;
S4, building a call screening model from the segmented words using the LSTM algorithm and the DNN algorithm;
S5, obtaining, through a softmax classifier, the call category output by the call screening model;
S6, interrupting a call that the softmax classifier identifies as a spam call.
2. The method for identifying and handling spam call content according to claim 1, characterized in that the method further includes, between steps S2 and S3: removing non-text portions from the text using regular-expression (regularization) matching.
3. The method for identifying and handling spam call content according to claim 1, characterized in that invalid words are preset, and if a word matches an invalid word, the word is deleted.
4. The method for identifying and handling spam call content according to claim 1, characterized in that the words are converted into word vectors using the Word2Vec method, the word vectors are converted into a sentence vector using the LSTM algorithm, and the sentence vector is used as the input vector of the DNN algorithm to obtain the call screening model.
5. The method for identifying and handling spam call content according to claim 2, characterized in that the text is divided into training samples and test samples, the call screening model is fitted with the training samples, and the call screening model is verified with the test samples.
6. The method for identifying and handling spam call content according to claim 5, characterized in that the call screening model is tuned using the self-learning capability of the neural network.
7. The method for identifying and handling spam call content according to claim 6, characterized in that an activation function is added to the neural network, and the weights are adjusted until they become stable or reach a specified threshold.
8. The method for identifying and handling spam call content according to claim 1, characterized in that the softmax classifier classifies the call type, specifically into spam calls and normal calls.
9. The method for identifying and handling spam call content according to claim 1, characterized in that the call content is the voice data generated during the call and collected by a recording device.
CN201711266910.4A 2017-12-05 2017-12-05 Method for identifying and handling spam call content Pending CN108124065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711266910.4A CN108124065A (en) 2017-12-05 2017-12-05 Method for identifying and handling spam call content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711266910.4A CN108124065A (en) 2017-12-05 2017-12-05 Method for identifying and handling spam call content

Publications (1)

Publication Number Publication Date
CN108124065A true CN108124065A (en) 2018-06-05

Family

ID=62228925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711266910.4A Pending CN108124065A (en) 2017-12-05 2017-12-05 A kind of method junk call content being identified with disposal

Country Status (1)

Country Link
CN (1) CN108124065A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108924333A (en) * 2018-06-12 2018-11-30 阿里巴巴集团控股有限公司 Fraudulent call recognition methods, device and system
CN109377983A (en) * 2018-10-18 2019-02-22 深圳壹账通智能科技有限公司 A kind of harassing call hold-up interception method and relevant device based on interactive voice
CN109410496A (en) * 2018-10-25 2019-03-01 北京交通大学 Attack early warning method, apparatus and electronic equipment
CN109658939A (en) * 2019-01-26 2019-04-19 北京灵伴即时智能科技有限公司 A kind of telephonograph access failure reason recognition methods
CN109688275A (en) * 2018-12-27 2019-04-26 中国联合网络通信集团有限公司 Harassing call recognition methods, device and storage medium
CN109905282A (en) * 2019-04-09 2019-06-18 国家计算机网络与信息安全管理中心 Fraudulent call prediction technique and forecasting system based on LSTM
CN110334110A (en) * 2019-05-28 2019-10-15 平安科技(深圳)有限公司 Natural language classification method, device, computer equipment and storage medium
CN110525821A (en) * 2019-08-20 2019-12-03 北京精英系统科技有限公司 A kind of system promoting separate waste collection site operation efficiency
CN110929506A (en) * 2019-12-04 2020-03-27 杭州安恒信息技术股份有限公司 Junk information detection method, device and equipment and readable storage medium
CN112193959A (en) * 2020-09-25 2021-01-08 浙江新再灵科技股份有限公司 Method and system for detecting abnormal sound of elevator
CN112478975A (en) * 2020-12-09 2021-03-12 浙江新再灵科技股份有限公司 Elevator door fault detection method based on audio features
CN112784038A (en) * 2019-10-23 2021-05-11 阿里巴巴集团控股有限公司 Information identification method, system, computing device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140278402A1 (en) * 2013-03-14 2014-09-18 Kent S. Charugundla Automatic Channel Selective Transcription Engine
US20150019226A1 (en) * 1999-06-10 2015-01-15 West View Research, Llc Computerized information apparatus
CN106504768A (en) * 2016-10-21 2017-03-15 百度在线网络技术(北京)有限公司 Phone testing audio frequency classification method and device based on artificial intelligence
CN106599933A (en) * 2016-12-26 2017-04-26 哈尔滨工业大学 Text emotion classification method based on the joint deep learning model
CN107222865A (en) * 2017-04-28 2017-09-29 北京大学 The communication swindle real-time detection method and system recognized based on suspicious actions
CN107306306A (en) * 2016-04-25 2017-10-31 腾讯科技(深圳)有限公司 Communicating number processing method and processing device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150019226A1 (en) * 1999-06-10 2015-01-15 West View Research, Llc Computerized information apparatus
US20140278402A1 (en) * 2013-03-14 2014-09-18 Kent S. Charugundla Automatic Channel Selective Transcription Engine
CN107306306A (en) * 2016-04-25 2017-10-31 腾讯科技(深圳)有限公司 Communicating number processing method and processing device
CN106504768A (en) * 2016-10-21 2017-03-15 百度在线网络技术(北京)有限公司 Phone testing audio frequency classification method and device based on artificial intelligence
CN106599933A (en) * 2016-12-26 2017-04-26 哈尔滨工业大学 Text emotion classification method based on the joint deep learning model
CN107222865A (en) * 2017-04-28 2017-09-29 北京大学 The communication swindle real-time detection method and system recognized based on suspicious actions

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108924333A (en) * 2018-06-12 2018-11-30 阿里巴巴集团控股有限公司 Fraudulent call recognition methods, device and system
CN109377983A (en) * 2018-10-18 2019-02-22 深圳壹账通智能科技有限公司 A kind of harassing call hold-up interception method and relevant device based on interactive voice
CN109410496A (en) * 2018-10-25 2019-03-01 北京交通大学 Attack early warning method, apparatus and electronic equipment
CN109688275A (en) * 2018-12-27 2019-04-26 中国联合网络通信集团有限公司 Harassing call recognition methods, device and storage medium
CN109658939A (en) * 2019-01-26 2019-04-19 北京灵伴即时智能科技有限公司 A kind of telephonograph access failure reason recognition methods
CN109905282A (en) * 2019-04-09 2019-06-18 国家计算机网络与信息安全管理中心 Fraudulent call prediction technique and forecasting system based on LSTM
CN110334110A (en) * 2019-05-28 2019-10-15 平安科技(深圳)有限公司 Natural language classification method, device, computer equipment and storage medium
CN110525821A (en) * 2019-08-20 2019-12-03 北京精英系统科技有限公司 A kind of system promoting separate waste collection site operation efficiency
CN112784038A (en) * 2019-10-23 2021-05-11 阿里巴巴集团控股有限公司 Information identification method, system, computing device and storage medium
CN110929506A (en) * 2019-12-04 2020-03-27 杭州安恒信息技术股份有限公司 Junk information detection method, device and equipment and readable storage medium
CN112193959A (en) * 2020-09-25 2021-01-08 浙江新再灵科技股份有限公司 Method and system for detecting abnormal sound of elevator
CN112478975A (en) * 2020-12-09 2021-03-12 浙江新再灵科技股份有限公司 Elevator door fault detection method based on audio features

Similar Documents

Publication Publication Date Title
CN108124065A (en) Method for identifying and handling spam call content
CN110084610B (en) Network transaction fraud detection system based on twin neural network
CN108566627A (en) A kind of method and system identifying fraud text message using deep learning
Schaaf et al. Enhancing decision tree based interpretation of deep neural networks through l1-orthogonal regularization
CN110853680A (en) double-BiLSTM structure with multi-input multi-fusion strategy for speech emotion recognition
CN105955959B (en) A kind of sensibility classification method and system
CN111309909B (en) Text emotion classification method based on hybrid model
CN109583565A (en) Forecasting Flood method based on the long memory network in short-term of attention model
CN109756632B (en) Fraud telephone analysis method based on multidimensional time sequence
CN109243494A (en) Childhood emotional recognition methods based on the long memory network in short-term of multiple attention mechanism
Partila et al. Pattern recognition methods and features selection for speech emotion recognition system
CN111222992A (en) Stock price prediction method of long-short term memory neural network based on attention mechanism
CN113488073A (en) Multi-feature fusion based counterfeit voice detection method and device
CN109308903A (en) Speech imitation method, terminal device and computer readable storage medium
CN115689008A (en) CNN-BilSTM short-term photovoltaic power prediction method and system based on ensemble empirical mode decomposition
CN114861835B (en) Noise hearing loss prediction system based on asymmetric convolution
Hu et al. pRNN: A recurrent neural network based approach for customer churn prediction in telecommunication sector
Kamalov et al. Deep learning regularization in imbalanced data
Hu et al. An efficient Long Short-Term Memory model based on Laplacian Eigenmap in artificial neural networks
Maqsudur Rahman et al. The prediction of coronavirus disease 2019 outbreak on Bangladesh perspective using machine learning: a comparative study
Chen et al. A modified HME architecture for text-dependent speaker identification
Kimura et al. New perspective of interpretability of deep neural networks
Kachwala et al. Predicting Rainfall from Historical Data Trends
Wang et al. Deep learning model for human activity recognition and prediction in smart Homes
Kajornrit et al. A comparative analysis of soft computing techniques used to estimate missing precipitation records

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180605