CN106875940A - A neural-network-based machine self-learning training method for knowledge graph construction - Google Patents

A neural-network-based machine self-learning training method for knowledge graph construction

Info

Publication number
CN106875940A
CN106875940A (application CN201710127387.0A)
Authority
CN
China
Prior art keywords
sentence
answer
vector
lambda
word
Prior art date
Legal status
Granted
Application number
CN201710127387.0A
Other languages
Chinese (zh)
Other versions
CN106875940B (en)
Inventor
刘颖博
王东亮
王洪斌
Current Assignee
Jilin Sheng Chuang Technology Co Ltd
Original Assignee
Jilin Sheng Chuang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jilin Sheng Chuang Technology Co Ltd filed Critical Jilin Sheng Chuang Technology Co Ltd
Priority to CN201710127387.0A priority Critical patent/CN106875940B/en
Publication of CN106875940A publication Critical patent/CN106875940A/en
Application granted granted Critical
Publication of CN106875940B publication Critical patent/CN106875940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques using neural networks
    • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L25/72 Speech or voice analysis techniques specially adapted for transmitting results of analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a neural-network-based machine self-learning training method for knowledge graph construction, comprising: obtaining a natural-language sentence uttered by a user, filtering and denoising the input sentence with a speech denoising algorithm, and determining a matching feedback sentence; if no match exists, providing an answer to the user's sentence according to a neural-network dialogue model. The latter comprises: configuring the encoding layer of the user-sentence model as a first neural network, in which the user's sentence is parsed to obtain a first intermediate vector representing the semantics of the sentence; and configuring the decoding layer of the dialogue generation model as a second neural network, in which the intermediate vector is parsed to obtain a vector group representing the answer sentence. By using a threshold speech denoising algorithm, the invention obtains a smaller mean square error and improves the signal-to-noise ratio of the reconstructed speech signal.

Description

A neural-network-based machine self-learning training method for knowledge graph construction
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a neural-network-based machine self-learning training method for knowledge graph construction.
Background
A chatterbot is a program that simulates human conversation or chat. Chatterbots arose because developers could put question-answer pairs of interest into a database; when a question is posed to the chatterbot, it finds the most similar stored question through a similarity-matching algorithm and, following the stored question-answer correspondence, returns the most appropriate answer to its chat partner.
However, in current robot chat scenarios, when no question identical or similar to the user's query can be found in the robot's knowledge base, the robot cannot return a correct or appropriate answer to the user.
Beyond the limited coverage of the robot's knowledge base, this limitation of the prior art also causes errors of semantic understanding, so that the user's experience of conversing with the robot is poor. Knowledge reasoning in the prior art is likewise limited: traditionally it is solved by rules hand-written by application developers, but exhaustively formulating such rules is inconceivable for a developer, since in natural language processing there will always be rules that cannot be written down completely. The robot therefore needs its own ability to learn and to draw inferences.
Summary of the invention
The present invention designs and develops a neural-network-based machine self-learning training method for knowledge graph construction; the threshold speech denoising algorithm it uses obtains a smaller mean square error and improves the signal-to-noise ratio of the reconstructed speech signal.
A further object of the invention is to train a dialogue generation model with a neural network; using the trained model, the robot can converse freely with the user.
The technical scheme provided by the present invention is:
A neural-network-based machine self-learning training method for knowledge graph construction, comprising:
obtaining a natural-language sentence uttered by the user, filtering and denoising the input sentence with a threshold speech denoising algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and the category of the preceding sentence;
determining a matching feedback sentence according to the category of the sentence;
if no matching feedback sentence exists, providing an answer to the user's sentence according to a neural-network dialogue model, comprising:
configuring the encoding layer of the user-sentence model as a first neural network, in which the user's sentence is parsed to obtain a first intermediate vector representing the semantics of the sentence;
configuring the decoding layer of the dialogue generation model as a second neural network, in which the intermediate vector is parsed to obtain a vector group representing the answer sentence; and
outputting the vector group representing the answer sentence as the answer to the question.
Preferably, parsing the sentence sent by the user in the first neural network comprises the following steps:
splitting the user's input sentence into minimal semantic word units in the encoding layer to obtain a plurality of word units, obtaining the attributes of each word unit, selecting at least one information-rich word as a center word, and feeding it in vector form, as the question vector group, into the input layer of the first neural network;
in the hidden layer of the first neural network, semantically parsing the output of the input layer together with the output of the hidden layer of the first neural network at the previous time step, and performing a linearly weighted combination to form the intermediate vector representing the sentence semantics.
Preferably, parsing the intermediate vector in the second neural network comprises the following steps:
receiving the intermediate vector in the decoding layer and feeding it into the input layer of the second neural network;
in the hidden layer of the second neural network, semantically parsing the intermediate vector from the input layer together with the output of the hidden layer of the second neural network at the previous time step, and sequentially generating several single vectors to form the answer vector group, where the semantics of each single vector in the answer vector group corresponds to the semantics of one minimal word unit in the answer output sentence;
outputting the answer vector group at the output layer of the second neural network.
Preferably, after the answer vector group is output as the answer output sentence, the answer output sentence is saved into the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
Preferably, after the knowledge-base matching computation, a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and the validity of the request flag bit decides whether the dialogue generation model is asked to provide the answer.
Preferably, the linearly weighted combination comprises the following steps:
Step 1: extract n groups of center-word data from the user's input sentences, where n is a positive integer; let x_i be the probability that the center word of a center-word data group appears within λ days, and y_i the probability that the center word of the previous sentence's center-word data group appears within λ days; establish the univariate regression model
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, ..., λ, and ω′_i is the weighted regression coefficient over the λ days.
Step 2: solve the formula of Step 1 by the least-squares method, computing the regression-coefficient estimate for each of the λ days:
ω̂_i = Σ_{j=1}^{n} [(x_ij − x̄)(y_ij − ȳ)] / Σ_{j=1}^{n} (x_ij − x̄)²
where ω̂_i is the estimate of the regression coefficient; x_ij is the probability that the center word of center-word data group j appears on day i; x̄ is the mean probability of the center-word data groups; y_ij is the probability that the center word of the previous-moment sentence of the j-th user-input group appears on day i; and ȳ is the mean probability of the previous-moment center-word data groups.
Step 3: normalize to obtain the weighted values:
ω_i = ω̂′_i / Σ_{i=1}^{j} ω̂′_i
where ω_i is the weighted value for the user's input sentence.
Preferably, the output needs to be screened by a human collaborator before use; when the output answer is accurate, it is stored into the knowledge base.
Preferably, the speech denoising algorithm comprises:
a. dividing the speech frames into mute frames and speech frames by endpoint detection;
b. for a mute frame, computing the power spectrum of the current frame as the noise power spectrum estimate; for a speech frame, computing the speech noise power spectrum estimate;
c. subtracting the noise power spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum;
d. deriving the denoised speech frame from the denoised speech power spectrum.
The speech noise power spectrum estimate is computed by the threshold function
f(I) = I for I ≥ λ; 0 for |I| ≤ λ/2; 3(2I − λ)²/λ − 2(2I − λ)³/λ² for λ/2 < I < λ; 3(2I + λ)²/λ − 2(2I + λ)³/λ² for −λ ≤ I < −λ/2,
where I is the noise power spectrum energy; the threshold λ is computed from N, the number of noise-signal frames; J = 1–5, a conversion coefficient; e, the natural constant; π, pi; f_c, the frequency of the noise signal; and τ(t) = 0.03t² + 0.6t + 0.1, where t is the decomposition scale, 1 ≤ t ≤ 4.
Beneficial effects of the present invention
The present invention designs and develops a neural-network-based machine self-learning training method for knowledge graph construction; the threshold speech denoising algorithm it uses obtains a smaller mean square error and improves the signal-to-noise ratio of the reconstructed speech signal.
A further object of the invention is to train a dialogue generation model with a neural network; using the trained model, the robot can converse freely with the user.
Brief description of the drawings
Fig. 1 is a flowchart of the neural-network-based machine self-learning training method for knowledge graph construction of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it with reference to the specification.
As shown in Fig. 1, the neural-network-based machine self-learning training method for knowledge graph construction provided by the present invention comprises:
S100: obtaining a natural-language sentence uttered by the user, filtering and denoising the input sentence with a threshold speech denoising algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and its category;
S200: determining a matching feedback sentence according to the category of the sentence;
S300: if no matching feedback sentence exists, providing an answer to the user's sentence according to a neural-network dialogue model, comprising:
S310: configuring the encoding layer of the user-sentence model as a first neural network, in which the user's sentence is parsed to obtain a first intermediate vector representing the semantics of the sentence;
S320: configuring the decoding layer of the dialogue generation model as a second neural network, in which the intermediate vector is parsed to obtain a vector group representing the answer sentence; and
S400: outputting the vector group representing the answer sentence as the answer to the question.
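The S100–S400 flow above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: the word-overlap scorer, the `Match` record, and the `respond`/`find_feedback` names are all assumptions, and the dialogue-model fallback (S300/S400) is passed in as a plain function.

```python
from dataclasses import dataclass

@dataclass
class Match:
    answer: str
    score: float

def find_feedback(sentence, knowledge_base):
    """S200: return the best-scoring stored QA pair (toy word-overlap score)."""
    words = set(sentence.split())
    best = None
    for question, answer in knowledge_base.items():
        overlap = len(words & set(question.split())) / max(len(words), 1)
        if best is None or overlap > best.score:
            best = Match(answer, overlap)
    return best

def respond(sentence, knowledge_base, generate_answer, match_threshold=0.5):
    match = find_feedback(sentence, knowledge_base)
    if match and match.score >= match_threshold:
        return match.answer           # matching feedback sentence found
    return generate_answer(sentence)  # S300/S400: dialogue-model fallback

kb = {"what is your name": "I am a chatbot.",
      "how is the weather": "It is sunny."}
print(respond("what is your name", kb, lambda s: "(model answer)"))
print(respond("tell me a story", kb, lambda s: "(model answer)"))
```

The threshold plays the role of the "predetermined matching degree" mentioned later for the request flag bit.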
When the sentence sent by the user is parsed in the first neural network in step S310, the following steps are included:
S311: splitting the user's input sentence into minimal semantic word units in the encoding layer to obtain a plurality of word units, obtaining the attributes of each word unit, selecting at least one information-rich word as a center word, and feeding it in vector form, as the question vector group, into the input layer of the first neural network;
S312: in the hidden layer of the first neural network, semantically parsing the output of the input layer together with the output of the hidden layer of the first neural network at the previous time step, and performing a linearly weighted combination to form the intermediate vector representing the sentence semantics.
In step S320, parsing the intermediate vector in the second neural network comprises the following steps:
S321: receiving the intermediate vector in the decoding layer and feeding it into the input layer of the second neural network;
S322: in the hidden layer of the second neural network, semantically parsing the intermediate vector from the input layer together with the output of the hidden layer of the second neural network at the previous time step, and sequentially generating several single vectors to form the answer vector group, where the semantics of each single vector in the answer vector group corresponds to the semantics of one minimal word unit in the answer output sentence;
outputting the answer vector group at the output layer of the second neural network.
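Steps S311–S322 describe a recurrent encoder–decoder. The sketch below, in plain NumPy, is only a minimal illustration of that shape: the dimension, the random weights, and the function names are assumptions, and a real system would learn the weight matrices by training.

```python
import numpy as np

D = 8  # illustrative word-vector / hidden-layer size
rng = np.random.default_rng(0)
W_in, W_hid, W_y = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))

def encode(word_vectors):
    """S311-S312: fold the sentence's word-unit vectors into one intermediate
    vector; each step combines the input-layer output with the hidden layer's
    output from the previous time step."""
    h = np.zeros(D)
    for x in word_vectors:
        h = np.tanh(W_in @ x + W_hid @ h)
    return h  # intermediate vector representing the sentence semantics

def decode(intermediate, n_steps):
    """S321-S322: unroll from the intermediate vector, emitting one single
    vector per minimal word unit of the answer sentence."""
    h, y, outputs = np.zeros(D), intermediate, []
    for _ in range(n_steps):
        h = np.tanh(W_y @ y + W_hid @ h)  # previous output + previous hidden state
        y = h
        outputs.append(y)
    return outputs  # the answer vector group

question = [rng.standard_normal(D) for _ in range(4)]  # 4 word units
answer_vectors = decode(encode(question), n_steps=3)
```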
In another embodiment, after the answer vector group is output as the answer output sentence, the answer output sentence is saved into the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
In another embodiment, after the knowledge-base matching computation, a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and the validity of the request flag bit decides whether the dialogue generation model is asked to provide the answer.
In another embodiment, the linearly weighted combination in step S312 comprises the following steps:
Step 1: extract n groups of center-word data from the user's input sentences, where n is a positive integer; let x_i be the probability that the center word of a center-word data group appears within λ days, and y_i the probability that the center word of the previous sentence's center-word data group appears within λ days; establish the univariate regression model
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, ..., λ, and ω′_i is the weighted regression coefficient over the λ days.
Step 2: solve the formula of Step 1 by the least-squares method, computing the regression-coefficient estimate for each of the λ days:
ω̂_i = Σ_{j=1}^{n} [(x_ij − x̄)(y_ij − ȳ)] / Σ_{j=1}^{n} (x_ij − x̄)²
where ω̂_i is the estimate of the regression coefficient; x_ij is the probability that the center word of center-word data group j appears on day i; x̄ is the mean probability of the center-word data groups; y_ij is the probability that the center word of the previous-moment sentence of the j-th user-input group appears on day i; and ȳ is the mean probability of the previous-moment center-word data groups.
Step 3: normalize to obtain the weighted values:
ω_i = ω̂′_i / Σ_{i=1}^{j} ω̂′_i
where ω_i is the weighted value for the user's input sentence.
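Under the reading above, Step 2 is an ordinary per-day least-squares slope estimate over the n groups, and Step 3 normalises the estimates into weights. A NumPy sketch under that assumption (the array shapes and names are illustrative, not from the patent):

```python
import numpy as np

def regression_weights(x, y):
    """x, y: arrays of shape (n_groups, n_days) holding the occurrence
    probabilities of the center words of the current and previous sentences.
    Returns one normalised weight per day."""
    x_bar = x.mean(axis=0)                 # per-day mean over the n groups
    y_bar = y.mean(axis=0)
    num = ((x - x_bar) * (y - y_bar)).sum(axis=0)
    den = ((x - x_bar) ** 2).sum(axis=0)
    omega_hat = num / den                  # Step 2: least-squares estimates
    return omega_hat / omega_hat.sum()     # Step 3: normalised weights

rng = np.random.default_rng(1)
x = rng.random((5, 3))                     # n = 5 groups, lambda = 3 days
y = 0.5 * x + 0.05 * rng.random((5, 3))    # correlated toy probabilities
weights = regression_weights(x, y)
```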
Preferably, the output needs to be screened by a human collaborator before use; when the output answer is accurate, it is stored into the knowledge base.
In another embodiment, the threshold speech denoising algorithm in step S100 comprises:
a. dividing the speech frames into mute frames and speech frames by endpoint detection;
b. for a mute frame, computing the power spectrum of the current frame as the noise power spectrum estimate; for a speech frame, computing the speech noise power spectrum estimate;
c. subtracting the noise power spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum;
d. deriving the denoised speech frame from the denoised speech power spectrum.
The speech noise power spectrum estimate is computed by the threshold function
f(I) = I for I ≥ λ; 0 for |I| ≤ λ/2; 3(2I − λ)²/λ − 2(2I − λ)³/λ² for λ/2 < I < λ; 3(2I + λ)²/λ − 2(2I + λ)³/λ² for −λ ≤ I < −λ/2,
where I is the noise power spectrum energy; the threshold λ is computed from N, the number of noise-signal frames; J = 1–5, a conversion coefficient; e, the natural constant; π, pi; f_c, the frequency of the noise signal; and τ(t) = 0.03t² + 0.6t + 0.1, where t is the decomposition scale, 1 ≤ t ≤ 4.
That is, the noise profile of the speech is first obtained by a speech acquisition device; the speech frames are divided into mute frames and speech frames by endpoint detection; for a mute frame, the power spectrum of the current frame is computed as the noise power spectrum estimate, while for a speech frame the speech noise power spectrum estimate is computed as above; the noise power spectrum estimate is then subtracted from the power spectrum of the speech frame to obtain the denoised speech power spectrum, from which the denoised speech frame is derived.
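The spectral-subtraction loop of steps a–d can be sketched as follows. This is a generic sketch, not the patent's exact algorithm: the threshold function f(I) and the threshold λ are replaced here by a simple floor at zero, and endpoint detection is assumed to have already produced the mute/speech labels.

```python
import numpy as np

def spectral_subtraction(frames, is_speech, alpha=1.0):
    """Steps a-d sketch: average the power spectra of the mute frames as the
    noise estimate, subtract it from each frame's power spectrum, and
    resynthesise the frames with the original phase."""
    spectra = np.fft.rfft(frames, axis=1)
    power = np.abs(spectra) ** 2
    noise_power = power[~is_speech].mean(axis=0)                # a, b: noise estimate
    clean_power = np.maximum(power - alpha * noise_power, 0.0)  # c: subtract, floor at 0
    # d: rebuild frames from the denoised magnitude and original phase
    clean_spectra = np.sqrt(clean_power) * np.exp(1j * np.angle(spectra))
    return np.fft.irfft(clean_spectra, n=frames.shape[1], axis=1)

rng = np.random.default_rng(2)
frames = rng.standard_normal((6, 64)) * 0.1          # low-level noise everywhere
frames[3:] += np.sin(np.linspace(0, 8 * np.pi, 64))  # frames 3..5 carry a tone
is_speech = np.array([False, False, False, True, True, True])
out = spectral_subtraction(frames, is_speech)
```

After subtraction the mute frames retain little energy while the tonal speech frames keep most of theirs, which is the intended effect of step c.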
The present invention designs and develops a neural-network-based machine self-learning training method for knowledge graph construction; the threshold speech denoising algorithm it uses obtains a smaller mean square error and improves the signal-to-noise ratio of the reconstructed speech signal, and a dialogue generation model is trained with a neural network so that, using the trained model, the robot can converse freely with the user.
Although embodiments of the present invention are disclosed above, they are not restricted to the applications listed in the specification and the embodiments, and can be applied to various fields suitable for the present invention. Additional modifications can easily be realized by those skilled in the art; therefore, without departing from the general concept defined by the claims and their scope of equivalents, the present invention is not limited to the specific details or to the illustrations and descriptions shown herein.

Claims (9)

1. A neural-network-based machine self-learning training method for knowledge graph construction, characterized by comprising:
obtaining a natural-language sentence uttered by the user, filtering and denoising the input sentence with a speech denoising algorithm, obtaining the category of the sentence, and obtaining the preceding sentence and the category of the preceding sentence;
determining a matching feedback sentence according to the category of the sentence;
if no matching feedback sentence exists, providing an answer to the user's sentence according to a neural-network dialogue model, comprising:
configuring the encoding layer of the user-sentence model as a first neural network, in which the user's sentence is parsed to obtain a first intermediate vector representing the semantics of the sentence;
configuring the decoding layer of the dialogue generation model as a second neural network, in which the intermediate vector is parsed to obtain a vector group representing the answer sentence; and
outputting the vector group representing the answer sentence as the answer to the question.
2. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 1, characterized in that parsing the sentence sent by the user in the first neural network comprises the following steps:
splitting the user's input sentence into minimal semantic word units in the encoding layer to obtain a plurality of word units, obtaining the attributes of each word unit, selecting at least one information-rich word as a center word, and feeding it in vector form, as the question vector group, into the input layer of the first neural network;
in the hidden layer of the first neural network, semantically parsing the output of the input layer together with the output of the hidden layer of the first neural network at the previous time step, and performing a linearly weighted combination to form the intermediate vector representing the sentence semantics.
3. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 2, characterized in that parsing the intermediate vector in the second neural network comprises the following steps:
receiving the intermediate vector in the decoding layer and feeding it into the input layer of the second neural network;
in the hidden layer of the second neural network, semantically parsing the intermediate vector from the input layer together with the output of the hidden layer of the second neural network at the previous time step, and sequentially generating several single vectors to form the answer vector group, where the semantics of each single vector in the answer vector group corresponds to the semantics of one minimal word unit in the answer output sentence;
outputting the answer vector group at the output layer of the second neural network.
4. The neural-network-based machine self-learning training method for knowledge graph construction according to any one of claims 1-3, characterized in that, after the answer vector group is output as the answer output sentence, the answer output sentence is saved into the knowledge base together with the corresponding dialogue input sentence, so as to update and expand the knowledge base.
5. The neural-network-based machine self-learning training method for knowledge graph construction according to any one of claims 1-3, characterized in that, after the knowledge-base matching computation, a request flag bit is set according to whether the knowledge base contains a dialogue sentence whose matching degree with the dialogue input sentence reaches a predetermined value, and the validity of the request flag bit decides whether the dialogue generation model is asked to provide the answer.
6. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 2, characterized in that the linearly weighted combination comprises the following steps:
Step 1: extract n groups of center-word data from the user's input sentences, where n is a positive integer; let x_i be the probability that the center word of a center-word data group appears within λ days, and y_i the probability that the center word of the previous sentence's center-word data group appears within λ days; establish the univariate regression model
y_i = ω′_i · x_i
where i is an integer, i = 1, 2, 3, ..., λ, and ω′_i is the weighted regression coefficient over the λ days;
Step 2: solve the formula of Step 1 by the least-squares method, computing the regression-coefficient estimate for each of the λ days:
ω̂_i = Σ_{j=1}^{n} [(x_ij − x̄)(y_ij − ȳ)] / Σ_{j=1}^{n} (x_ij − x̄)²
where ω̂_i is the estimate of the regression coefficient; x_ij is the probability that the center word of center-word data group j appears on day i; x̄ is the mean probability of the center-word data groups; y_ij is the probability that the center word of the previous-moment sentence of the j-th user-input group appears on day i; and ȳ is the mean probability of the previous-moment center-word data groups;
Step 3: normalize to obtain the weighted values:
ω_i = ω̂′_i / Σ_{i=1}^{j} ω̂′_i
where ω_i is the weighted value for the user's input sentence.
7. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 6, characterized in that the output needs to be screened by a human collaborator before use; when the output answer is accurate, it is stored into the knowledge base.
8. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 1, characterized in that the speech denoising algorithm comprises:
a. dividing the speech frames into mute frames and speech frames by endpoint detection;
b. for a mute frame, computing the power spectrum of the current frame as the noise power spectrum estimate; for a speech frame, computing the speech noise power spectrum estimate;
c. subtracting the noise power spectrum estimate from the power spectrum of the speech frame to obtain the denoised speech power spectrum;
d. deriving the denoised speech frame from the denoised speech power spectrum.
9. The neural-network-based machine self-learning training method for knowledge graph construction according to claim 8, characterized in that the speech noise power spectrum estimate is computed by the threshold function
f(I) = I for I ≥ λ; 0 for |I| ≤ λ/2; 3(2I − λ)²/λ − 2(2I − λ)³/λ² for λ/2 < I < λ; 3(2I + λ)²/λ − 2(2I + λ)³/λ² for −λ ≤ I < −λ/2,
where I is the noise power spectrum energy; the threshold λ is computed from N, the number of noise-signal frames; J = 1–5, a conversion coefficient; e, the natural constant; π, pi; f_c, the frequency of the noise signal; and τ(t) = 0.03t² + 0.6t + 0.1, where t is the decomposition scale, 1 ≤ t ≤ 4.
CN201710127387.0A 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network Active CN106875940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710127387.0A CN106875940B (en) 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710127387.0A CN106875940B (en) 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network

Publications (2)

Publication Number Publication Date
CN106875940A (en) 2017-06-20
CN106875940B (en) 2020-08-14

Family

ID=59171199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710127387.0A Active CN106875940B (en) 2017-03-06 2017-03-06 Machine self-learning construction knowledge graph training method based on neural network

Country Status (1)

Country Link
CN (1) CN106875940B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1647069A (en) * 2002-04-11 2005-07-27 株式会社PtoPA Conversation control system and conversation control method
US20060020473A1 (en) * 2004-07-26 2006-01-26 Atsuo Hiroe Method, apparatus, and program for dialogue, and storage medium including a program stored therein
CN104217226A (en) * 2014-09-09 2014-12-17 天津大学 Dialogue act identification method based on deep neural networks and conditional random fields
CN105704013A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Context-based topic updating data processing method and apparatus
CN105787560A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Dialogue data interaction processing method and device based on recurrent neural network
CN106055662A (en) * 2016-06-02 2016-10-26 竹间智能科技(上海)有限公司 Emotion-based intelligent conversation method and system

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019024704A1 (en) * 2017-08-03 2019-02-07 阿里巴巴集团控股有限公司 Entity annotation method, intention recognition method and corresponding devices, and computer storage medium
CN109388793A (en) * 2017-08-03 2019-02-26 阿里巴巴集团控股有限公司 Entity mask method, intension recognizing method and corresponding intrument, computer storage medium
CN109388793B (en) * 2017-08-03 2023-04-07 阿里巴巴集团控股有限公司 Entity marking method, intention identification method, corresponding device and computer storage medium
CN109933773B (en) * 2017-12-15 2023-05-26 上海擎语信息科技有限公司 Multiple semantic statement analysis system and method
CN109933773A (en) * 2017-12-15 2019-06-25 上海擎语信息科技有限公司 Multiple semantic statement analysis system and method
CN108108449A (en) * 2017-12-27 2018-06-01 哈尔滨福满科技有限责任公司 Implementation method and system of a question answering system based on multi-source heterogeneous data for the medical field
CN108389614B (en) * 2018-03-02 2021-01-19 西安交通大学 Method for constructing medical image map based on image segmentation and convolutional neural network
CN108389614A (en) * 2018-03-02 2018-08-10 西安交通大学 Method for constructing medical image map based on image segmentation and convolutional neural network
CN113316752A (en) * 2019-01-24 2021-08-27 索尼半导体解决方案公司 Voltage control device
CN110349463A (en) * 2019-07-10 2019-10-18 南京硅基智能科技有限公司 Reverse tutoring system and method
WO2021190389A1 (en) * 2020-03-25 2021-09-30 阿里巴巴集团控股有限公司 Speech processing method, speech encoder, speech decoder, and speech recognition system
CN112309183A (en) * 2020-11-12 2021-02-02 江苏经贸职业技术学院 Interactive listening and speaking exercise system suitable for foreign language teaching
CN112528039A (en) * 2020-12-16 2021-03-19 中国联合网络通信集团有限公司 Word processing method, device, equipment and storage medium
CN112487173A (en) * 2020-12-18 2021-03-12 北京百度网讯科技有限公司 Man-machine conversation method, device and storage medium
CN112487173B (en) * 2020-12-18 2021-09-10 北京百度网讯科技有限公司 Man-machine conversation method, device and storage medium

Similar Documents

Publication Publication Date Title
CN106875940A (en) Machine self-learning knowledge graph construction training method based on neural network
CN108536681B (en) Intelligent question-answering method, device, equipment and storage medium based on emotion analysis
CN108734276B (en) Simulated learning dialogue generation method based on confrontation generation network
CN105787560B (en) Dialogue data interaction processing method and device based on recurrent neural network
CN109036465B (en) Speech emotion recognition method
CN106095950B (en) Answer generation method for teaching intent in human-computer dialogue
Rothe et al. Question asking as program generation
CN108829756B (en) Method for solving multi-turn video question and answer by using hierarchical attention context network
CN106779053A (en) Knowledge point assessment method considering influencing factors and neural network
CN109710744A (en) Data matching method, device, equipment and storage medium
CN110457661A (en) Natural language generation method, device, equipment and storage medium
CN113761156A (en) Data processing method, device and medium for man-machine interaction conversation and electronic equipment
CN115393933A (en) Video face emotion recognition method based on frame attention mechanism
Platonov et al. A spoken dialogue system for spatial question answering in a physical blocks world
Liu et al. Perspective-corrected spatial referring expression generation for human–robot interaction
Adewale et al. Pixie: a social chatbot
CN109190116A (en) Semantic analysis method, system, electronic equipment and storage medium
Yang et al. Multi-intent text classification using dual channel convolutional neural network
CN114168769B (en) Visual question-answering method based on GAT relation reasoning
CN113761149A (en) Dialogue information processing method, device, computer equipment and storage medium
Katsumi et al. Optimization of information-seeking dialogue strategy for argumentation-based dialogue system
CN114625986A (en) Method, device and equipment for sorting search results and storage medium
CN115408500A (en) Question-answer consistency evaluation method and device, electronic equipment and medium
Singh et al. Human perception based criminal identification through human robot interaction
CN115186072A (en) Knowledge graph visual question-answering method based on double-process cognitive theory

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
    Inventors after: Wang Dongliang; Liu Yingbo; Wang Hongbin; Jiang Yuji; Yao Xing; Yu Yanlong; Li Xiaowen; Zhang Zhiwei; Zhang Lingling
    Inventors before: Liu Yingbo; Wang Dongliang; Wang Hongbin
GR01 Patent grant