CN108563669A - Intelligent system for automatically performing app operations - Google Patents

An intelligent system for automatically performing app operations

Info

Publication number
CN108563669A
CN108563669A (application CN201810017031.6A / CN201810017031A)
Authority
CN
China
Prior art keywords
parameter
user
interface
app
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810017031.6A
Other languages
Chinese (zh)
Other versions
CN108563669B (en)
Inventor
高徐睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201810017031.6A priority Critical patent/CN108563669B/en
Publication of CN108563669A publication Critical patent/CN108563669A/en
Application granted granted Critical
Publication of CN108563669B publication Critical patent/CN108563669B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The present invention relates to an intelligent system that can perform app operations automatically, comprising: a session management module; a pre-trained intent recognition model and parameter extraction model; an intent-to-parameter correspondence; user interaction rules; an intent-to-interface correspondence; an interface-to-parameter correspondence; and a list of not-yet-extracted parameters. The system operates as follows: S1 preprocessing; S2 first check of whether the system is in a user interaction, with corresponding handling; S3 ellipsis recovery; S4 intent recognition; S5 parameter extraction; S6 update of the not-yet-extracted parameter list; S7 second check of whether the system is in a user interaction, with corresponding handling; S8 completion of the app operation; S9 exit the flow. With the present invention, operating an app becomes more convenient and faster.

Description

An intelligent system for automatically performing app operations
Technical field
The present invention relates to the field of intelligent systems, and more particularly to an intelligent system that operates apps automatically.
Background art
With the rapid development of smartphones, people use mobile apps more and more. However, using an app can feel laborious, especially when time is short: for example, to query a route in Amap, the user must first enter Amap's route screen and can only run the query after entering the departure place and the destination.
Summary of the invention
The technical problem to be solved by the invention is to provide an intelligent system that can perform app operations automatically, so that users can use apps more conveniently and quickly.
In order to solve the above technical problem, the present invention adopts the following technical solution:
An intelligent system for automatically performing app operations, characterized in that the system comprises: a session management module; a pre-trained intent and intent-qualifier recognition model and parameter extraction models; an intent-to-parameter correspondence; user interaction rules; an intent-to-interface correspondence; an interface-to-parameter correspondence; and a list of not-yet-extracted parameters. The system operates as follows:
S1 preprocessing: the natural-language question entered by the user is preprocessed by removing special characters, removing stop words, and performing word segmentation;
S2 first in-interaction check and handling: based on whether the not-yet-extracted parameter list is empty, determine whether the current question is part of an interaction that is collecting the remaining parameter values from the user; if the list is empty, the system is not in such an interaction and S3 is executed; otherwise, the parameter value entered by the user is stored, the parameter is removed from the not-yet-extracted parameter list, and S7 is executed;
S3 ellipsis recovery: contextual ellipsis recovery is performed on the preprocessed question;
S4 intent recognition: the intent and intent-qualifier recognition model is applied to the question after ellipsis recovery to extract its intent and intent qualifier;
S5 parameter extraction: according to the recognized user intent, the parameter extraction model corresponding to that intent is called to extract the parameters and parameter values from the question after ellipsis recovery, and they are stored in a map whose keys are parameters and whose values are parameter values;
S6 update of the not-yet-extracted parameter list: according to the intent-to-parameter correspondence, the parameters of the user intent whose values have not yet been extracted are determined and stored in the not-yet-extracted parameter list;
S7 second in-interaction check and handling: if the not-yet-extracted parameter list is empty, S8 is executed; otherwise, an arbitrary parameter is taken from the list, a question to the user is generated according to the pre-established user interaction rules, and S9 is executed;
S8 completion of the app operation: the app operation is completed by calling a series of interfaces;
S9 exit the flow.
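For illustration, a minimal Python sketch of one dialog turn through the S1-S9 flow follows. The helper names (preprocess, recover_ellipsis, recognize_intent, extract_parameters, ask_user, call_interfaces), the per-session state object, and the INTENT_PARAMS map (the intent-to-parameter correspondence, sketched further below) are assumptions for illustration, not components named by the description.

```python
# Sketch of one dialog turn through the S1-S9 flow (helper names are assumed).
def handle_turn(state, user_text):
    text = preprocess(user_text)                          # S1: strip special chars/stop words, segment
    if state.pending_params:                              # S2: already collecting missing values?
        param = state.pending_params.pop(0)               #     the parameter asked for last turn
        state.param_values[param] = text                  #     store the supplied value
    else:
        text = recover_ellipsis(text, state)              # S3: contextual completion
        state.intent, state.qualifier = recognize_intent(text)              # S4
        state.param_values.update(extract_parameters(state.intent, text))   # S5
        state.pending_params = [p for p in INTENT_PARAMS[state.intent]      # S6
                                if p not in state.param_values]
    if state.pending_params:                              # S7: a value is still missing
        return ask_user(state.pending_params[0])          #     e.g. "Please enter <parameter>"
    return call_interfaces(state.intent, state.qualifier, state.param_values)  # S8, then S9 exit
```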
Preferably, the intent recognition model is trained as follows: a trained word vector model is used to convert user questions into word vectors, which are then trained with an open-source neural network training tool.
More preferably, the word vector model is trained directly on word groups (n-gram phrases), without first training a language model.
More preferably, the word vector training method is as follows: 1. let D denote the dictionary and w an element of D; let X denote all contiguous n-gram phrases in the training set and x an element of X, representing a positive sample; x_w is the contiguous n-gram obtained by replacing the middle word of x with w, representing a negative sample;
2. a window scoring function f(x) is introduced to score x and measure how correct the word order of the n-gram in x is, and the window scoring function f(x) is optimized with a ranking criterion over the positive samples x and the negative samples x_w;
3. the optimization of the window scoring function f(x) is trained with a conventional neural network;
the training yields word vectors in distributed-representation form. The present invention uses distributed-representation word vectors, i.e., dense, low-dimensional real-valued vectors in which each dimension represents a latent feature of the word; these features capture useful syntactic and semantic properties. Their characteristic is that the different syntactic and semantic features of a word are distributed across the dimensions of the word vector.
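As a concrete illustration of steps 2 and 3, the sketch below trains a window scoring function over n-grams. The description leaves the exact ranking formula unstated, so this sketch assumes the common pairwise hinge criterion max(0, 1 - f(x) + f(x_w)) over a positive n-gram and its corrupted copy, in the spirit of the CW algorithm that the description later compares against; the network shape and dimensions are likewise illustrative.

```python
import torch
import torch.nn as nn

class WindowScorer(nn.Module):
    """Window scoring function f(x) over a contiguous n-gram of word ids."""
    def __init__(self, vocab_size, dim=50, n=5, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)          # the word vectors being learned
        self.score = nn.Sequential(
            nn.Linear(n * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, ngram_ids):                         # ngram_ids: (batch, n)
        v = self.emb(ngram_ids).flatten(start_dim=1)      # concatenate the n word vectors
        return self.score(v).squeeze(-1)                  # one score per n-gram

def hinge_step(model, optimizer, x, x_w):
    """One update on positive n-grams x and corrupted n-grams x_w (assumed loss)."""
    loss = torch.clamp(1.0 - model(x) + model(x_w), min=0.0).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```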
Further, a Markov decision process is introduced into the word vector training method to optimize the selection of the replacement used to form negative samples.
Further, the steps by which the Markov decision process optimizes the selection of the replacement are as follows: first, a Markov chain is constructed, in which time step n corresponds to one replacement by a substitute word w, the state s corresponds to a contiguous n-gram x, and the action a corresponds to the chosen substitute w; the transition probability p(s_{n+1} | s_n, a_n) corresponds to p(x_w | x, w), the transition probability matrix can be obtained from the statistics of the training set, and the reward function v(s_{n+1} | s_n, a_n) corresponds to v(x_w | x, w) = f(x) - f(x_w). Then the long-term expected total return V^π(s) = E_π{ Σ_n λ^(n-1) v(s_{n+1} | s_n, a_n) | s_0 } is constructed, where π is the policy chain and λ is the discount factor; using the transition probabilities, this expression is rewritten as V^π(s_n) = Σ p(s_{n+1} | s_n, a_n) [ v(s_{n+1} | s_n, a_n) + λ V^π(s_{n+1}) ]. Finally, the optimal policy chain π* is obtained by the value iteration algorithm, which iteratively solves the Bellman equations.
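A minimal value-iteration sketch for obtaining the optimal policy chain described above, assuming the transition probabilities and rewards have already been estimated from training-set statistics; the array shapes and variable names are illustrative.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """P[s, a, t]: transition probability; R[s, a, t]: reward, here f(x) - f(x_w).
    Returns the optimal state values and a greedy policy (one action per state)."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = sum_t P[s, a, t] * (R[s, a, t] + gamma * V[t])   (Bellman backup)
        Q = np.einsum("sat,sat->sa", P, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)
```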
Further, the flow of S8 is as follows: first, according to the user intent, the series of interfaces needed to complete the app operation is looked up in the intent-to-interface correspondence map, and these interfaces are then called in order; when calling an interface, the parameters it requires are first looked up in the interface-to-parameter correspondence map, and the corresponding parameter values are then retrieved from the parameter-value map, completing the interface call; once all interfaces have been called in order, the app operation is complete; finally, the results are filtered according to the intent qualifier.
The prerequisites for realizing the intelligent system are session management, pre-training the intent recognition model and parameter extraction models, and storing the intent-to-parameter correspondence, the user interaction rules, the intent-to-interface correspondence, and the interface-to-parameter correspondence.
Session management: different users are distinguished by their sessions; when a user has not interacted for longer than a set period, the user's session is expired.
For a session, the not-yet-extracted parameter list is initialized to empty when the session is opened; this initialization is not repeated when handling each subsequent user question within the session.
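A minimal session-management sketch follows. The timeout value and field names are illustrative assumptions; the description only requires that sessions distinguish users, expire after a period of inactivity, and initialize the not-yet-extracted parameter list to empty exactly once, when the session is opened.

```python
import time

SESSION_TIMEOUT_S = 10 * 60              # assumed inactivity limit

class SessionState:
    def __init__(self):
        self.pending_params = []          # the not-yet-extracted parameter list
        self.param_values = {}            # map: parameter -> parameter value
        self.intent = None
        self.qualifier = None
        self.last_active = time.time()

class SessionManager:
    def __init__(self):
        self._sessions = {}

    def get(self, session_id):
        """Return the state for this session, re-opening it if it has expired."""
        now = time.time()
        state = self._sessions.get(session_id)
        if state is None or now - state.last_active > SESSION_TIMEOUT_S:
            state = SessionState()        # opening the session initializes the list once
            self._sessions[session_id] = state
        state.last_active = now
        return state
```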
Intent and intent-qualifier recognition model: used to identify the user intent and the intent qualifier. The collected natural-language user questions are preprocessed by removing special characters, removing stop words, and segmenting them with ansj. Each preprocessed question is annotated with its user intent and intent qualifier. Using the trained word vector model, the questions are converted into word vectors, and an open-source neural network training tool is used to train the intent and intent-qualifier recognition model.
User intent: that is, an operation the user can perform in a mobile app. For example, in Amap the operations the user can perform (i.e., user intents) include querying the address of a place, querying a route, querying a bus line, and so on. In the Didi ride-hailing app, the operations (i.e., user intents) include ordering an Express ride, hailing a taxi, ordering a Premier ride, and so on.
The so-called intent and intent-qualifier recognition model can be regarded as a classification model that assigns a user question to a class. For Amap, for example, user questions are classified into an address-query class, a route-query class, and so on.
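Since the recognition model can be regarded as a classifier, a minimal sketch is given below. The description only names "an open-source neural network training tool"; here a small PyTorch classifier over averaged pre-trained word vectors stands in for it, and the label set is an illustrative assumption.

```python
import torch
import torch.nn as nn

INTENTS = ["query_route", "query_address", "query_bus_line"]   # assumed label set

class IntentClassifier(nn.Module):
    def __init__(self, word_vectors, n_classes=len(INTENTS)):
        super().__init__()
        # word_vectors: (vocab_size, dim) tensor from the trained word vector model
        self.emb = nn.Embedding.from_pretrained(word_vectors, freeze=True)
        self.fc = nn.Linear(word_vectors.size(1), n_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        return self.fc(self.emb(token_ids).mean(dim=1))   # average the word vectors

# Training minimizes nn.CrossEntropyLoss() against the annotated intent labels;
# a second head of the same form can be trained for the intent qualifier.
```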
Word vector model training: common word vector training methods require a language model to be trained first before word vector representations can be obtained, which wastes computing resources. The present invention trains the word vectors directly in word-group form. The word vector training method of the present invention is as follows: 1. let D denote the dictionary and w an element of D; let X denote all contiguous n-gram phrases in the training set and x an element of X, representing a positive sample; x_w is the contiguous n-gram obtained by replacing the middle word of x with w, representing a negative sample; 2. a window scoring function f(x) is introduced to score x and measure how correct the word order of the n-gram in x is, and the window scoring function f(x) is optimized with a ranking criterion over the positive samples x and the negative samples x_w; 3. the optimization of the window scoring function f(x) is trained with a conventional neural network; 4. a Markov decision process is introduced to optimize the selection of the replacement and further speed up training: first a Markov chain is constructed, in which time step n corresponds to one replacement by a substitute word w, the state s corresponds to a contiguous n-gram x, and the action a corresponds to the chosen substitute w; the transition probability p(s_{n+1} | s_n, a_n) corresponds to p(x_w | x, w), the transition probability matrix can be obtained from the statistics of the training set, and the reward function v(s_{n+1} | s_n, a_n) corresponds to v(x_w | x, w) = f(x) - f(x_w); then the long-term expected total return V^π(s) = E_π{ Σ_n λ^(n-1) v(s_{n+1} | s_n, a_n) | s_0 } is constructed, where π is the policy chain and λ is the discount factor, and using the transition probabilities this expression is rewritten as V^π(s_n) = Σ p(s_{n+1} | s_n, a_n) [ v(s_{n+1} | s_n, a_n) + λ V^π(s_{n+1}) ]; finally, the optimal policy chain π* is obtained by the value iteration algorithm, which iteratively solves the Bellman equations.
The word vector training method that introduces the Markov decision process has the following advantages:
Compared with traditional word vector models, this method does not need to train a language model in advance but trains the word vector model directly on word groups, which reduces training complexity. The Markov decision process is introduced innovatively, reducing the dimensionality of the search space.
For the Chinese Wikipedia corpus, the size after preprocessing (removing special characters, word segmentation, etc.) is about 1 GB. The word vector training algorithms were run with the gensim platform on a laptop with 16 GB of memory and 4 cores. The classical word2vec algorithm took about 43 minutes, the CW algorithm took 37 minutes, and the word vector training method of the present invention (i.e., the method above that introduces the Markov decision process) took 27 minutes. It can thus be seen that the computation required by the word vector training algorithm of the present invention is much smaller.
Parameter extraction model: each user intent corresponds to one parameter extraction model. According to the recognized user intent, the corresponding parameter extraction model is called to extract the parameters and parameter values from the user question. The training method is similar to that of the intent recognition model; only the annotation differs. For each user question, the parameters and parameter values in the question are annotated, with multiple parameters separated by tabs; for example, "How do I get from Jiming Temple to Xuanwu Lake" is annotated as "departure place: Jiming Temple <tab> destination: Xuanwu Lake".
We store each user intent and the storage path of its parameter extraction model in a map whose keys are user intents and whose values are the storage paths of the parameter extraction models, so that the parameter extraction model corresponding to a user intent can be looked up easily.
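A minimal sketch of this registry follows. The concrete intents, paths, and the load/extract helpers are hypothetical; the description specifies only the map layout (key: user intent, value: model storage path).

```python
EXTRACTOR_PATHS = {                                   # key: intent, value: model path
    "query_route":   "models/extract_query_route.bin",
    "query_address": "models/extract_query_address.bin",
}

def extract_parameters(intent: str, question: str) -> dict:
    """Return a parameter -> parameter value map for the recognized intent."""
    model = load_extractor(EXTRACTOR_PATHS[intent])   # hypothetical loader
    # e.g. "How do I get to Xuanwu Lake" -> {"destination": "Xuanwu Lake"}
    return model.extract(question)                    # hypothetical extractor API
```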
Intent-to-parameter correspondence: used to generate the not-yet-extracted parameter list, so as to judge whether the current user question is part of an interaction collecting the remaining parameter values. The intent-to-parameter correspondence is stored in a map whose keys are user intents and whose values are the lists of parameters required by those intents.
User interaction rules: because the purpose of interacting with the user is to obtain the remaining parameter values, a simple rule is formulated, for example: "Please enter <parameter>."
Intent-to-interface correspondence: stores the series of interfaces that realize the user intent, i.e., the app operation. It is stored in a map whose keys are user intents and whose values are ordered lists of the interfaces that realize the app operation.
Interface-to-parameter correspondence: stores the parameters needed to call each interface. It is stored in a map whose keys are interface names and whose values are ordered lists of the parameters the interface requires. These correspondences are sketched together below.
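A combined sketch of these correspondences, using the Amap route-query example that runs through the description; the concrete keys, interface names, and rule template are illustrative assumptions.

```python
INTENT_PARAMS = {                                   # intent -> parameters the intent needs
    "query_route": ["departure place", "destination"],
}

INTERACTION_RULE = "Please enter {parameter}:"      # user interaction rule (template)

INTENT_INTERFACES = {                               # intent -> ordered interface list
    "query_route": ["geocode_departure", "geocode_destination", "plan_route"],
}

INTERFACE_PARAMS = {                                # interface -> ordered parameter list
    "geocode_departure":   ["departure place"],
    "geocode_destination": ["destination"],
    "plan_route":          ["departure place", "destination"],
}
```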
To achieve the goal of completing app operations automatically, the present invention is implemented in the following steps:
[Preprocessing]: the natural-language question entered by the user is preprocessed by removing special characters, removing stop words, and performing word segmentation.
[In user interaction? check 1]: based on whether the not-yet-extracted parameter list is empty, judge whether the current question is part of an interaction collecting the remaining parameter values. If the list is empty, the system is not in such an interaction and the [Ellipsis recovery] step is executed. Otherwise, the parameter value entered by the user is stored (it is assumed that the user's input consists only of the parameter value), the parameter is removed from the not-yet-extracted parameter list, and [In user interaction? check 2] is executed.
[Ellipsis recovery]: contextual ellipsis recovery is performed on the preprocessed question. The current question is completed using the user's previous questions, which reduces the number of interactions needed to collect the remaining parameters. For example, if the user's previous question was "Where is Xuanwu Lake" and the next question is "How do I get there from Jiming Temple", contextual ellipsis recovery restores the latter to "How do I get from Jiming Temple to Xuanwu Lake", so there is no need to interact with the user to obtain the destination parameter.
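The description does not fix an algorithm for ellipsis recovery, so the sketch below only illustrates its effect at the parameter level: values remembered from the previous question are carried over when the current question omits them, which yields the same outcome for parameter extraction as restoring the question text.

```python
def carry_over_omitted_slots(current_slots: dict, previous_slots: dict) -> dict:
    """Both arguments are parameter -> value maps produced by parameter extraction."""
    merged = dict(previous_slots)      # start from what the earlier question supplied
    merged.update(current_slots)       # the new question wins where it is explicit
    return merged

# e.g. previous turn {"destination": "Xuanwu Lake"} + current turn
# {"departure place": "Jiming Temple"} -> both slots filled, no extra interaction.
```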
[Intent recognition]: the intent recognition model is called on the question after ellipsis recovery to extract its intent and intent qualifier.
[Parameter extraction]: according to the recognized user intent, the parameter extraction model corresponding to that intent is called to extract the parameters and parameter values from the question after ellipsis recovery, and they are stored in a map whose keys are parameters and whose values are parameter values. For example, for "How do I get to Xuanwu Lake", the parameter extraction model yields destination: Xuanwu Lake, where the parameter is "destination" and the parameter value is "Xuanwu Lake"; "destination" is stored as the key of the map and "Xuanwu Lake" as its value.
[Update the not-yet-extracted parameter list]: according to the intent-to-parameter correspondence, the parameters of the user intent whose values have not yet been extracted are determined and stored in the not-yet-extracted parameter list. For example, for "How do I get to Xuanwu Lake" the user intent is route query, which needs the two parameters "departure place" and "destination"; the [Parameter extraction] step extracted only "destination", so the not-yet-extracted parameter list is {departure place}.
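A one-function sketch of this update: the not-yet-extracted parameters are simply the intent's required parameters minus those already present in the value map (INTENT_PARAMS is the intent-to-parameter map sketched earlier).

```python
def update_pending_params(intent: str, param_values: dict) -> list:
    required = INTENT_PARAMS[intent]               # from the intent-to-parameter map
    return [p for p in required if p not in param_values]

# update_pending_params("query_route", {"destination": "Xuanwu Lake"})
#   -> ["departure place"]
```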
[In user interaction? check 2]: if the not-yet-extracted parameter list is empty, the [Complete the app operation] step is executed. Otherwise, an arbitrary parameter is taken from the not-yet-extracted parameter list, a question to the user is generated according to the pre-established user interaction rules, and [Exit the flow] follows. For example, if the parameter "destination" is in the not-yet-extracted parameter list, the question generated for the user according to the interaction rules is: "Please enter the destination."
[Complete the app operation] (sketched below): the app operation is completed by calling a series of interfaces. First, according to the user intent, the ordered series of interfaces needed to complete the app operation is looked up in the intent-to-interface correspondence map, and these interfaces are then called in order. When calling an interface, the parameters it requires are first looked up in the interface-to-parameter correspondence map, and the corresponding parameter values are then retrieved from the parameter-value map, completing the interface call. Once all interfaces have been called in order, the app operation is complete. Finally, the results are filtered according to the intent qualifier.
[Exit the flow].
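A minimal sketch of the [Complete the app operation] step, reusing the INTENT_INTERFACES and INTERFACE_PARAMS maps sketched earlier. The interface registry (a map of interface name to callable) and the filter_by_qualifier helper are illustrative assumptions.

```python
def complete_app_operation(intent, qualifier, param_values, interfaces):
    """interfaces: map of interface name -> callable taking a dict of parameter values."""
    results = []
    for name in INTENT_INTERFACES[intent]:              # ordered interface list
        args = {p: param_values[p] for p in INTERFACE_PARAMS[name]}
        results.append(interfaces[name](args))          # complete this interface call
    # Filter by the intent qualifier, e.g. keep only the fastest route for "fastest".
    return filter_by_qualifier(results, qualifier)      # hypothetical helper
```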
With the above technical solution, the present invention can operate an app automatically.
Description of the drawings
Fig. 1 is the overall flowchart of the intelligent system that automatically performs app operations.
Fig. 2 is the flowchart of completing the app operation according to the user intent and the extracted parameters.
Detailed description of the embodiments
The attached figures are for illustrative purposes only and shall not be construed as limiting the patent.
Those skilled in the art will appreciate that certain well-known processes and their explanations may be omitted from the drawings.
The following further describes the technical solution of the present invention with reference to the accompanying drawings and examples.
In an embodiment that combines the system with Amap, suppose the user wants to query the fastest route from the Jiangning campus of Nanjing University of Aeronautics and Astronautics to Xuanwu Lake; the operation proceeds in the following steps. When the user first opens the page containing Amap's search box, the not-yet-extracted parameter list is initialized to empty.
Step 1: the user enters in the search box of the Amap home page: "What is the fastest route to Xuanwu Lake?"
Step 2: the user question is preprocessed: special characters and stop words are removed and the question is segmented.
Step 3: the not-yet-extracted parameter list is empty, so the system is not in an interaction collecting remaining parameters.
Step 4: the ellipsis-recovery check is performed. Because this is the user's first question, no ellipsis recovery is needed.
Step 5: the user question is fed to the intent and intent-qualifier recognition model, which recognizes that the user's intent is "route query" and the intent qualifier is "fastest". The intent and intent-qualifier recognition model is obtained by using the trained word vector model to convert user questions into word vectors and then training with an open-source neural network training tool. The word vector training method is as follows: 1. let D denote the dictionary and w an element of D; let X denote all contiguous n-gram phrases in the training set and x an element of X, representing a positive sample; x_w is the contiguous n-gram obtained by replacing the middle word of x with w, representing a negative sample; 2. a window scoring function f(x) is introduced to score x and measure how correct the word order of the n-gram in x is, and f(x) is optimized with a ranking criterion over the positive samples x and the negative samples x_w; 3. the optimization of the window scoring function f(x) is trained with a conventional neural network; 4. a Markov decision process is introduced to optimize the selection of the replacement and further speed up training: first a Markov chain is constructed, in which time step n corresponds to one replacement by a substitute word w, the state s corresponds to a contiguous n-gram x, and the action a corresponds to the chosen substitute w; the transition probability p(s_{n+1} | s_n, a_n) corresponds to p(x_w | x, w), the transition probability matrix can be obtained from the statistics of the training set, and the reward function v(s_{n+1} | s_n, a_n) corresponds to v(x_w | x, w) = f(x) - f(x_w); then the long-term expected total return V^π(s) = E_π{ Σ_n λ^(n-1) v(s_{n+1} | s_n, a_n) | s_0 } is constructed, where π is the policy chain and λ is the discount factor, and using the transition probabilities this expression is rewritten as V^π(s_n) = Σ p(s_{n+1} | s_n, a_n) [ v(s_{n+1} | s_n, a_n) + λ V^π(s_{n+1}) ]; finally, the optimal policy chain π* is obtained by the value iteration algorithm, which iteratively solves the Bellman equations.
Step 6: according to the user intent "route query", the parameter extraction model corresponding to that intent is called; the extracted parameter and parameter value are destination: Xuanwu Lake.
Step 7: according to the intent-to-parameter correspondence, the generated not-yet-extracted parameter list is {departure place}.
Step 8: the not-yet-extracted parameter list is not empty, so a parameter is taken from it and a question to the user is generated according to the user interaction rules. Here the question generated for the parameter "departure place" is: "Please enter the departure place:".
The first round of interaction is now complete, and the next round begins.
Step 9: after the user enters the departure place, the input is preprocessed: special characters and stop words are removed and it is segmented.
Step 10: the not-yet-extracted parameter list is {departure place}, which is not empty, so the system is in the process of collecting parameters from the user.
Step 11: it is assumed that the user's input is exactly the parameter value. The parameter value entered by the user is stored, and the not-yet-extracted parameter list is updated by removing the parameter "departure place". The not-yet-extracted parameter list is now {}.
Step 12: the not-yet-extracted parameter list is empty, and all parameters have been extracted.
Step 13: according to the user intent "route query", the list of interfaces required to complete the app operation is looked up, and these interfaces are called in order. When calling an interface, the parameters it requires are looked up first, and then the corresponding parameter values; the values found are passed to the interface, completing the call. According to the intent qualifier "fastest", the results returned after all interface calls are filtered, and the route with the shortest travel time is shown to the user.
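A minimal sketch of the qualifier filtering in step 13, assuming each returned route carries a travel-time field; field names and the qualifier value are illustrative.

```python
def filter_by_qualifier(routes, qualifier):
    if qualifier == "fastest":
        return min(routes, key=lambda r: r["duration_s"])   # shortest travel time
    return routes                     # other qualifiers would be handled analogously
```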
Obviously, the above embodiment of the present invention is merely an example given to illustrate the present invention clearly, and is not a limitation on the embodiments of the present invention. On the basis of the above description, those of ordinary skill in the art can make other variations or changes in different ways. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent substitution, or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the present invention.

Claims (7)

1. An intelligent system for automatically performing app operations, characterized in that the system comprises: a session management module; a pre-trained intent and intent-qualifier recognition model and parameter extraction models; an intent-to-parameter correspondence; user interaction rules; an intent-to-interface correspondence; an interface-to-parameter correspondence; and a list of not-yet-extracted parameters; and the system operates as follows:
S1 preprocessing: the natural-language question entered by the user is preprocessed by removing special characters, removing stop words, and performing word segmentation;
S2 first in-interaction check and handling: based on whether the not-yet-extracted parameter list is empty, determine whether the current question is part of an interaction that is collecting the remaining parameter values from the user; if the list is empty, the system is not in such an interaction and S3 is executed; otherwise, the parameter value entered by the user is stored, the parameter is removed from the not-yet-extracted parameter list, and S7 is executed;
S3 ellipsis recovery: contextual ellipsis recovery is performed on the preprocessed question;
S4 intent recognition: the intent and intent-qualifier recognition model is called on the question after ellipsis recovery to extract its intent and intent qualifier;
S5 parameter extraction: according to the recognized user intent, the parameter extraction model corresponding to that intent is called to extract the parameters and parameter values from the question after ellipsis recovery, and they are stored in a map whose keys are parameters and whose values are parameter values;
S6 update of the not-yet-extracted parameter list: according to the intent-to-parameter correspondence, the parameters of the user intent whose values have not yet been extracted are determined and stored in the not-yet-extracted parameter list;
S7 second in-interaction check and handling: if the not-yet-extracted parameter list is empty, S8 is executed; otherwise, an arbitrary parameter is taken from the list, a question to the user is generated according to the pre-established user interaction rules, and S9 is executed;
S8 completion of the app operation: the app operation is completed by calling a series of interfaces;
S9 exit the flow.
2. The intelligent system for automatically performing app operations according to claim 1, characterized in that the intent recognition model is trained as follows: a trained word vector model is used to convert user questions into word vectors, which are then trained with an open-source neural network training tool.
3. The intelligent system for automatically performing app operations according to claim 2, characterized in that the word vectors are trained directly in word-group form.
4. The intelligent system for automatically performing app operations according to claim 3, characterized in that the word vector training method is as follows: 1. let D denote the dictionary and w an element of D; let X denote all contiguous n-gram phrases in the training set and x an element of X, representing a positive sample; x_w is the contiguous n-gram obtained by replacing the middle word of x with w, representing a negative sample;
2. a window scoring function f(x) is introduced to score x and measure how correct the word order of the n-gram in x is, and the window scoring function f(x) is optimized with a ranking criterion over the positive samples x and the negative samples x_w;
3. the optimization of the window scoring function f(x) is trained with a conventional neural network;
the training yields word vectors in distributed-representation form.
5. The intelligent system for automatically performing app operations according to claim 4, characterized in that a Markov decision process is introduced into the word vector training method to optimize the selection of the replacement.
6. The intelligent system for automatically performing app operations according to claim 5, characterized in that the steps by which the Markov decision process optimizes the selection of the replacement are as follows: first, a Markov chain is constructed, in which time step n corresponds to one replacement by a substitute word w, the state s corresponds to a contiguous n-gram x, and the action a corresponds to the chosen substitute w; the transition probability p(s_{n+1} | s_n, a_n) corresponds to p(x_w | x, w), the transition probability matrix can be obtained from the statistics of the training set, and the reward function v(s_{n+1} | s_n, a_n) corresponds to v(x_w | x, w) = f(x) - f(x_w); then the long-term expected total return V^π(s) = E_π{ Σ_n λ^(n-1) v(s_{n+1} | s_n, a_n) | s_0 } is constructed, where π is the policy chain and λ is the discount factor, and using the transition probabilities this expression is rewritten as V^π(s_n) = Σ p(s_{n+1} | s_n, a_n) [ v(s_{n+1} | s_n, a_n) + λ V^π(s_{n+1}) ]; finally, the optimal policy chain π* is obtained by the value iteration algorithm.
7. The intelligent system for automatically performing app operations according to any one of claims 1 to 6, characterized in that the flow of S8 is as follows: first, according to the user intent, the series of interfaces needed to complete the app operation is looked up in the intent-to-interface correspondence map, and these interfaces are then called in order; when calling an interface, the parameters it requires are first looked up in the interface-to-parameter correspondence map, and the corresponding parameter values are then retrieved from the parameter-value map, completing the interface call; once all interfaces have been called in order, the app operation is complete; finally, the results are filtered according to the intent qualifier.
CN201810017031.6A 2018-01-09 2018-01-09 Intelligent system for automatically realizing app operation Active CN108563669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810017031.6A CN108563669B (en) 2018-01-09 2018-01-09 Intelligent system for automatically realizing app operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810017031.6A CN108563669B (en) 2018-01-09 2018-01-09 Intelligent system for automatically realizing app operation

Publications (2)

Publication Number Publication Date
CN108563669A true CN108563669A (en) 2018-09-21
CN108563669B CN108563669B (en) 2021-09-24

Family

ID=63529751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810017031.6A Active CN108563669B (en) 2018-01-09 2018-01-09 Intelligent system for automatically realizing app operation

Country Status (1)

Country Link
CN (1) CN108563669B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002077640A2 (en) * 2001-03-25 2002-10-03 Exiqon A/S Systems for analysis of biological materials
CN103514230A (en) * 2012-06-29 2014-01-15 北京百度网讯科技有限公司 Method and device used for training language model according to corpus sequence
US20140236578A1 (en) * 2013-02-15 2014-08-21 Nec Laboratories America, Inc. Question-Answering by Recursive Parse Tree Descent
CN105068661A (en) * 2015-09-07 2015-11-18 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN105354180A (en) * 2015-08-26 2016-02-24 欧阳江 Method and system for realizing open semantic interaction service
CN106095834A (en) * 2016-06-01 2016-11-09 竹间智能科技(上海)有限公司 Intelligent dialogue method and system based on topic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
洪维恩, 《Java完全自学手册》 (Java Complete Self-Study Handbook), 31 May 2009 *
黄丽霞, 周丽霞, 赵丽梅, 《信息检索教程》 (Information Retrieval Tutorial), 31 July 2014 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023246609A1 (en) * 2022-06-24 2023-12-28 华为技术有限公司 Speech interaction method, electronic device and speech assistant development platform

Also Published As

Publication number Publication date
CN108563669B (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN105868184B (en) A kind of Chinese personal name recognition method based on Recognition with Recurrent Neural Network
Li et al. TDEER: An efficient translating decoding schema for joint extraction of entities and relations
CN110347894A (en) Knowledge mapping processing method, device, computer equipment and storage medium based on crawler
CN111062451B (en) Image description generation method based on text guide graph model
CN108388651A (en) A kind of file classification method based on the kernel of graph and convolutional neural networks
CN108595708A (en) A kind of exception information file classification method of knowledge based collection of illustrative plates
CN107577662A (en) Towards the semantic understanding system and method for Chinese text
CN106886580A (en) A kind of picture feeling polarities analysis method based on deep learning
CN108549658A (en) A kind of deep learning video answering method and system based on the upper attention mechanism of syntactic analysis tree
CN110188195B (en) Text intention recognition method, device and equipment based on deep learning
Li et al. R-vgae: Relational-variational graph autoencoder for unsupervised prerequisite chain learning
CN106682224B (en) Data entry method, system and database
CN107357785A (en) Theme feature word abstracting method and system, feeling polarities determination methods and system
CN112860896A (en) Corpus generalization method and man-machine conversation emotion analysis method for industrial field
CN105975497A (en) Automatic microblog topic recommendation method and device
CN109446299A (en) The method and system of searching email content based on event recognition
CN117312531A (en) Power distribution network fault attribution analysis method based on large language model with enhanced knowledge graph
Akdemir et al. Multimodal and multilingual understanding of smells using vilbert and muniter
CN108563669A (en) A kind of intelligence system of automatic realization app operations
CN115270774B (en) Big data keyword dictionary construction method for semi-supervised learning
Amin Cases without borders: automating knowledge acquisition approach using deep autoencoders and siamese networks in case-based reasoning
CN113869049B (en) Fact extraction method and device with legal attribute based on legal consultation problem
CN116795948A (en) Intent recognition method and dialogue system for patent question-answering service
CN114969347A (en) Defect duplication checking implementation method and device, terminal equipment and storage medium
CN114510567A (en) Clustering-based new idea finding method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant