CN109785836A - Interaction method and device - Google Patents

Interaction method and device Download PDF

Info

Publication number
CN109785836A
Authority
CN
China
Prior art keywords
user
voice
execute
smart device
wake
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910079020.5A
Other languages
Chinese (zh)
Other versions
CN109785836B (en)
Inventor
袁煜然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201910079020.5A priority Critical patent/CN109785836B/en
Publication of CN109785836A publication Critical patent/CN109785836A/en
Application granted granted Critical
Publication of CN109785836B publication Critical patent/CN109785836B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

Embodiments of the present application disclose an interaction method and device. One specific embodiment of the method includes: in response to obtaining a user's voice, judging, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information including the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation; and, in response to obtaining a judging result, executing an operation associated with the judging result. On the one hand, this avoids the cumbersome interaction caused by requiring every interaction to begin with a voice input containing, for example, the device's name in order to wake the smart device, thereby simplifying the interaction flow. On the other hand, the smart device can interact with the user through natural-language voice.

Description

Interaction method and device
Technical field
This application relates to the field of computers, in particular to the field of human-computer interaction, and more particularly to an interaction method and device.
Background technique
A smart device such as a smart speaker can interact with a user by voice: it analyzes the voice in which the user indicates the operation the user expects it to execute, determines that operation, and executes it. At present, the user usually must first wake the smart device by issuing a voice that contains, for example, the device's name, and only then can the smart device execute the expected operation. On the one hand, this makes the interaction process cumbersome; on the other hand, the smart device cannot interact with the user through natural-language voice.
Summary of the invention
Embodiments of the present application provide an interaction method and device.
In a first aspect, an embodiment of the present application provides an interaction method, which includes: in response to obtaining a user's voice, judging, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information including the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation; and, in response to obtaining a judging result, executing an operation associated with the judging result.
In a second aspect, an embodiment of the present application provides an interaction device, which includes: a judging unit configured to, in response to obtaining a user's voice, judge, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information including the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation; and an execution unit configured to, in response to obtaining a judging result, execute an operation associated with the judging result.
The interaction method and device provided by the embodiments of the present application, in response to obtaining a user's voice, judge by a wake-up judgment model and based on reference information whether to execute a wake operation, the reference information including the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation, and, in response to obtaining a judging result, execute an operation associated with the judging result. On the one hand, this avoids the cumbersome interaction caused by requiring every interaction to begin with a voice input containing, for example, the device's name in order to wake the smart device, thereby simplifying the interaction flow. On the other hand, the smart device can interact with the user through natural-language voice.
Detailed description of the invention
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments made with reference to the accompanying drawings:
Fig. 1 shows an exemplary system architecture suitable for implementing embodiments of the present application;
Fig. 2 shows a flowchart of one embodiment of the interaction method according to the present application;
Fig. 3 shows a structural schematic diagram of one embodiment of the interaction device according to the present application;
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing the electronic equipment of embodiments of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture suitable for implementing embodiments of the present application.
As shown in Fig. 1, the system architecture may include a smart device 101, a network 102 and a server 103. The network 102 may be a wired network or a wireless network.
The smart device 101 transmits data with the server 103 over the network 102. The smart device 101 may include, but is not limited to, a smart speaker or an intelligent interactive robot. A microphone on the smart device 101 can collect the voice of a user near the device and thereby obtain the user's voice. The smart device 101 can recognize the user's voice to obtain the text corresponding to it, and use a wake-up judgment model running on the smart device 101 to judge whether to execute a wake operation, that is, whether to set the smart device 101 to the woken state. When a judging result is obtained, the smart device 101 executes an operation associated with the judging result. Alternatively, the smart device 101 may send the obtained user voice to the server 103, which recognizes the voice to obtain the corresponding text, uses a wake-up judgment model running on the server 103 to judge whether to execute a wake operation, and, when a judging result is obtained, sends the judging result back to the smart device 101.
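For illustration only, the following minimal Python sketch outlines the device/server flow just described. The class and method names (SmartDevice, transcribe, judge) are assumptions made for the sketch, not an API defined by this application; judge() is assumed to return True (execute the wake operation), False (do not execute it) or None (no judging result).

```python
import time


class WakeJudgmentModel:
    """Placeholder for the wake-up judgment model of Fig. 2 (assumed interface)."""

    def judge(self, text, seconds_since_last_wake):
        raise NotImplementedError


class SmartDevice:
    def __init__(self, recognizer, local_model=None, server=None):
        self.recognizer = recognizer      # speech-to-text component
        self.local_model = local_model    # wake-up judgment model running on the device
        self.server = server              # optional server-side judgment
        self.last_wake_time = time.monotonic()

    def handle_utterance(self, audio):
        # Recognize the user's voice to obtain the corresponding text.
        text = self.recognizer.transcribe(audio)
        # Duration since the device last executed a wake operation.
        elapsed = time.monotonic() - self.last_wake_time
        if self.local_model is not None:
            return self.local_model.judge(text, elapsed)
        # Otherwise the text and duration are sent to the server, which
        # returns the judging result to the device.
        return self.server.judge(text, elapsed)
```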
Referring to Fig. 2, it illustrates the flow of one embodiment of the interaction method according to the present application. The method includes the following steps:
Step 201: in response to obtaining the user's voice, judge, by a wake-up judgment model and based on reference information, whether to execute a wake operation.
In the present embodiment, the reference information includes: the text corresponding to the obtained user voice, and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation.
In the present embodiment, the moment at which it is judged whether to execute a wake operation is not a single fixed moment; each moment at which the judgment is made can be called a moment at which it is judged whether to execute a wake operation.
In the present embodiment, the state in which the smart device can execute operations that the user needs it to execute is called the woken state. When the smart device is not woken, it is in a standby state. After the smart device obtains the user's voice, it can judge whether to execute a wake operation; when it determines to execute the wake operation, it executes the wake operation so that the smart device enters the woken state, and then executes the operation the user needs it to execute.
In the present embodiment, the neural network used to judge whether to execute a wake operation can be called the wake-up judgment model. This neural network is trained in advance with training samples. Each training sample includes: a text for training; the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation; and the target output corresponding to the text for training. The text for training indicates the user's demand. The types of user demand may include, but are not limited to: needing the smart device to execute an operation, and needing the smart device to query information.
For example, one training sample includes the text "help me check the weather", which indicates the user's demand to query the weather; the target output corresponding to this text indicates that the smart device executes the wake operation. As another example, a training sample includes the text for training "help me close the window"; the target output corresponding to this text indicates that the smart device does not execute the wake operation.
In the present embodiment, the neural network used to judge whether to execute a wake operation is created with initial network parameters. It can be trained with multiple training samples, iteratively adjusting the network parameters. After training on multiple training samples, the neural network can judge whether to execute a wake operation based on the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation.
In other words, the neural network used to judge whether to execute a wake operation, i.e. the wake-up judgment model, can judge whether to wake the smart device based on the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation, together with the text corresponding to the user's voice.
During training, for each training sample, the text for training in the sample can be encoded to obtain a coded representation of the text. The duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation can likewise be encoded to obtain a coded representation of the duration. In the neural network used to judge whether to execute a wake operation, a prediction result can be obtained based on the coded representation of the text and the coded representation of the duration; the prediction result predicts whether the smart device executes the wake operation. A loss function indicating the difference between the prediction result and the target output can then be used to calculate the network parameters that need to be adjusted, and those parameters are adjusted.
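As an illustration only, the following is a hedged sketch of such a training step written with PyTorch. The concrete architecture (a bag-of-embeddings text encoder concatenated with a scalar duration feature) is an assumption made for the sketch; this paragraph only requires that the text and the duration each be encoded, that the network predict whether to execute the wake operation, and that a loss over the difference between prediction and target output drive the parameter adjustment.

```python
import torch
import torch.nn as nn


class WakeJudgmentNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # coded representation of the text
        self.fc = nn.Linear(embed_dim + 1, 1)                # +1 for the coded duration

    def forward(self, token_ids, offsets, duration_hours):
        text_code = self.embed(token_ids, offsets)
        dur_code = duration_hours.unsqueeze(1)
        return self.fc(torch.cat([text_code, dur_code], dim=1)).squeeze(1)


def train_step(model, optimizer, token_ids, offsets, duration_hours, target):
    # target: 1.0 = target output "execute the wake operation", 0.0 = "do not execute".
    logits = model(token_ids, offsets, duration_hours)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, target)
    optimizer.zero_grad()
    loss.backward()    # gradients of the loss with respect to the network parameters
    optimizer.step()   # adjust the network parameters that need adjusting
    return loss.item()
```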
In the present embodiment, the wake-up judgment model can judge whether to execute a wake operation, that is, whether to wake the smart device, according to the text corresponding to the obtained user voice and the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged.
In the present embodiment, the duration in a training sample between the moment the wake operation was last executed and the moment at which it is judged whether to execute a wake operation can be set according to how confidently the text for training in that sample reflects the user's demand.
In the present embodiment, when the text for training clearly reflects a user demand such as needing the smart device to execute an operation or to query information, the confidence with which the text reflects the user's demand is high; the duration in that training sample between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation can be relatively long, for example 1 hour, and the target output in that sample indicates that the smart device executes the wake operation. When the text for training could either indicate a user demand (needing the smart device to execute an operation or to query information) or be the text of ordinary everyday speech, the duration in that training sample between the moment the wake operation was last executed and the moment at which it is judged whether to execute a wake operation can be relatively short, for example 10 minutes, and the target output in that sample also indicates that the smart device executes the wake operation.
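Purely as an illustration of how such samples might be assembled, the following sketch uses the 1-hour and 10-minute values from the examples above; the dictionary layout and field names are assumptions made for the sketch.

```python
CLEAR_DEMAND_GAP_SECONDS = 60 * 60       # text clearly reflects the user's demand
AMBIGUOUS_TEXT_GAP_SECONDS = 10 * 60     # text could also be ordinary everyday speech


def make_positive_sample(text, clearly_reflects_demand):
    gap = CLEAR_DEMAND_GAP_SECONDS if clearly_reflects_demand else AMBIGUOUS_TEXT_GAP_SECONDS
    return {
        "text": text,
        "seconds_since_last_wake": gap,
        "target": "execute wake operation",
    }


samples = [
    make_positive_sample("help me check the weather", clearly_reflects_demand=True),
    make_positive_sample("turn it up a bit", clearly_reflects_demand=False),
]
```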
In the present embodiment, after training with the training samples, the neural network used to judge whether to execute a wake operation, i.e. the wake-up judgment model, can judge whether to execute a wake operation based on the confidence with which the text corresponding to the obtained user voice reflects the user's demand, and on the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation.
In the present embodiment, when the text corresponding to the user's voice clearly reflects a user demand such as needing the smart device to execute an operation or to query information, the judging result can be to execute the wake operation even if the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation is relatively long, for example 1 hour, because the text clearly reflects that demand. When the text could either indicate such a demand or be ordinary everyday speech, and the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged is relatively short, for example 10 minutes, the judging result can also be to execute the wake operation. When the text is of this ambiguous kind and the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged is relatively long, for example 1 hour, the judging result can be not to execute the wake operation.
Step 202: execute an operation associated with the judging result.
In the present embodiment, after the wake-up judgment model judges whether to execute a wake operation according to the text corresponding to the obtained user voice and the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged, and a judging result is obtained, an operation associated with the judging result can be executed. When the judging result is to execute the wake operation, the operation associated with the judging result may include executing the wake operation, setting the smart device to the woken state, and then executing the operation associated with the obtained user voice. When the judging result is not to execute the wake operation, the operation associated with the judging result may include updating the parameter value of an accumulated not-woken duration parameter. The parameter value of the accumulated not-woken duration parameter is the accumulated duration for which the smart device has not executed a wake operation since the moment the wake operation was last executed. When the wake-up judgment model is used to judge whether to execute a wake operation, the parameter value of the accumulated not-woken duration parameter can serve, in the reference information, as the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged.
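As a sketch only, the loop below illustrates step 202 under the assumptions already introduced; the class and method names (InteractionLoop, wake, execute_operation_for) are hypothetical. When the result is to wake, the device is set to the woken state and the requested operation is executed; otherwise only the accumulated not-woken duration parameter is updated.

```python
import time


class InteractionLoop:
    def __init__(self, model):
        self.model = model
        self.last_wake_time = time.monotonic()
        self.accumulated_not_woken = 0.0   # seconds since the last wake operation

    def on_user_voice(self, text):
        result = self.model.judge(text, self.accumulated_not_woken)
        if result is True:
            self.wake()                          # set the device to the woken state
            self.execute_operation_for(text)     # then perform the requested operation
            self.last_wake_time = time.monotonic()
            self.accumulated_not_woken = 0.0
        elif result is False:
            # Update the accumulated not-woken duration parameter; it is used as
            # the duration in the reference information the next time a judgment is made.
            self.accumulated_not_woken = time.monotonic() - self.last_wake_time
        return result    # None means no judging result was obtained

    def wake(self):
        pass

    def execute_operation_for(self, text):
        pass
```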
In some optional implementations of the present embodiment, when the wake-up judgment model judges whether to execute a wake operation according to the text corresponding to the obtained user voice and the duration between the moment the smart device last executed a wake operation and the moment at which the obtained user voice is judged, the wake-up judgment model can determine whether the text corresponding to the obtained user voice contains a wake word. When the wake-up judgment model determines that the text corresponding to the obtained user voice contains the wake word, a judging result indicating that the wake operation is executed can be obtained.
For example, the smart device is a smart speaker whose name is "Xiao A", and the wake word is the name of the smart speaker. The obtained user voice is "Xiao A, help me check today's weather"; in this case it can be determined that the text corresponding to the obtained user voice contains the wake word, and a judging result indicating that the wake operation is executed can be obtained.
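A minimal sketch of this wake-word shortcut follows; the wake-word set and helper name are assumptions made for the sketch (the example above uses the speaker's name "Xiao A").

```python
WAKE_WORDS = {"xiao a", "小a"}


def contains_wake_word(text):
    lowered = text.lower()
    return any(word in lowered for word in WAKE_WORDS)


# If the text corresponding to the user's voice contains the wake word,
# the judging result directly indicates that the wake operation is executed.
assert contains_wake_word("Xiao A, help me check today's weather")
```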
In some optional implementations of the present embodiment, when the wake-up judgment model judges whether to execute a wake operation based on the text corresponding to the obtained user voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation, the wake-up judgment model can judge whether the text corresponding to the obtained user voice contains a keyword associated with an operation the smart device cannot execute. When it is judged that the text corresponding to the obtained user voice contains such a keyword, a judging result indicating that the wake operation is not executed can be obtained. Keywords associated with operations the smart device cannot execute can be set in advance; when the text corresponding to the user's voice contains a keyword indicating an operation the smart device cannot execute, a judging result indicating that the wake operation is not executed is obtained.
For example, the obtained user voice is "help me close the window". Since the smart device cannot execute the operation of closing a window, and the keywords associated with operations the smart device cannot execute include the two keywords "close" and "window", it can be determined that the text corresponding to the obtained user voice contains keywords associated with an operation the smart device cannot execute, and a judging result indicating that the wake operation is not executed can be obtained.
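A sketch of this negative shortcut is given below: if the text contains keywords tied to an operation the device cannot perform (such as physically closing a window), the judging result indicates that the wake operation is not executed. The keyword groups are illustrative assumptions.

```python
UNSUPPORTED_OPERATION_KEYWORDS = [
    ("close", "window"),
    ("open", "door"),
]


def mentions_unsupported_operation(text):
    lowered = text.lower()
    return any(all(keyword in lowered for keyword in group)
               for group in UNSUPPORTED_OPERATION_KEYWORDS)


assert mentions_unsupported_operation("help me close the window")
assert not mentions_unsupported_operation("help me check the weather")
```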
In some optional implementations of the present embodiment, when the wake-up judgment model judges whether to execute a wake operation based on the text corresponding to the obtained user voice and the duration between the moment the wake operation was last executed and the moment at which the obtained user voice is judged, but no judging result is obtained, a guiding interaction sentence can be played. When a feedback voice of the user is obtained indicating that the user needs the smart device to execute the operation associated with the user's voice, markup information of the obtained user voice can be generated in response to that feedback voice. A training sample for the wake-up judgment model can then be generated; the training sample includes the obtained user voice and the markup information of the obtained user voice, and the markup information indicates that the wake operation is executed. The generated training sample can be used to continue training the wake-up judgment model. During training, the obtained user voice in the generated training sample serves as the input of the wake-up judgment model, and the markup information of the obtained user voice in the generated training sample serves as the target output.
For example, the obtained user voice is "help me look up ...", a voice for querying information about some topic. When the neural network used to judge whether to execute a wake operation was trained in advance, no training sample consisting of a voice indicating that the user needs this kind of information together with an instruction indicating that the wake operation is executed was used. Correspondingly, the wake-up judgment model cannot determine, based on the obtained user voice, whether to execute the wake operation, and no judging result can be obtained. At this point the smart device can play the guiding sentence "Excuse me, are you speaking to me?". The user's feedback voice is "yes", which indicates that the smart device needs to execute the operation associated with the obtained user voice. The smart device can play the feedback voice "OK". A training sample can then be generated, which includes the obtained user voice "help me look up ..." and the markup information of that voice, the markup information indicating that the wake operation is executed.
In some optional implementations of the present embodiment, when the wake-up judgment model judges whether to execute a wake operation based on the text corresponding to the obtained user voice and the duration between the moment the wake operation was last executed and the moment at which the obtained user voice is judged, but no judging result is obtained, a guiding interaction sentence can be played. When a feedback voice of the user is obtained indicating that the user does not need the smart device to execute the operation associated with the user's voice, markup information of the obtained user voice can be generated in response to that feedback voice. A training sample for the wake-up judgment model can then be generated; the training sample includes the obtained user voice and the markup information of the obtained user voice, and the markup information indicates that the wake operation is not executed. The generated training sample can be used to continue training the wake-up judgment model. During training, the obtained user voice in the generated training sample serves as the input of the wake-up judgment model, and the markup information of the obtained user voice in the generated training sample serves as the target output.
For example, the obtained user voice is "is dinner ready". When the neural network used to judge whether to execute a wake operation was trained in advance, no training sample consisting of a voice of this kind together with an instruction indicating that the wake operation is not executed was used. When the user voice "is dinner ready" is obtained, no judging result can be obtained. At this point the smart device can play the guiding sentence "Excuse me, are you speaking to me?". The user's feedback voice "no" is then obtained, which indicates that the smart device does not need to execute an operation associated with the obtained user voice. The smart device can play the feedback voice "OK". A training sample can then be generated, which includes the obtained user voice "is dinner ready" and the markup information of that voice, the markup information indicating that the wake operation is not executed.
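The sketch below combines the two collection paths above: when no judging result is obtained, the device plays a guiding sentence and the user's feedback voice determines the markup information (label) of the new training sample. The feedback phrases and function name are assumptions made for the sketch.

```python
def sample_from_feedback(user_voice_text, feedback_text):
    wants_operation = feedback_text.strip().lower() in {"yes", "yeah", "sure"}
    return {
        "voice_text": user_voice_text,
        # markup information: whether the wake operation is to be executed
        "execute_wake_operation": wants_operation,
    }


# "help me look up ..." + "yes" -> markup information: execute the wake operation
# "is dinner ready"    + "no"  -> markup information: do not execute
positive = sample_from_feedback("help me look up ...", "yes")
negative = sample_from_feedback("is dinner ready", "no")
assert positive["execute_wake_operation"] and not negative["execute_wake_operation"]
```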
Referring to Fig. 3, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an interaction device; this device embodiment corresponds to the method embodiment shown in Fig. 2. For the specific implementation of the operations the units in the interaction device are configured to perform, reference can be made to the specific implementation of the corresponding operations described in the method embodiment.
As shown in Fig. 3, the interaction device of the present embodiment includes a judging unit 301 and an execution unit 302. The judging unit 301 is configured to, in response to obtaining a user's voice, judge, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information including the text corresponding to the user's voice and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation. The execution unit 302 is configured to, in response to obtaining a judging result, execute an operation associated with the judging result.
In some optional implementations of the present embodiment, the interaction device further includes a first collection unit configured to: in response to obtaining no judging result, play a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the obtained user voice; in response to the obtained feedback voice of the user indicating that the smart device needs to execute the operation associated with the obtained user voice, generate markup information of the obtained user voice, the markup information indicating that the smart device executes the wake operation; and generate a training sample for the wake-up judgment model, the training sample including the obtained user voice and the markup information of the obtained user voice.
In some optional implementations of the present embodiment, the interaction device further includes a second collection unit configured to: in response to obtaining no judging result, play a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the user's voice; in response to the obtained feedback voice of the user indicating that no operation associated with the user's voice needs to be executed, generate markup information of the obtained user voice, the markup information indicating that the smart device does not execute the wake operation; and generate a training sample for the wake-up judgment model, the training sample including the obtained user voice and the markup information of the obtained user voice.
In some optional implementations of the present embodiment, the judging unit includes a first wake judging subunit configured to: determine, by the wake-up judgment model, whether the text corresponding to the obtained user voice contains a wake word; and, in response to determining that the text corresponding to the obtained user voice contains the wake word, obtain a judging result indicating that the wake operation is executed.
In some optional implementations of the present embodiment, the judging unit includes a second wake judging subunit configured to: determine, by the wake-up judgment model, whether the text corresponding to the user's voice contains a keyword associated with an operation the smart device cannot execute; and, in response to determining that the text corresponding to the user's voice contains a keyword associated with an operation the smart device cannot execute, obtain a judging result indicating that the wake operation is not executed.
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing the electronic equipment of embodiments of the present application.
As shown in Fig. 4, the computer system includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. Various programs and data needed for the operation of the computer system are also stored in the RAM 403. The CPU 401, the ROM 402 and the RAM 403 are connected to one another through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406; an output section 407; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A driver 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 410 as needed, so that a computer program read from it can be installed into the storage section 408 as needed.
In particular, the processes described in the embodiments of the present application may be implemented as computer programs. For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing instructions for executing the method shown in the flowchart. The computer program can be downloaded and installed from a network through the communication section 409 and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above-described functions defined in the method of the present application are executed.
The present application also provides an electronic device, which can be configured with one or more processors and a memory for storing one or more programs. The one or more programs may contain instructions for executing the operations described in the above embodiments. When the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the instructions for the operations described in the above embodiments.
The present application also provides a computer-readable medium, which may be included in the electronic device or may exist independently without being assembled into the electronic device. The computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device is caused to execute the instructions for the operations described in the above embodiments.
It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, the program being usable by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; it also covers, without departing from the inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example technical solutions formed by replacing the above features with (but not limited to) technical features of similar functions disclosed in the present application.

Claims (12)

1. An interaction method, comprising:
in response to obtaining a user's voice, judging, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information comprising: the text corresponding to the user's voice, and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation;
in response to obtaining a judging result, executing an operation associated with the judging result.
2. The method according to claim 1, further comprising:
in response to obtaining no judging result, playing a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the obtained user voice;
in response to the obtained feedback voice of the user indicating that the smart device needs to execute the operation associated with the obtained user voice, generating markup information of the obtained user voice, the markup information of the obtained user voice indicating that the smart device executes the wake operation;
generating a training sample for the wake-up judgment model, the training sample comprising: the obtained user voice, and the markup information of the obtained user voice.
3. The method according to claim 1, further comprising:
in response to obtaining no judging result, playing a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the user's voice;
in response to the obtained feedback voice of the user indicating that no operation associated with the user's voice needs to be executed, generating markup information of the obtained user voice, the markup information of the obtained user voice indicating that the smart device does not execute the wake operation;
generating a training sample for the wake-up judgment model, the training sample comprising: the obtained user voice, and the markup information of the obtained user voice.
4. The method according to any one of claims 1-3, wherein judging, by the wake-up judgment model and based on the reference information, whether to execute the wake operation in response to obtaining the user's voice comprises:
determining, by the wake-up judgment model, whether the text corresponding to the obtained user voice contains a wake word;
in response to determining that the text corresponding to the obtained user voice contains the wake word, obtaining a judging result indicating that the wake operation is executed.
5. The method according to any one of claims 1-3, wherein judging, by the wake-up judgment model and based on the reference information, whether to execute the wake operation in response to obtaining the user's voice comprises:
determining, by the wake-up judgment model, whether the text corresponding to the user's voice contains a keyword indicating an operation the smart device cannot execute;
in response to determining that the text corresponding to the user's voice contains a keyword indicating an operation the smart device cannot execute, obtaining a judging result indicating that the wake operation is not executed.
6. An interaction device, comprising:
a judging unit, configured to, in response to obtaining a user's voice, judge, by a wake-up judgment model and based on reference information, whether to execute a wake operation, the reference information comprising: the text corresponding to the user's voice, and the duration between the moment the smart device last executed a wake operation and the moment at which it is judged whether to execute a wake operation;
an execution unit, configured to, in response to obtaining a judging result, execute an operation associated with the judging result.
7. The device according to claim 6, further comprising:
a first collection unit, configured to: in response to obtaining no judging result, play a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the obtained user voice; in response to the obtained feedback voice of the user indicating that the smart device needs to execute the operation associated with the obtained user voice, generate markup information of the obtained user voice, the markup information of the obtained user voice indicating that the smart device executes the wake operation; and generate a training sample for the wake-up judgment model, the training sample comprising: the obtained user voice, and the markup information of the obtained user voice.
8. The device according to claim 6, further comprising:
a second collection unit, configured to: in response to obtaining no judging result, play a guiding interaction sentence, the guiding interaction sentence being used to guide the user to determine whether the smart device needs to execute an operation associated with the user's voice; in response to the obtained feedback voice of the user indicating that no operation associated with the user's voice needs to be executed, generate markup information of the obtained user voice, the markup information of the obtained user voice indicating that the smart device does not execute the wake operation; and generate a training sample for the wake-up judgment model, the training sample comprising: the obtained user voice, and the markup information of the obtained user voice.
9. The device according to any one of claims 6-8, wherein the judging unit comprises:
a first wake judging subunit, configured to determine, by the wake-up judgment model, whether the text corresponding to the obtained user voice contains a wake word, and, in response to determining that the text corresponding to the obtained user voice contains the wake word, obtain a judging result indicating that the wake operation is executed.
10. The device according to any one of claims 6-8, wherein the judging unit comprises:
a second wake judging subunit, configured to determine, by the wake-up judgment model, whether the text corresponding to the user's voice contains a keyword indicating an operation the smart device cannot execute, and, in response to determining that the text corresponding to the user's voice contains a keyword indicating an operation the smart device cannot execute, obtain a judging result indicating that the wake operation is not executed.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201910079020.5A 2019-01-28 2019-01-28 Interaction method and device Active CN109785836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910079020.5A CN109785836B (en) 2019-01-28 2019-01-28 Interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910079020.5A CN109785836B (en) 2019-01-28 2019-01-28 Interaction method and device

Publications (2)

Publication Number Publication Date
CN109785836A true CN109785836A (en) 2019-05-21
CN109785836B CN109785836B (en) 2021-03-30

Family

ID=66502610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910079020.5A Active CN109785836B (en) 2019-01-28 2019-01-28 Interaction method and device

Country Status (1)

Country Link
CN (1) CN109785836B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665708A (en) * 2016-07-29 2018-02-06 科大讯飞股份有限公司 Intelligent sound exchange method and system
US20180108343A1 (en) * 2016-10-14 2018-04-19 Soundhound, Inc. Virtual assistant configured by selection of wake-up phrase
CN108320742A (en) * 2018-01-31 2018-07-24 广东美的制冷设备有限公司 Voice interactive method, smart machine and storage medium
JP2018194844A (en) * 2017-05-19 2018-12-06 ネイバー コーポレーションNAVER Corporation Speech-controlling apparatus, method of operating the same, computer program, and recording medium
CN109036411A (en) * 2018-09-05 2018-12-18 深圳市友杰智新科技有限公司 A kind of intelligent terminal interactive voice control method and device

Also Published As

Publication number Publication date
CN109785836B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
JP6828001B2 (en) Voice wakeup method and equipment
CN109036384B (en) Audio recognition method and device
CN108986805B (en) Method and apparatus for sending information
CN107863108B (en) Information output method and device
CN111261151B (en) Voice processing method and device, electronic equipment and storage medium
KR102309031B1 (en) Apparatus and Method for managing Intelligence Agent Service
KR102489914B1 (en) Electronic Device and method for controlling the electronic device
US20200193969A1 (en) Method and apparatus for generating model
US11056114B2 (en) Voice response interfacing with multiple smart devices of different types
CN109634132A (en) Smart home management method, device, medium and electronic equipment
CN107733722B (en) Method and apparatus for configuring voice service
JP2020034895A (en) Responding method and device
CN109545193A (en) Method and apparatus for generating model
CN109376363A (en) A kind of real-time voice interpretation method and device based on earphone
CN111312233A (en) Voice data identification method, device and system
CN111105786A (en) Multi-sampling-rate voice recognition method, device, system and storage medium
CN109684624A (en) A kind of method and apparatus in automatic identification Order Address road area
CN110570855A (en) system, method and device for controlling intelligent household equipment through conversation mechanism
CN110211564A (en) Phoneme synthesizing method and device, electronic equipment and computer-readable medium
CN112735418A (en) Voice interaction processing method and device, terminal and storage medium
CN109087627A (en) Method and apparatus for generating information
CN108628863A (en) Information acquisition method and device
CN109785836A (en) Exchange method and device
CN109887490A (en) The method and apparatus of voice for identification
US11442692B1 (en) Acoustic workflow system distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant