CN109949795A - Method and device for controlling smart device interaction - Google Patents
Method and device for controlling smart device interaction
- Publication number: CN109949795A
- Application number: CN201910205210.7A
- Authority
- CN
- China
- Prior art keywords
- interactive object
- smart device
- target language
- language
- voice information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a method and device for controlling smart device interaction. The method includes: determining the target language required for a smart device to interact with an interactive object; and, if the target language differs from the currently configured language, controlling the smart device to interact with the interactive object in the target language. By first determining the target language required for the interaction, the smart device can face interactive objects who speak different languages and interact with each of them in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Description
Technical field
Embodiments of the present invention relate to the technical field of smart devices, and in particular to a method and device for controlling smart device interaction.
Background art
Smart devices are increasingly used in service scenarios; for example, a service robot can initiate human-computer interaction based on voice signals. The development of intelligent speech technology has driven the popularization of service robots, and auxiliary interaction means built on top of speech technology have greatly improved the human-computer interaction experience.
At present, the human-computer interaction of a smart device is mainly based on a voice service in a single language; it is only applicable to the local language of one country and cannot serve foreign visitors. A smart device that can provide human-computer interaction for people speaking different languages is therefore needed.
Summary of the invention
Embodiments of the present invention provide a method and device for controlling smart device interaction, so as to provide services for users of different languages.
A method for controlling smart device interaction provided by an embodiment of the present invention comprises:
determining the target language required for a smart device to interact with an interactive object;
if the target language differs from the currently configured language, controlling the smart device to interact with the interactive object using the target language.
In the above technical solution, by first determining the target language required for the smart device to interact with the interactive object, the smart device can, when facing interactive objects of different languages, interact with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Optionally, determining the target language required for the smart device to interact with the interactive object comprises:
determining, according to collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object.
In the above technical solution, the target language is determined from collected voice information and/or image information, so that when facing interactive objects of different languages the smart device interacts with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Optionally, determining, according to the collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object comprises:
if voice information of the interactive object is collected, identifying, according to the voice information of the interactive object, the language corresponding to the voice information, and determining the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
In the above technical solution, the language identified from the collected voice information is taken as the target language, so that when facing interactive objects of different languages the smart device interacts with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Optionally, determining, according to the collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object comprises:
collecting image information that includes the interactive object;
identifying feature information of the interactive object according to the image information;
determining, according to the feature information, the language the interactive object is likely to use;
determining the language the interactive object is likely to use as the target language required for the smart device to interact with the interactive object.
In the above technical solution, the language inferred from the feature information in the collected image is taken as the target language, so that when facing interactive objects of different languages the smart device interacts with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Optionally, determining the target language required for the smart device to interact with the interactive object comprises:
controlling the smart device to conduct voice interaction with the interactive object using, in turn, the language currently configured on the smart device and an international common language;
after voice information of the interactive object is collected, identifying, according to the voice information of the interactive object, the language corresponding to the voice information, and determining the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
In the above technical solution, the smart device first conducts voice interaction with the interactive object using its currently configured language and an international common language in turn, then collects the object's voice information and takes the language identified from it as the target language, so that when facing interactive objects of different languages the smart device interacts with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Optionally, after determining the target language required for the smart device to interact with the interactive object, the method further comprises:
switching the display interface of the smart device to the display interface corresponding to the target language.
Optionally, the display interface comprises at least one of the following:
background information corresponding to the target language;
text information corresponding to the target language;
multimedia information corresponding to the target language.
Optionally, controlling the smart device to interact with the interactive object using the target language further comprises:
switching the currently used speech recognition model to the speech recognition model corresponding to the target language;
switching the currently used semantic understanding model to the semantic understanding model corresponding to the target language;
switching the currently used speech synthesis model to the speech synthesis model corresponding to the target language.
Correspondingly, an embodiment of the present invention further provides a device for controlling smart device interaction, comprising:
a determination unit, configured to determine the target language required for a smart device to interact with an interactive object;
an interaction unit, configured to, if the target language differs from the currently configured language, control the smart device to interact with the interactive object using the target language.
Optionally, the determination unit is specifically configured to:
determine, according to collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit is specifically configured to:
if voice information of the interactive object is collected, identify, according to the voice information of the interactive object, the language corresponding to the voice information, and determine the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit is specifically configured to:
collect image information that includes the interactive object;
identify feature information of the interactive object according to the image information;
determine, according to the feature information, the language the interactive object is likely to use;
determine the language the interactive object is likely to use as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit is specifically configured to:
control the smart device to conduct voice interaction with the interactive object using, in turn, the language currently configured on the smart device and an international common language;
after voice information of the interactive object is collected, identify, according to the voice information of the interactive object, the language corresponding to the voice information, and determine the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit is further configured to:
after determining the target language required for the smart device to interact with the interactive object, switch the display interface of the smart device to the display interface corresponding to the target language.
Optionally, the display interface comprises at least one of the following:
background information corresponding to the target language;
text information corresponding to the target language;
multimedia information corresponding to the target language.
Optionally, the interaction unit is further configured to:
switch the currently used speech recognition model to the speech recognition model corresponding to the target language;
switch the currently used semantic understanding model to the semantic understanding model corresponding to the target language;
switch the currently used speech synthesis model to the speech synthesis model corresponding to the target language.
Correspondingly, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the above method for controlling smart device interaction.
Correspondingly, an embodiment of the present invention further provides a computing device, comprising:
a memory, configured to store program instructions;
a processor, configured to call the program instructions stored in the memory and execute, according to the obtained program, the above method for controlling smart device interaction.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a system architecture provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for controlling smart device interaction provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device for controlling smart device interaction provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another device for controlling smart device interaction provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 shows a system architecture provided by an embodiment of the present invention. As shown in Fig. 1, the system architecture may be a smart device 100 including components such as a radio frequency (RF) circuit 110, a memory 120, an input unit 130, a wireless fidelity (WiFi) module 170, a display unit 140, a sensor 150, an audio circuit 160, a processor 180 and a power supply 190.
Those skilled in the art will understand that the structure of the smart device 100 shown in Fig. 1 is merely illustrative and not limiting; the smart device 100 may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The RF circuit 110 may be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 180 for processing, and it sends uplink data of the smart device 100 to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer and the like. In addition, the RF circuit 110 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS) and so on.
The memory 120 may be used to store software programs and modules, and the processor 180 executes the various functional applications and data processing of the device by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function) and the like, and the data storage area may store data created according to the use of the device (such as audio data and a phone book). In addition, the memory 120 may include a high-speed random access memory, and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart device 100. Specifically, the input unit 130 may include a touch panel 131, a camera device 132 and other input devices 133. The camera device 132 may photograph the user and send the user's facial image to the processor for face recognition, so as to finally identify whether the user is a local. The touch panel 131, also referred to as a touch screen, may collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel 131 with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 131 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal produced by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. Furthermore, the touch panel 131 may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave types. In addition to the touch panel 131 and the camera device 132, the input unit 130 may further include other input devices 133. Specifically, the other input devices 133 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick and the like.
The display unit 140 may be used to display information input by the user or information provided to the user and the various menus of the smart device 100. The display unit 140 may include a display panel 141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display and the like. Further, the touch panel 131 may cover the display panel 141; after the touch panel 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
In the embodiment of the present invention, the display panel 141, whose visual output the interactive object can recognize, may serve as a display device for displaying text information or image information. Although in Fig. 1 the touch panel 131 and the display panel 141 are two independent components implementing the input and output functions of the smart device 100, in some embodiments the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the smart device 100.
In addition, the smart device 100 may further include at least one sensor 150, for example an attitude sensor, a distance sensor and other sensors.
In the embodiment of the present invention, other sensors such as a barometer, a hygrometer, a thermometer and an infrared sensor may also be configured as the sensor 150, and details are not described here.
An optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the smart device 100 is moved close to the ear.
The audio circuit 160, a loudspeaker 161 and a microphone 162 can provide an audio interface between the user and the smart device 100. The audio circuit 160 can convert the received audio data into an electrical signal and transfer it to the processor 180 for language identification.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the smart device 100 can help the user send and receive email, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 170, it can be understood that it is not an essential component of the smart device 100 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the smart device 100. It connects the various parts of the entire smart device 100 through various interfaces and lines, and performs the various functions of the smart device 100 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the smart device 100 as a whole. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication.
It can be understood that the above modem processor may also not be integrated into the processor 180.
Optionally, the smart device 100 may further include at least one motor 190. Since the smart device 100 is a power-consuming device, the motor 190 may be a micro motor, and multiple motors 190 may be configured for the device according to the power that a motor can provide.
The smart device 100 further includes a power supply (not shown in the figure) that supplies power to the various components.
Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system. Although not shown, the smart device 100 may further include a Bluetooth module, an earphone interface and the like, and details are not described here.
It should be noted that the smart device 100 in the embodiment of the present invention may be an intelligent robot, a smart speaker or the like, or may be another smart device such as a smartphone or a tablet (PAD).
Based on the foregoing description, Fig. 2 exemplarily shows a flow of controlling smart device interaction provided by an embodiment of the present invention. The flow may be executed by the controller of the smart device or by a cloud server; the embodiment of the present invention does not limit the executing entity.
As shown in Fig. 2, the flow specifically includes:
Step 201: determine the target language required for the smart device to interact with the interactive object.
Step 202: if the target language differs from the currently configured language, control the smart device to interact with the interactive object using the target language.
Since the language used by the interactive object when interacting with the device may differ from the language currently configured on the smart device, the smart device needs to determine, before interacting with the interactive object, the target language required for the interaction.
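As a rough illustration of steps 201 and 202, the following Python sketch shows the switch-only-when-needed control flow. It is a minimal toy model rather than the patented implementation; the `SmartDevice` class and its methods are hypothetical names introduced only for this example.

```python
from dataclasses import dataclass

@dataclass
class SmartDevice:
    """Toy stand-in for the smart device; only the language state matters here."""
    current_language: str = "zh"  # currently configured language

    def switch_language(self, language: str) -> None:
        # In a real device this would also switch the ASR/NLU/TTS models
        # and the display interface (see the later sections).
        self.current_language = language

def handle_interaction(device: SmartDevice, target_language: str) -> None:
    """Steps 201-202: switch only when the target language differs."""
    if target_language != device.current_language:
        device.switch_language(target_language)
    # ... then interact with the object using device.current_language ...

device = SmartDevice()
handle_interaction(device, "en")   # e.g. an English-speaking interactive object
print(device.current_language)     # -> "en"
```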
In a specific implementation, when determining the target language required for the smart device to interact with the interactive object, the device may determine the target language according to collected voice information and/or image information of the interactive object. That is, the target language may be determined in several ways, which are described below.
Implementation one
The interactive object actively initiates a voice interaction; in this case, the smart device can collect the voice information of the interactive object.
When the smart device collects the voice information of the interactive object, it can identify, from that voice information, the language corresponding to the voice information, and then determine that language as the target language required for the smart device to interact with the interactive object.
For example, when the smart device provides services at a railway station and the interactive object wants to ask how to get to the nearest subway station, the interactive object may actively initiate an interaction with the smart device, such as: "Excuse me, how can I get to the nearest subway station?" After the microphone array of the smart device collects this voice information, the device can identify that the language corresponding to the voice information is English, and English is then determined as the target language required for the smart device to interact with the interactive object. If the voice information actively initiated by the interactive object is a Japanese sentence asking how to get to the nearest subway station (for example 「最寄りの地下鉄駅へはどう行けばいいですか」), the device identifies that the language corresponding to the voice information is Japanese and determines Japanese as the target language required for the smart device to interact with the interactive object.
Of course, in addition to identifying the language of the voice information itself, the language of the interactive object can also be identified from the content of the voice information. For example, if the interactive object says to the smart device "Can you speak Spanish?", the interactive object is using English at that moment, but is asking whether the smart device can speak Spanish; therefore, from the content of the voice information, the device can identify that the language of this passenger should be Spanish rather than English.
Implementation two
The interactive object does not actively interact, but the camera of the smart device can collect image information that includes the interactive object's face and/or body; in this case, the smart device acquires the image information of the interactive object.
The camera of the smart device collects image information that includes the interactive object; the device then identifies feature information of the interactive object from that image information, determines from the feature information the language the interactive object is likely to use, and finally sets the language the interactive object is likely to use as the target language required for the smart device to interact with the interactive object. Here the image information may be the facial information of the interactive object or an overall image of the interactive object, that is, whole-body image information, which is not limited.
When the feature information of the interactive object is sufficiently clear, the language the interactive object is likely to use can be obtained directly from the feature information. For example, when the camera of the smart device collects the facial information of the interactive object and the feature information obtained from it indicates a Western appearance such as blue eyes and blond hair, the device may determine that the language the interactive object is likely to use is English, and English can then be determined as the target language required for the smart device to interact with the interactive object.
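A minimal sketch of mapping recognized appearance features to a candidate language, assuming some upstream face or body analysis has already produced a set of attribute labels. The attribute names, the mapping table and the fallback behaviour are all illustrative assumptions; a deployed system would more likely use a trained classifier and treat the result only as a low-confidence guess to be confirmed by speech (see implementations three and four).

```python
# Toy mapping from recognized appearance attributes to a candidate language.
# Both the attribute labels and the mapping are assumptions for illustration.
FEATURE_TO_LANGUAGE = {
    frozenset({"western_appearance", "blue_eyes", "blond_hair"}): "en",
    frozenset({"east_asian_appearance"}): "zh",
}

def guess_language_from_features(features: set) -> str | None:
    """Return a candidate language, or None when no rule matches."""
    for feature_set, language in FEATURE_TO_LANGUAGE.items():
        if feature_set <= features:          # all required attributes present
            return language
    return None  # not confident enough; fall back to implementations three/four

print(guess_language_from_features({"western_appearance", "blue_eyes",
                                    "blond_hair", "glasses"}))  # -> "en"
```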
Implementation three
When the language that the interactive object is likely to use, as determined from the image information in implementation two above, is not the interactive object's actual language, the smart device can be controlled to interact with the interactive object using that likely language; the interactive object can then reply with voice information based on the interaction information issued by the smart device. The language of the interactive object can then be identified again from the voice information collected by the smart device. Since the language identified from the voice information is more accurate, the language corresponding to the voice information can be determined as the target language required for interacting with the interactive object.
For example, the image information suggests that the language the interactive object is likely to use is English, so the smart device outputs the interaction information "What can I do for you?" to the interactive object. If, after receiving this interaction information, the interactive object asks the smart device "Can you speak Spanish?", the interactive object is using English at that moment but is asking whether the smart device can speak Spanish; therefore, from the content of the voice information, the device can identify that the language of the interactive object should be Spanish rather than English.
As another example, the image information suggests that the language the interactive object is likely to use is English, so the device outputs the interaction information "What can I do for you?" to the interactive object. If, after receiving this interaction information, the interactive object interacts with the smart device in Japanese, then after the smart device collects the voice information of the interactive object it can perform language identification on that voice information; when it identifies that the language corresponding to the voice information is Japanese, it determines Japanese as the target language required for the smart device to interact with the interactive object and controls the smart device to output Japanese to interact with the interactive object.
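Implementation three can be sketched as a short greet-then-listen exchange. The `FakeDevice` stub and its `say`/`listen`/`identify_language` methods are placeholders standing in for the device's TTS output, microphone capture and spoken-language identification; none of them are APIs defined by the patent.

```python
class FakeDevice:
    """Stand-in exposing the three calls this sketch assumes a device has."""
    def say(self, text: str, language: str) -> None:
        print(f"[{language}] {text}")
    def listen(self) -> bytes:
        return b"...audio bytes..."          # pretend microphone capture
    def identify_language(self, audio: bytes) -> str:
        return "ja"                          # pretend the reply was Japanese

def confirm_language_by_dialogue(device, guessed_language: str) -> str:
    """Greet in the guessed language (here from the image), then trust the
    language identified from the spoken reply, which is more reliable."""
    device.say("What can I do for you?", language=guessed_language)
    reply_audio = device.listen()
    return device.identify_language(reply_audio)

print(confirm_language_by_dialogue(FakeDevice(), "en"))  # -> "ja"
```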
Implementation four
This implementation includes the following scenarios:
In the first scenario, when the voice information of the interactive object cannot be collected, the smart device can be directly controlled to conduct voice interaction with the interactive object using, in turn, the language currently configured on the smart device and an international common language. Then, after the voice information of the interactive object is collected, the language corresponding to the voice information is identified from that voice information and determined as the target language required for the smart device to interact with the interactive object.
For example, the language currently configured on the smart device is Chinese and the international common language is English. When the smart device cannot collect the voice information of the interactive object, it can be controlled to conduct voice interaction with the interactive object in Chinese and English in turn. If, upon receiving the interaction information of the smart device, the interactive object asks "Can you speak Spanish?", the interactive object is using English at that moment but is asking whether the smart device can speak Spanish; therefore, from the content of the voice information, the device can identify that the language of the interactive object should be Spanish.
In the second scenario, the voice information of the interactive object cannot be collected, and although image information has been collected, the language cannot be judged from it. In this case, the smart device can be controlled to conduct voice interaction with the interactive object using, in turn, its currently configured language and an international common language. Then, after the voice information of the interactive object is collected, the language corresponding to the voice information is identified from that voice information and determined as the target language required for the smart device to interact with the interactive object.
For example, the language currently configured on the smart device is Chinese and the international common language is English. If the voice information of the interactive object cannot be collected, and the feature information identified from the image information collected by the smart device cannot determine the language the interactive object is likely to use, the smart device is controlled to conduct voice interaction with the interactive object in Chinese and English in turn. If, after receiving the interaction information of the smart device, the interactive object interacts with the smart device in Japanese, then after the voice information of the interactive object is collected, language identification can be performed on it; when the language corresponding to the voice information is identified as Japanese, Japanese is determined as the target language required for the smart device to interact with the interactive object, and the smart device is controlled to output Japanese to interact with the interactive object.
In the third scenario, neither voice information nor image information of the interactive object can be collected. In this case, the smart device can be controlled to conduct voice interaction with the interactive object using, in turn, its currently configured language and an international common language. Then, after the voice information of the interactive object is collected, the language corresponding to the voice information is identified from that voice information and determined as the target language required for the smart device to interact with the interactive object.
For example, the language currently configured on the smart device is Chinese and the international common language is English. When neither voice information nor image information can be collected, the smart device can be controlled to conduct voice interaction with the interactive object in Chinese and English in turn. If, after receiving the interaction information of the smart device, the interactive object interacts with the smart device in Japanese, then after the voice information of the interactive object is collected, language identification can be performed on it; when the language corresponding to the voice information is identified as Japanese, Japanese is determined as the target language required for the smart device to interact with the interactive object, and the smart device is controlled to output Japanese to interact with the interactive object.
In implementation four, regardless of whether the voice information or image information of the interactive object can be collected, the smart device can be controlled to conduct voice interaction with the interactive object using, in turn, the currently configured language and an international common language. If, after receiving the interaction information of the smart device, the interactive object interacts with the smart device in a language other than these two, then after the voice information of the interactive object is collected, language identification can be performed on it, the identified language is determined as the target language required for the smart device to interact with the interactive object, and the smart device is controlled to interact with the interactive object using the target language.
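Implementation four reduces to prompting in the configured language and an international common language in turn until a reply is collected. The sketch below assumes a device object like the `FakeDevice` stub above, extended with a `listen(timeout_s=...)` call that returns `None` on silence; the method names, the timeout parameter and the greetings are all illustrative assumptions.

```python
def prompt_in_turn(device, configured_language: str = "zh",
                   international_language: str = "en") -> str:
    """When neither voice nor image yields a language, greet alternately in the
    configured and the international language, then identify the reply's language."""
    greetings = {
        configured_language: "您好，请问有什么可以帮您？",
        international_language: "Hello, what can I do for you?",
    }
    for language in (configured_language, international_language):
        device.say(greetings[language], language=language)
        reply_audio = device.listen(timeout_s=3)      # assumed timeout parameter
        if reply_audio is not None:
            return device.identify_language(reply_audio)
    return configured_language  # no reply at all: keep the configured language
```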
Based on any of the above embodiments, after the target language required for the smart device to interact with the interactive object is determined, the display interface of the smart device can also be switched to the display interface corresponding to the target language. For example, in China the currently configured language is Chinese and the display interface of the smart device is also in Chinese; this Chinese display interface can be switched to the display interface corresponding to the target language.
The display interface here may include at least one of the following: background information corresponding to the target language, text information corresponding to the target language, and multimedia information corresponding to the target language. By switching the display interface to the one corresponding to the target language, the interactive object can see the displayed content more intuitively.
For example, after English is determined as the target language required for the smart device to interact with the interactive object, the background information in the display interface of the smart device is switched to the background information corresponding to English, the text information is switched to English text, and the music, advertisement or video currently being played can be switched to the English version. The background information, text information and multimedia information corresponding to each language can be preset and stored in the memory.
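Switching the display interface can be sketched as a per-language resource lookup. The resource table, the file names and the `display` methods below are assumptions made for illustration, not resources or APIs defined by the patent.

```python
# Illustrative per-language UI resources, preset and stored on the device.
UI_RESOURCES = {
    "zh": {"background": "bg_zh.png", "strings": "strings_zh.json", "media": "promo_zh.mp4"},
    "en": {"background": "bg_en.png", "strings": "strings_en.json", "media": "promo_en.mp4"},
}

def switch_display_interface(display, target_language: str) -> None:
    """Swap background, text and multimedia resources to the target language.
    `display` is assumed to expose the three setter calls used here."""
    resources = UI_RESOURCES[target_language]
    display.set_background(resources["background"])
    display.load_strings(resources["strings"])
    display.play_media(resources["media"])
```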
When the target language differs from the currently configured language, the smart device can be controlled to interact with the interactive object using the target language. Correspondingly, the currently used speech recognition model can be switched to the speech recognition model corresponding to the target language, the currently used semantic understanding model can be switched to the semantic understanding model corresponding to the target language, and the currently used speech synthesis model can be switched to the speech synthesis model corresponding to the target language. For example, the cloud ASR (Automatic Speech Recognition), NLP (Natural Language Processing) and TTS (Text To Speech) models are all switched to the models corresponding to the target language, so as to better serve the interactive object.
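A minimal sketch of switching the ASR, semantic-understanding (NLU) and TTS models together when the target language changes. The model identifiers are placeholders for illustration; in practice they would point at concrete cloud or on-device models.

```python
from dataclasses import dataclass

@dataclass
class LanguagePipeline:
    """The three per-language models named in the text: ASR, NLU and TTS."""
    asr_model: str
    nlu_model: str
    tts_model: str

# Placeholder model identifiers, one pipeline per supported language.
PIPELINES = {
    "zh": LanguagePipeline("asr-zh", "nlu-zh", "tts-zh"),
    "en": LanguagePipeline("asr-en", "nlu-en", "tts-en"),
    "ja": LanguagePipeline("asr-ja", "nlu-ja", "tts-ja"),
}

def switch_pipeline(current: dict, target_language: str) -> None:
    """Replace the currently used ASR, NLU and TTS models in one step."""
    pipeline = PIPELINES[target_language]
    current["asr"] = pipeline.asr_model
    current["nlu"] = pipeline.nlu_model
    current["tts"] = pipeline.tts_model

current = {"asr": "asr-zh", "nlu": "nlu-zh", "tts": "tts-zh"}
switch_pipeline(current, "en")
print(current)  # -> {'asr': 'asr-en', 'nlu': 'nlu-en', 'tts': 'tts-en'}
```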
Based on any of the above embodiments, in one possible implementation, after switching to the target language the method further includes:
if the interaction between the smart device and the interactive object ends, switching the language currently configured on the smart device back to the default language.
For example, if the default language configured for the smart device is Chinese and the device has switched from Chinese to English, then after the interaction between the smart device and the interactive object is completed, the smart device can be controlled to switch from English back to Chinese, that is, back to the language of the default configuration.
Based on any of the above embodiments, in another possible implementation, after switching to the target language the method further includes:
if the interaction between the smart device and the interactive object ends, setting the target language as the default language.
For example, if the default language configured for the smart device is Chinese and the device has switched from Chinese to English, then after the interaction between the smart device and the interactive object is completed, the smart device can be controlled to set English as the language of the default configuration. In this way, no language switch is needed after the interaction is completed, which reduces the resource overhead of the system.
In a specific implementation, either of the two modes above can be chosen according to the usage scenario, as sketched below.
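The two post-interaction policies can be written as a single function with a policy flag, assuming a device object like the `SmartDevice` sketch earlier extended with a `default_language` attribute. The policy names are illustrative, not terms from the patent.

```python
def end_interaction(device, policy: str = "restore") -> None:
    """Apply one of the two post-interaction policies described above:
    'restore' -- switch back to the original default language;
    'adopt'   -- keep the target language as the new default, saving a later
                 switch (and its resource overhead) if the next user speaks it too."""
    if policy == "restore":
        device.switch_language(device.default_language)
    elif policy == "adopt":
        device.default_language = device.current_language
```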
It should be noted that the flow in any of the above embodiments may be executed by the controller of the smart device or by a cloud server.
The above embodiments show: determining the target language required for the smart device to interact with the interactive object; and, if the target language differs from the currently configured language, controlling the smart device to interact with the interactive object using the target language. By first determining the target language required for the smart device to interact with the interactive object, the smart device can, when facing interactive objects of different languages, interact with each interactive object in that object's own language, thereby interacting with users of different languages and providing the corresponding services.
Based on the same technical idea, Fig. 3 exemplarily shows a device for controlling smart device interaction provided by an embodiment of the present invention. The device can execute the flow of controlling smart device interaction, and may be arranged in the smart device or in a cloud server.
As shown in Fig. 3, the device specifically includes:
a determination unit 301, configured to determine the target language required for a smart device to interact with an interactive object;
an interaction unit 302, configured to, if the target language differs from the currently configured language, control the smart device to interact with the interactive object using the target language.
Optionally, the determination unit 301 is specifically configured to:
determine, according to collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit 301 is specifically configured to:
if voice information of the interactive object is collected, identify, according to the voice information of the interactive object, the language corresponding to the voice information, and determine the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit 301 is specifically configured to:
collect image information that includes the interactive object;
identify feature information of the interactive object according to the image information;
determine, according to the feature information, the language the interactive object is likely to use;
determine the language the interactive object is likely to use as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit 301 is specifically configured to:
control the smart device to conduct voice interaction with the interactive object using, in turn, the language currently configured on the smart device and an international common language;
after voice information of the interactive object is collected, identify, according to the voice information of the interactive object, the language corresponding to the voice information, and determine the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
Optionally, the determination unit 301 is further configured to:
after determining the target language required for the smart device to interact with the interactive object, switch the display interface of the smart device to the display interface corresponding to the target language.
Optionally, the display interface comprises at least one of the following:
background information corresponding to the target language;
text information corresponding to the target language;
multimedia information corresponding to the target language.
Optionally, the interaction unit 302 is further configured to:
switch the currently used speech recognition model to the speech recognition model corresponding to the target language;
switch the currently used semantic understanding model to the semantic understanding model corresponding to the target language;
switch the currently used speech synthesis model to the speech synthesis model corresponding to the target language.
Based on the same technical idea, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the above method of smart device interaction.
Based on the same technical idea, an embodiment of the present invention further provides a computing device, comprising:
a memory, configured to store program instructions;
a processor, configured to call the program instructions stored in the memory and execute, according to the obtained program, the above method of smart device interaction.
Based on the same technical idea, an embodiment of the present invention provides an electronic device, which may be the controller of the smart device or a server. As shown in Fig. 4, the electronic device includes at least one processor 401 and a memory 402 connected to the at least one processor. The embodiment of the present invention does not limit the specific connection medium between the processor 401 and the memory 402; in Fig. 4 they are connected by a bus, which may be divided into an address bus, a data bus, a control bus and so on.
In the embodiment of the present invention, the memory 402 stores instructions executable by the at least one processor 401, and by executing the instructions stored in the memory 402, the at least one processor 401 can perform the steps included in the above method of controlling smart device interaction.
The processor 401 is the control center of the electronic device. It can connect the various parts of the device through various interfaces and lines, and implements the control of smart device interaction by running or executing the instructions stored in the memory 402 and calling the data stored in the memory 402. Optionally, the processor 401 may include one or more processing units, and the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401. In some embodiments, the processor 401 and the memory 402 may be implemented on the same chip; in some other embodiments, they may be implemented on separate chips.
The processor 401 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application specific integrated circuit (ASIC), a field programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of smart device interaction may be directly embodied as being completed by a hardware processor, or completed by a combination of hardware and software modules in the processor.
The memory 402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules. The memory 402 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (RAM), a static random access memory (SRAM), a programmable read-only memory (PROM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a magnetic memory, a magnetic disk, an optical disc and the like. The memory 402 may be any medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 402 in the embodiment of the present invention may also be a circuit or any other apparatus capable of implementing a storage function, and is used to store program instructions and/or data.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these modifications and variations.
Claims (10)
1. A method for controlling smart device interaction, characterized in that the method comprises:
determining the target language required for a smart device to interact with an interactive object;
if the target language differs from the currently configured language, controlling the smart device to interact with the interactive object using the target language.
2. The method according to claim 1, characterized in that determining the target language required for the smart device to interact with the interactive object comprises:
determining, according to collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object.
3. The method according to claim 2, characterized in that determining, according to the collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object comprises:
if voice information of the interactive object is collected, identifying, according to the voice information of the interactive object, the language corresponding to the voice information, and determining the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
4. The method according to claim 2 or 3, characterized in that determining, according to the collected voice information and/or image information of the interactive object, the target language required for the smart device to interact with the interactive object comprises:
collecting image information that includes the interactive object;
identifying feature information of the interactive object according to the image information;
determining, according to the feature information, the language the interactive object is likely to use;
determining the language the interactive object is likely to use as the target language required for the smart device to interact with the interactive object.
5. The method according to claim 1, characterized in that determining the target language required for the smart device to interact with the interactive object comprises:
controlling the smart device to conduct voice interaction with the interactive object using, in turn, the language currently configured on the smart device and an international common language;
after voice information of the interactive object is collected, identifying, according to the voice information of the interactive object, the language corresponding to the voice information, and determining the language corresponding to the voice information as the target language required for the smart device to interact with the interactive object.
6. The method according to any one of claims 1 to 3 or 5, characterized in that after determining the target language required for the smart device to interact with the interactive object, the method further comprises:
switching the display interface of the smart device to the display interface corresponding to the target language.
7. The method according to claim 1, characterized in that controlling the smart device to interact with the interactive object using the target language further comprises:
switching the currently used speech recognition model to the speech recognition model corresponding to the target language;
switching the currently used semantic understanding model to the semantic understanding model corresponding to the target language;
switching the currently used speech synthesis model to the speech synthesis model corresponding to the target language.
8. A device for controlling smart machine interaction, comprising:
a determination unit, configured to determine the target language to be used for the interaction between the smart machine and an interactive object; and
an interaction unit, configured to control the smart machine to interact with the interactive object using the target language if the target language is inconsistent with the currently configured language.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions for causing a computer to execute the method according to any one of claims 1 to 7.
10. A computing device, comprising:
a memory, configured to store program instructions; and
a processor, configured to call the program instructions stored in the memory and to execute, according to the obtained program, the method according to any one of claims 1 to 7.
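The claims above lend themselves to a brief illustration. The two Python sketches below are editorial additions: every function name, the keyword table, and the LanguagePack registry are assumptions introduced for illustration and are not part of the patent.

Claims 2 to 5 select the target language from collected voice information (by identifying the language of the utterance) or, failing that, from characteristic information recognized in an image of the interactive object, with a bilingual greeting as a last-resort probe. A minimal sketch of that decision flow, assuming a toy keyword matcher in place of a trained language-identification model:

```python
# Editorial sketch: the keyword table and function names are illustrative assumptions,
# not the patent's implementation; a real smart machine would use trained
# language-identification and vision models.

CONFIGURED_LANGUAGE = "zh-CN"      # language the smart machine is currently configured with
INTERNATIONAL_LANGUAGE = "en-US"   # the "international language" used for the probe in claim 5

# Toy language identifier: looks for a few characteristic words in a transcript.
KEYWORDS = {
    "zh-CN": ("你好", "请问"),
    "en-US": ("hello", "please"),
    "ja-JP": ("こんにちは",),
}

def identify_language(transcript):
    """Return the language whose keywords appear in the transcript, or None (claim 3)."""
    text = transcript.lower()
    for lang, keys in KEYWORDS.items():
        if any(k.lower() in text for k in keys):
            return lang
    return None

def determine_target_language(transcript=None, likely_language_from_image=None):
    """Decide the target language for the interaction (claims 2 to 5)."""
    if transcript:
        lang = identify_language(transcript)
        if lang:
            return lang
    if likely_language_from_image:
        # Claim 4: a vision step has already mapped the person's characteristic
        # information to a likely language; use it as the target language.
        return likely_language_from_image
    # Claim 5 fallback: greet in both CONFIGURED_LANGUAGE and INTERNATIONAL_LANGUAGE,
    # then re-run identification on the reply (the probe itself is not simulated here).
    return CONFIGURED_LANGUAGE

if __name__ == "__main__":
    print(determine_target_language("Hello, please show me to the lobby"))   # -> en-US
    print(determine_target_language(likely_language_from_image="ja-JP"))     # -> ja-JP
```

Claims 6 and 7 spell out what switching to the target language entails: the speech recognition, semantic understanding, and speech synthesis models, as well as the display interface, are all replaced by their target-language counterparts. A minimal sketch of that switch, assuming a hypothetical per-language resource registry:

```python
# Editorial sketch of the switching step in claims 6 and 7. The LanguagePack registry and
# the SmartMachine class are assumptions for illustration; the patent only states that the
# speech recognition, semantic understanding, and speech synthesis models and the display
# interface are switched to the target language.

from dataclasses import dataclass

@dataclass
class LanguagePack:
    asr_model: str          # speech recognition model
    nlu_model: str          # semantic understanding model
    tts_model: str          # speech synthesis model
    display_interface: str  # UI resources for the language

LANGUAGE_PACKS = {
    "zh-CN": LanguagePack("asr-zh", "nlu-zh", "tts-zh", "ui-zh"),
    "en-US": LanguagePack("asr-en", "nlu-en", "tts-en", "ui-en"),
}

class SmartMachine:
    def __init__(self, configured_language="zh-CN"):
        self.language = configured_language
        self.pack = LANGUAGE_PACKS[configured_language]

    def interact(self, target_language):
        # Claim 1: switch only when the target language differs from the current
        # configuration and a matching language pack is available.
        if target_language != self.language and target_language in LANGUAGE_PACKS:
            self.pack = LANGUAGE_PACKS[target_language]  # claim 7: swap ASR/NLU/TTS models
            self.language = target_language              # claim 6: the display UI follows
        print(f"interacting in {self.language} via {self.pack.display_interface}")

if __name__ == "__main__":
    machine = SmartMachine()
    machine.interact("en-US")   # switches the models and UI, then interacts in English
```

Bundling the three models and the interface into one per-language pack keeps the switch atomic, so the device never recognizes speech in one language while synthesizing replies in another.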
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910205210.7A CN109949795A (en) | 2019-03-18 | 2019-03-18 | A kind of method and device of control smart machine interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109949795A (en) | 2019-06-28 |
Family
ID=67008278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910205210.7A (CN109949795A, pending) | A kind of method and device of control smart machine interaction | 2019-03-18 | 2019-03-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109949795A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1441402A (en) * | 2003-04-03 | 2003-09-10 | 上海交通大学 | Information exchange method between different languages |
CN1677391A (en) * | 2005-03-01 | 2005-10-05 | 陈汉奕 | Internet accessing multi-language intelligent identifying system and method |
US10102844B1 (en) * | 2016-03-29 | 2018-10-16 | Amazon Technologies, Inc. | Systems and methods for providing natural responses to commands |
CN105895083A (en) * | 2016-05-30 | 2016-08-24 | 珠海市魅族科技有限公司 | Information processing method and device |
CN108161933A (en) * | 2017-12-07 | 2018-06-15 | 北京康力优蓝机器人科技有限公司 | Interactive mode selection method, system and reception robot |
CN108470563A (en) * | 2018-03-21 | 2018-08-31 | 上海木爷机器人技术有限公司 | Method for switching languages, server and system in a kind of interactive voice |
CN109032379A (en) * | 2018-07-25 | 2018-12-18 | 维沃移动通信有限公司 | A kind of choice of language display methods and terminal |
Non-Patent Citations (1)
Title |
---|
Zhang Lei et al.: "大学计算机基础" [Fundamentals of University Computer Science], 31 August 2016 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414014A (en) * | 2019-08-05 | 2019-11-05 | 珠海格力电器股份有限公司 | Voice equipment control method and device, storage medium and voice equipment |
CN110569726A (en) * | 2019-08-05 | 2019-12-13 | 北京云迹科技有限公司 | interaction method and system for service robot |
CN110928588A (en) * | 2019-11-19 | 2020-03-27 | 珠海格力电器股份有限公司 | Method and device for adjusting terminal configuration, mobile terminal and storage medium |
CN111581362A (en) * | 2020-04-29 | 2020-08-25 | 联想(北京)有限公司 | Processing method and device |
CN112102547A (en) * | 2020-08-18 | 2020-12-18 | 深圳市视美泰技术股份有限公司 | Threshold language matching method and device, storage medium and intelligent equipment |
CN112863521A (en) * | 2020-12-24 | 2021-05-28 | 哈尔滨理工大学 | Speaker identification method based on mutual information estimation |
CN113053389A (en) * | 2021-03-12 | 2021-06-29 | 云知声智能科技股份有限公司 | Voice interaction system and method for switching languages by one key and electronic equipment |
CN114461110A (en) * | 2021-12-30 | 2022-05-10 | 惠州华阳通用电子有限公司 | Vehicle-mounted menu language changing method and storage medium |
CN114464179A (en) * | 2022-01-28 | 2022-05-10 | 达闼机器人股份有限公司 | Voice interaction method, system, device, equipment and storage medium |
WO2023143439A1 (en) * | 2022-01-28 | 2023-08-03 | 达闼机器人股份有限公司 | Speech interaction method, system and apparatus, and device and storage medium |
CN114464179B (en) * | 2022-01-28 | 2024-03-19 | 达闼机器人股份有限公司 | Voice interaction method, system, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109949795A (en) | A kind of method and device of control smart machine interaction | |
EP3567584B1 (en) | Electronic apparatus and method for operating same | |
CN110910872B (en) | Voice interaction method and device | |
CN108549519B (en) | Split screen processing method and device, storage medium and electronic equipment | |
CN104298436B (en) | A kind of quickly revert operating method and terminal | |
CN105828145B (en) | Interactive approach and device | |
US20220214894A1 (en) | Command execution method, apparatus, and device | |
CN111147660B (en) | Control operation method and electronic equipment | |
CN104049745A (en) | Input control method and electronic device supporting the same | |
CN106293375B (en) | A kind of method for changing scenes and equipment | |
CN104378441A (en) | Schedule creating method and device | |
CN103813127B (en) | A kind of video call method, terminal and system | |
CN108958629B (en) | Split screen quitting method and device, storage medium and electronic equipment | |
CN107924286A (en) | The input method of electronic equipment and electronic equipment | |
CN106204423A (en) | A kind of picture-adjusting method based on augmented reality, device and terminal | |
CN108958680A (en) | Display control method, device, display system and computer readable storage medium | |
CN105302804B (en) | Display methods, terminal and the server of technical account | |
CN106940997A (en) | A kind of method and apparatus that voice signal is sent to speech recognition system | |
CN108958587A (en) | split screen processing method, device, storage medium and electronic equipment | |
CN110471604A (en) | A kind of more application switching methods and relevant apparatus | |
CN109491632A (en) | A kind of resource sharing method and terminal | |
CN106327342A (en) | Emoji package processing method and terminal | |
CN108052356A (en) | A kind of method and terminal device for starting calculator | |
CN111897916B (en) | Voice instruction recognition method, device, terminal equipment and storage medium | |
CN110278481A (en) | Picture-in-picture implementing method, terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190628 |