CN110286771B - Interaction method, device, intelligent robot, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110286771B
CN110286771B (application CN201910578601.3A)
Authority
CN
China
Prior art keywords
interaction
user
intelligent robot
unit
interaction data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578601.3A
Other languages
Chinese (zh)
Other versions
CN110286771A (en)
Inventor
蒋志轩
郝凯阳
靳国强
蔡孟笈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201910578601.3A priority Critical patent/CN110286771B/en
Publication of CN110286771A publication Critical patent/CN110286771A/en
Application granted granted Critical
Publication of CN110286771B publication Critical patent/CN110286771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L2015/227Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of the speaker; Human-factor methodology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an interaction method, an interaction device, an intelligent robot, electronic equipment and a storage medium. The method is executed by the intelligent robot, which is provided with an interaction unit and a display unit that are separately arranged and in data communication with each other, the interaction unit comprising a camera and the display unit comprising a display screen. The method comprises the steps of: collecting an environment image with the camera; when a user gazing at the intelligent robot is presented in the environment image, executing a preset interaction process to determine user features; determining interaction data corresponding to the user features; and, based on the corresponding interaction data, interacting with the user via the interaction unit in combination with the display screen. According to the invention, interaction and display can be performed synchronously, interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirements of users are met, and the intelligent interaction effect of the intelligent robot is improved.

Description

Interaction method, device, intelligent robot, electronic equipment and storage medium
Technical Field
The present invention relates to the field of electronic devices, and in particular, to an interaction method, an interaction device, an intelligent robot, an electronic device, and a storage medium.
Background
With the development of artificial intelligence (AI) technology, intelligent robots have emerged and bring great convenience to people's daily lives, which continually enriches their application scenarios. In the related art, intelligent robots that provide services in public places have been developed.
In the related art, when an intelligent robot interacts with a user in its environment, a generic interaction mode is generally adopted: for example, a pre-recorded generic query voice is played to the user, or an interaction interface displaying generic query text is shown.
In this mode, the interaction cannot be targeted to the interaction habits of different users and cannot meet users' personalized interaction requirements.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, an object of the present invention is to provide an interaction method, an apparatus, an intelligent robot, an electronic device, and a storage medium, which can synchronously perform interaction and display, improve interaction efficiency, implement targeted interaction according to interaction habits of different users, satisfy personalized interaction requirements of users, and promote intelligent interaction effects of the intelligent robot.
To achieve the above object, an interaction method according to an embodiment of a first aspect of the present invention is executed by an intelligent robot, where the intelligent robot is provided with an interaction unit and a display unit, the interaction unit and the display unit are separately disposed, data communication is performed between the interaction unit and the display unit, the interaction unit includes a camera, and the display unit includes a display screen. The method includes: collecting an environment image with the camera; when a user gazing at the intelligent robot is presented in the environment image, executing a preset interaction process to determine user features; determining interaction data corresponding to the user features; and, based on the corresponding interaction data, interacting with the user via the interaction unit in combination with the display screen.
In some embodiments, the performing a preset interaction process to determine the user features includes:
executing a voice interaction process to determine user voice features as the user features; or
executing a preset interaction process to identify physical features of the user as the user features, the physical features including: facial features and/or fingerprint features.
In some embodiments, the interaction data has an associated first feature, and the determining interaction data corresponding to the user feature includes:
Determining whether there is a first feature that matches the user feature;
if the matched first feature exists, taking the interaction data associated with the matched first feature as the corresponding interaction data;
If the matched first feature does not exist, guiding a user to input the user feature;
generating interaction data corresponding to the user characteristics by combining a preset interaction rule;
And associating the user characteristics with the corresponding interaction data, and supplementing the existing interaction data and the associated first characteristics according to the associated user characteristics and the corresponding interaction data.
In some embodiments, when a user looking at the intelligent robot is presented in the environment image, performing a preset interaction process to determine user characteristics includes:
When the number of the users is multiple, determining the duration of time for each user to watch the intelligent robot, wherein the duration is smaller than or equal to a time threshold;
determining a target duration from a plurality of durations;
And executing the preset interaction process to determine the user characteristics of the user to which the target duration belongs.
In some embodiments, when a user looking at the intelligent robot is presented in the environment image, performing a preset interaction process to determine user characteristics includes:
when the number of the users is multiple, playing pre-recorded voice, wherein the pre-recorded voice is used for guiding a target user to input voice data;
Determining the position information of a target user to which the voice data belong;
And executing the preset interaction process based on the position information to determine the characteristics of the target user.
In some embodiments, the performing the preset interaction procedure to determine the characteristics of the target user based on the location information includes:
according to the position information, the direction of the interaction unit is adjusted;
And executing the preset interaction process by adopting an interaction unit with the direction adjusted so as to determine the characteristics of the target user.
According to the interaction method provided by the embodiment of the first aspect of the invention, when the user looking at the intelligent robot is presented in the environment image, the preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics are determined, the interaction is performed with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, the targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
To achieve the above object, an interaction device according to an embodiment of a second aspect of the present invention is provided in an intelligent robot, where the intelligent robot is provided with an interaction unit and a display unit, the interaction unit and the display unit are separately disposed, data communication is performed between the interaction unit and the display unit, the interaction unit includes a camera, and the display unit includes a display screen. The device includes: an acquisition module, configured to acquire an environment image through the camera; an execution module, configured to execute a preset interaction process to determine user features when a user gazing at the intelligent robot is presented in the environment image; a first determining module, configured to determine interaction data corresponding to the user features; and an interaction module, configured to interact with the user via the interaction unit in combination with the display screen based on the corresponding interaction data.
In some embodiments, the execution module comprises:
a voice feature sub-module, configured to execute a voice interaction process to determine user voice features as the user features; or
a physical feature sub-module, configured to execute a preset interaction process to identify physical features of the user as the user features, where the physical features include: facial features and/or fingerprint features.
In some embodiments, the interaction data has an associated first feature, the interaction device further comprising:
A second determining module, configured to determine whether a first feature matching the user feature exists, and when the first feature matching exists, use interaction data associated with the first feature matching as the corresponding interaction data;
a guiding module, configured to guide a user to input a user feature when the matched first feature does not exist;
a generation module, configured to generate interaction data corresponding to the user features in combination with preset interaction rules;
And the association module is used for associating the user characteristics with the corresponding interaction data and supplementing the existing interaction data and the associated first characteristics according to the associated user characteristics and the corresponding interaction data.
In some embodiments, the execution module is further configured to:
When the number of the users is multiple, determining the duration that each user looks at the intelligent robot, wherein the duration is smaller than or equal to a time threshold, determining a target duration from the multiple durations, and executing the preset interaction process to determine the user characteristics of the user to which the target duration belongs.
In some embodiments, the execution module is further configured to:
when the number of the users is multiple, playing pre-recorded voice, wherein the pre-recorded voice is used for guiding a target user to input voice data;
Determining the position information of a target user to which the voice data belong;
And executing the preset interaction process based on the position information to determine the characteristics of the target user.
In some embodiments, the execution module is further configured to:
according to the position information, the direction of the interaction unit is adjusted;
And executing the preset interaction process by adopting an interaction unit with the direction adjusted so as to determine the characteristics of the target user.
According to the interaction device provided by the embodiment of the second aspect of the invention, when the user looking at the intelligent robot is presented in the environment image, the preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics are determined, the interaction is performed with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, the targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
In order to achieve the above object, a third aspect of the present invention further provides an electronic device, which includes a housing, a processor, a memory, a circuit board, and a power circuit, wherein the circuit board is disposed inside a space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic equipment; the memory is used for storing executable program codes; the processor runs a program corresponding to the executable program code stored in the memory by reading the executable program code for executing: collecting an environment image; when a user gazing at the intelligent robot is presented in the environment image, executing a preset interaction process to determine user characteristics; determining interaction data corresponding to the user features; and interacting with the user based on the corresponding interaction data.
The third aspect of the invention also provides an electronic device, which performs a preset interaction process to determine user characteristics when a user looking at the intelligent robot is presented in an environment image, determines interaction data corresponding to the user characteristics, and interacts with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirements of the user are met, and the intelligent interaction effect of the intelligent robot is improved.
To achieve the above object, a fourth aspect of the present invention provides a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the interaction method provided by the embodiment of the first aspect of the present invention.
The fourth aspect of the present invention provides a computer readable storage medium, which performs a preset interaction process to determine user characteristics when a user looking at an intelligent robot is presented in an environment image, determines interaction data corresponding to the user characteristics, and interacts with the user based on the corresponding interaction data, and bears the functions of interaction and display by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
To achieve the above object, a fifth aspect of the present invention provides an intelligent robot, comprising: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the interaction method according to the embodiment of the first aspect of the invention when executing the program.
According to the intelligent robot, when the user looking at the intelligent robot is presented in the environment image, the preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics are determined, interaction is carried out with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be synchronously carried out, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a structure of an intelligent robot according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating an interaction method according to another embodiment of the present invention;
FIG. 4 is a flow chart illustrating an interaction method according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an interactive device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an interaction device according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. On the contrary, the embodiments of the invention include all alternatives, modifications and equivalents as may be included within the spirit and scope of the appended claims.
In order to solve the technical problem that the intelligent robot in the related art cannot conduct targeted interaction according to interaction habits of different users and cannot meet personalized interaction requirements of the users when the intelligent robot is interacted, the embodiment of the invention provides an interaction method.
Fig. 1 is a flow chart of an interaction method according to an embodiment of the invention.
In the embodiment of the invention, the interaction method is executed by an intelligent robot, wherein the intelligent robot can be any device, instrument or machine with calculation processing capability.
In the embodiment of the invention, the intelligent robot can be configured to execute the interaction method when triggered by a start instruction from an administrator; alternatively, it can be configured to intelligently detect the time and automatically execute the interaction method when the detected time falls within a preset time range. For example, since the flow of people in public places is relatively large at noon, the intelligent robot can be configured to execute the interaction method in real time during that period. The configuration is flexible, and the degree of intelligence is high.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a smart robot according to an embodiment of the present invention, and the smart robot 20 includes: the interactive device comprises an interactive unit 21 and a display unit 22, wherein the interactive unit 21 is used for interacting with a user, the interactive unit 21 comprises a camera 211, the display unit 22 is used for displaying, and the display unit 22 comprises a display screen 221. The interaction unit 21 and the display unit 22 are separately provided, and data communication is performed between the interaction unit 21 and the display unit 22.
The interaction unit 21 and the display unit 22 may be disposed up and down, or may be disposed left and right, and in fig. 2, the interaction unit 21 and the display unit 22 are disposed up and down, which is not limited.
In the embodiment of the invention, by separately arranging the interaction unit and the display unit, a smaller screen can serve as the interaction unit through which the user interacts, and a larger screen can serve as the display unit, so that the display screen of the display unit can be made large without being limited by the size of the interaction unit, which benefits display. In addition, during interaction, the interaction unit can interact with the user in combination with the display screen: for example, the display screen displays interaction data while the interaction unit interacts with the user. Since the interaction and display functions are borne by different screens, interaction and display can proceed synchronously, improving interaction efficiency.
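The patent does not specify how the two units communicate; purely as an illustration, the following Python sketch models the described split with an in-process message queue standing in for the data link between the interaction unit and the display unit (the queue, thread, and function names are all assumptions).

```python
import queue
import threading

# Assumed stand-in for the data link between the two units; the patent only
# states that the separately arranged units are in data communication.
display_queue = queue.Queue()

def display_unit_loop():
    """Display unit: renders whatever interaction data it receives."""
    while True:
        content = display_queue.get()
        if content is None:  # sentinel used here to stop the sketch
            break
        print(f"[display screen] {content}")

def interaction_unit_send(content):
    """Interaction unit: pushes interaction data to the display unit."""
    display_queue.put(content)

display_thread = threading.Thread(target=display_unit_loop)
display_thread.start()
interaction_unit_send("Welcome! How may I help you?")  # display while interacting
display_queue.put(None)
display_thread.join()
```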
Moreover, because the interaction unit must not only display but also collect information, it generally needs a touch screen, which is relatively costly; the screen of the display unit only needs to display and can therefore be an ordinary display screen rather than a touch screen. By assigning the interaction and display functions to different screens, a large touch screen becomes unnecessary, which reduces cost to some extent.
Referring to fig. 1, the method includes:
s101: and acquiring an environment image by adopting a camera.
The image corresponding to the environment where the intelligent robot is located may be referred to as an environment image, and the environment image may be specifically an environment picture or an environment video image, which is not limited thereto.
In the specific execution process, the embodiment of the invention can obtain the environment image by shooting directly with the camera arranged on the intelligent robot. On the basis of guaranteeing the interaction function of the intelligent robot, this improves the timeliness of environment image acquisition and thus helps guarantee the interaction effect.
In the embodiment of the invention, the camera of the intelligent robot can be set to collect the environment image in real time or collect the environment image at intervals of preset time, which is not limited.
When the environment image is shot, the time range corresponding to the acquired environment image can be recorded in real time, and the time range can be used for tracing the environment image.
In the embodiment of the invention, not only can the environment images be collected to realize some common monitoring functions, but also the environment images can be adopted to assist subsequent interaction, so that the application function of the intelligent robot is expanded, the interaction is realized by combining the actual environment conditions in the use scene, and the interaction is more targeted.
In the embodiment of the invention, the image processing module can be arranged in the intelligent robot, and when the intelligent robot shoots an environment image through the camera, the environment image can be transmitted to the image processing module in real time, and the image processing module carries out subsequent image recognition.
S102: when a user looking at the intelligent robot is presented in the environment image, a preset interaction process is performed to determine user characteristics.
In some embodiments, when the image processing module recognizes, using a portrait recognition algorithm in the related art, that a portrait exists in the environment image, the gaze angle of the recognized portrait's eyes may further be detected to obtain a target angle. The target angle is compared with a preset angle threshold (which may be set empirically); when the target angle is within the angle threshold, it is determined that the portrait recognized from the environment image is gazing at the intelligent robot.
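A minimal sketch of this gaze check is given below; estimate_gaze_angle() is a hypothetical stand-in for the image processing module's gaze estimator, and the 10-degree threshold is an assumed empirical value, since the text only says the angle threshold may be set empirically.

```python
ANGLE_THRESHOLD_DEG = 10.0  # preset angle threshold (assumed, set empirically)

def estimate_gaze_angle(face_image):
    """Hypothetical gaze estimator; a real module would use eye landmarks.

    Returns a fixed angle here purely so the sketch runs end to end.
    """
    return 4.2

def is_gazing_at_robot(face_image):
    # Compare the detected target angle against the preset angle threshold.
    return abs(estimate_gaze_angle(face_image)) <= ANGLE_THRESHOLD_DEG

print(is_gazing_at_robot(None))  # True with the stub above
```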
In other embodiments, a depth information detection module may be further provided for the intelligent robot, and when the image processing module recognizes that the environment image has a portrait by adopting a portrait recognition algorithm in the related art, the depth information detection module further determines depth information of eyes of the portrait, and then analyzes the depth information to determine whether the portrait looks at the intelligent robot.
The embodiment of the invention considers that a user who is not gazing at the intelligent robot is unlikely to interact with it at that moment. Therefore, the preset interaction process is triggered only when a user gazing at the intelligent robot is presented in the environment image, which ensures both the pertinence and the timeliness of the interaction.
The user features are used to uniquely identify the user. The user features may be certain biological features of the user, or identifiers corresponding to those biological features; for example, during the initial collection of the user's biological features, a corresponding identifier may be generated from the collected biological features according to an identifier generation rule and used as the user's identity, which is not limited.
In the embodiment of the invention, taking some biological features of the user as an example, the user features are, for example, facial contour features, voiceprint features, pupil features, and the like, which is not limiting.
In some embodiments, in the process of executing the preset interaction process to determine the user features, a voice interaction process may be executed to determine the user voice features as the user features. This effectively simplifies the user's input operations, minimizes the learning cost of interacting, increases user stickiness, and improves the user experience.
For example, a voice broadcasting module may be preset in the intelligent robot, and when a user looking at the intelligent robot is presented in an environment image, the preset inquiry voice is triggered to be played, and the inquiry voice may be preset by a factory program of the intelligent robot or may be set by an administrator of a usage scenario of the intelligent robot, which is not limited.
The query voice is, for example, "Please say a sentence," to guide the user to input a piece of speech; of course, the query voice may take any other possible form, which is not limited.
In a specific execution process, a voice listening device (e.g., a microphone) of the intelligent robot may be activated while the pre-recorded query voice is played, and whether a reply voice to the query voice has been received is detected in real time via this device.
In the specific implementation process, a voice recognition module may be preset in the intelligent robot. When a reply voice is detected through the voice listening device, the reply voice may be sent to the voice recognition module in real time, and the voice recognition module analyzes the reply voice using a voice analysis technology in the related art to determine the user voice features.
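The text leaves the voice analysis technique to the related art. As one hedged example, time-averaged MFCCs (computed below with the third-party librosa library, which the patent does not name) could serve as a crude user voice feature.

```python
import numpy as np
import librosa  # third-party library; not named by the patent

def voiceprint_features(wav_path):
    """Crude voiceprint: time-averaged MFCCs of the reply voice.

    MFCC averaging is only one possible stand-in for the unspecified
    'voice analysis technology in the related art'.
    """
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, frames)
    return mfcc.mean(axis=1)  # fixed-length vector usable as a user feature
```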
In other embodiments, in the process of performing the preset interaction process to determine the user characteristics, the physical characteristics of the user may be identified and used as the user characteristics, where the physical characteristics include: facial features and/or fingerprint features, for example, pupil features, it can be seen that the method for determining user features in the embodiment of the present invention is flexible, and an appropriate manner may be selected according to actual use requirements to determine user features.
In a specific implementation, a biometric identification module may be preset in the intelligent robot to identify physical features of the user and to act as user features when it is detected that a user looking at the intelligent robot is present in the environmental image.
S103: interaction data corresponding to the user characteristic is determined.
The interaction data may describe a user's usage habit information, preference information, and the like when using functions of the intelligent robot.
For example, when user A uses the greeting function of the intelligent robot, user A may be accustomed to being greeted in English voice, while when user B uses the business handling function of the intelligent robot, user B may be accustomed to being assisted with business process B. The habit information matched with a user's features, such as greeting user A in English voice or assisting user B with business process B, can be referred to as the corresponding interaction data.
The user features and the corresponding interaction data can be obtained by collecting the usage habit information, preference information, and the like of a large number of users in advance, analyzing them based on big data, and storing the obtained user features together with the corresponding interaction data.
S104: based on the corresponding interaction data, the interaction unit is adopted to interact with the user by combining with the display screen.
For example, if the interaction data matched with the user features of user A indicates that user A expects the intelligent robot to greet in English voice, or expects the intelligent robot to assist in handling business with business process A, the intelligent robot may be controlled to greet in English voice, or controlled to assist in handling the business with business process A.
In the embodiment, when the user looking at the intelligent robot is presented in the environment image, a preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics is determined, interaction is performed with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
Fig. 3 is a flow chart of an interaction method according to another embodiment of the present invention.
Referring to fig. 3, the method includes:
s301: and acquiring an environment image by adopting a camera.
The description of S301 can be found in detail in the above embodiments.
S302: when the number of the users is multiple, determining the duration of time for each user to watch the intelligent robot, wherein the duration is smaller than or equal to the time threshold.
In this embodiment, it is considered that when user features are identified, one or more users may be facing the intelligent robot, that is, multiple users may be gazing at the intelligent robot; at this time, only one of the users may need to interact, or several users may have interaction requirements.
Therefore, in order to realize orderly interaction, the invention can determine the duration for which each user gazes at the intelligent robot; since a user who gazes at the intelligent robot for a longer time generally has a stronger will to interact, such a user can be identified preferentially.
The time threshold is a threshold for controlling when the intelligent robot responds to user interaction. The time threshold can be calibrated in advance by the factory program of the intelligent robot, or set by an administrator of the intelligent robot according to actual use requirements, which is not limited.
The time threshold is, for example, 15s.
In a specific implementation process, for example, the duration for which each user gazes at the intelligent robot within 15 s is detected, and the user with the stronger interaction will is then determined according to these durations.
S303: a target time length is determined from the plurality of time lengths.
Wherein the target time length is the maximum value of a plurality of time lengths.
It will be appreciated that users with longer gaze times generally have a stronger interaction desire in combination with the actual usage scenario.
Therefore, in this embodiment, the maximum duration can be determined from the multiple durations and used as the target duration, and the user features of the user to whom the target duration belongs are then determined. By controlling the intelligent robot to interact in combination with the actual use scene, the user with the stronger interaction will can be accurately determined, guaranteeing the interaction effect.
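A minimal sketch of this duration selection (S302 and S303) follows, assuming the gaze duration of each user has already been measured in seconds by the image processing module; the 15 s threshold follows the example above.

```python
TIME_THRESHOLD_S = 15.0  # example time threshold from the text

def pick_target_user(gaze_durations):
    """Return the user with the maximum gaze duration within the threshold.

    gaze_durations maps a user identifier to the measured gaze duration.
    """
    valid = {u: d for u, d in gaze_durations.items() if d <= TIME_THRESHOLD_S}
    return max(valid, key=valid.get)

print(pick_target_user({"user_a": 3.0, "user_b": 9.5, "user_c": 7.1}))  # user_b
```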
S304: and executing a preset interaction process to determine the user characteristics of the user to which the target duration belongs.
S305: interaction data corresponding to the user characteristic is determined.
S306: based on the corresponding interaction data, the interaction unit is adopted to interact with the user by combining with the display screen.
The description of S304-S306 can be found in detail in the above embodiments.
As another example, when there are multiple users, a pre-recorded voice is played to guide the target user to input voice data; the position information of the target user to whom the voice data belongs is determined; and the preset interaction process is executed based on the position information to determine the features of the target user. Specifically, when executing the preset interaction process based on the position information, the direction of the interaction unit may be adjusted according to the position information, and the preset interaction process is then executed with the direction-adjusted interaction unit to determine the features of the target user.
For example, if there are currently multiple users, the pre-recorded voice may be played: "Will the user who needs to interact please say a sentence." The built-in voice listening device can then be started to listen for the voice input by the user. After the voice input by the user is detected, a sound source localization technology in the related art can be used, in combination with the voice, to locate the user to whom the voice belongs and obtain position information. The position information can then be analyzed to obtain the direction of the user relative to the intelligent robot, and the direction of the interaction unit is adjusted to that relative direction, so that the direction-adjusted interaction unit executes the preset interaction process to determine the features of the target user.
Therefore, the embodiment of the invention realizes targeted recognition: when there are multiple users, the target user can be determined by sound source localization, and the direction of the interaction unit is then adjusted to the relative direction to determine the features of the target user. The user with the stronger interaction will can be accurately determined, and the rotation of the interaction unit makes it convenient for the target user to operate, thereby achieving a better interaction effect, improving the user experience, and improving the intelligent control effect of the intelligent robot.
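The sketch below illustrates the direction adjustment just described, assuming sound source localization has already produced the user's planar position relative to the robot; the coordinate convention and the rotation interface are assumptions, not part of the patent.

```python
import math

def bearing_to_user(user_x, user_y):
    """Bearing (degrees) of the located user, with the robot at the origin."""
    return math.degrees(math.atan2(user_y, user_x))

def adjust_interaction_unit(current_deg, user_x, user_y):
    # Rotate the interaction unit to face the localized sound source.
    target_deg = bearing_to_user(user_x, user_y)
    print(f"rotating interaction unit from {current_deg:.1f} to {target_deg:.1f} deg")
    return target_deg  # new orientation of the interaction unit

adjust_interaction_unit(0.0, 1.0, 1.0)  # rotates to 45.0 deg
```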
In the embodiment, when the number of users is multiple, the continuous time length of each user watching the intelligent robot is determined, the time length is smaller than or equal to the time threshold, the target time length is determined from the multiple time lengths, and the user characteristics of the user to which the target time length belongs are determined by executing the preset interaction process, so that the actual use of the scene to control the intelligent robot to interact is combined, the user with stronger interaction will can be accurately determined, and the interaction effect is ensured.
Fig. 4 is a flow chart of an interaction method according to another embodiment of the present invention.
Referring to fig. 4, the method includes:
S401: and acquiring an environment image by adopting a camera.
S402: when a user looking at the intelligent robot is presented in the environment image, a preset interaction process is performed to determine user characteristics.
The descriptions of S401 and S402 may be detailed in the above embodiments, and are not repeated here.
S403: it is determined whether there is a first feature that matches the user feature, the interaction data having an associated first feature.
In a specific implementation process, the intelligent robot may search a local storage module based on the user features; the local storage module may pre-store candidate interaction data and the first feature associated with each piece of interaction data, where the first feature is the user feature of the user to whom the pre-learned interaction data belongs. Alternatively, the intelligent robot may send a determination request to a server that pre-stores the candidate interaction data and the associated first features, so that the server determines, based on the request, whether a first feature matching the user features exists and feeds the result back to the intelligent robot.
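A sketch of the local lookup in S403 and S404 follows; the cosine-similarity comparison and the 0.9 match threshold are assumptions, since the text only requires deciding whether a matching first feature exists.

```python
import numpy as np

MATCH_THRESHOLD = 0.9  # assumed similarity threshold for "matching"

# Local storage module: (first feature, associated interaction data) pairs.
local_store = [
    (np.array([0.9, 0.1, 0.3]), {"greeting": "English voice"}),
    (np.array([0.2, 0.8, 0.5]), {"business_flow": "process A"}),
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_interaction_data(user_feature):
    """Return interaction data of the best-matching first feature, or None."""
    best_feature, best_data = max(local_store,
                                  key=lambda e: cosine(e[0], user_feature))
    if cosine(best_feature, user_feature) >= MATCH_THRESHOLD:
        return best_data  # S404: use the associated interaction data
    return None           # S406 onward: guide the user to input features

print(find_interaction_data(np.array([0.88, 0.12, 0.31])))
```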
S404: and if the matched first feature exists, taking the interaction data associated with the matched first feature as corresponding interaction data.
In the specific execution process, if a first feature matching the user features exists, it indicates that the intelligent robot has already learned the interaction data of the user currently gazing at it. At this time, the interaction data associated with the matched first feature can be read directly and used as the corresponding interaction data for interacting with the user, which guarantees good interaction efficiency; and since the learned interaction data is retrieved and used, the intelligent interaction effect is improved.
S405: based on the corresponding interaction data, the interaction unit is adopted to interact with the user by combining with the display screen.
The description of S405 may be detailed in the above embodiments, and will not be repeated here.
S406: if there is no matching first feature, the user is guided to input the user feature.
For example, the user may be guided in the form of voice interaction or text interaction. The pre-recorded voice may be played: "Please look at the camera!" Then, when it is detected that the user gazes at the camera of the intelligent robot, the pupil features of the user are collected as the user features. Alternatively, text may be displayed: "Please place your finger on the fingerprint identifier!" to guide the user to input fingerprint features as the user features.
S407: and generating interaction data corresponding to the user characteristics by combining with preset interaction rules.
The preset interaction rules are, for example: simulating some interaction scenes in advance (for example, a greeting scene) and guiding the user to input corresponding feedback information (for example, for the greeting language played by the intelligent robot, determining the language of the voice information fed back by the user). If the greeting language played by the intelligent robot is English but the user is determined to feed back voice information in Chinese, greeting in Chinese voice is taken as the corresponding interaction data, which is not limited.
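The sketch below illustrates this kind of preset interaction rule; detect_language() is a hypothetical stand-in for whatever language identification the robot applies to the feedback voice, and the character-range check on transcribed text is used purely for illustration.

```python
def detect_language(transcribed_reply):
    """Hypothetical language check on transcribed feedback text."""
    has_cjk = any("\u4e00" <= ch <= "\u9fff" for ch in transcribed_reply)
    return "zh" if has_cjk else "en"

def generate_interaction_data(transcribed_reply):
    # Record the language the user actually replied in as the greeting preference.
    return {"greeting_language": detect_language(transcribed_reply)}

print(generate_interaction_data("你好"))   # {'greeting_language': 'zh'}
print(generate_interaction_data("Hello"))  # {'greeting_language': 'en'}
```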
S408: associating the user features with the corresponding interaction data, and supplementing the existing interaction data and the associated first features according to the associated user features and the corresponding interaction data.
For example, when generating interaction data corresponding to the user features in combination with the preset interaction rules, some interaction scenes (for example, a greeting scene) may be simulated in advance, the user is guided to input corresponding feedback information (for example, voice information fed back by the user for the greeting mode played by the intelligent robot), and the feedback information is analyzed to obtain the interaction data. The interaction data is then associated with the user features that the user was guided to input, and the existing interaction data and the associated first features are supplemented accordingly.
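Continuing the local_store sketch from S403 above, S408 then amounts to appending the new pair of user feature and interaction data so that this user's next visit matches directly in S403; the in-memory representation is again an assumption.

```python
import numpy as np

def supplement_store(store, user_feature, interaction_data):
    """Associate the new user feature with its interaction data (S408).

    The user feature becomes a new 'first feature' for future S403 lookups.
    """
    store.append((user_feature, interaction_data))

store = []
supplement_store(store, np.array([0.4, 0.6, 0.2]), {"greeting_language": "zh"})
print(len(store))  # 1
```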
In this embodiment, when determining whether the first feature matched with the user feature exists, the interaction data associated with the matched first feature is directly read as the corresponding interaction data, and the corresponding interaction data is directly adopted to interact with the user, so that better interaction efficiency can be ensured, and because the intelligent robot has learned the interaction data of the user currently watching the intelligent robot, interaction is performed by adopting the retrieved interaction data, and intelligent interaction effect is improved. And when the first feature matched with the user feature does not exist, guiding the user to input the user feature, combining a preset interaction rule, generating interaction data corresponding to the user feature, associating the user feature with the corresponding interaction data, supplementing the existing interaction data and the associated first feature according to the associated user feature and the corresponding interaction data, dynamically supplementing and updating the interaction data learned by the intelligent robot so as to better meet the personalized interaction requirement of the user, and improving the intelligent interaction effect of the intelligent robot.
Fig. 5 is a schematic structural diagram of an interaction device according to an embodiment of the present invention.
The interaction device is arranged inside the intelligent robot.
The intelligent robot is provided with an interaction unit and a display unit, the interaction unit and the display unit are arranged in a separated mode, data communication is conducted between the interaction unit and the display unit, the interaction unit comprises a camera, and the display unit comprises a display screen.
Referring to fig. 5, the interaction device 500 includes:
An acquisition module 501 for acquiring an environmental image via a camera;
An execution module 502, configured to execute a preset interaction procedure to determine a user characteristic when a user looking at the intelligent robot is presented in the environment image;
a first determining module 503, configured to determine interaction data corresponding to the user feature;
The interaction module 504 is configured to interact with a user via the interaction unit in combination with the display screen based on the corresponding interaction data.
Optionally, in some embodiments, referring to fig. 6, the executing module 502 includes:
A voice feature submodule 5021, configured to execute a voice interaction process to determine user voice features as the user features; or
The physical feature submodule 5022 is configured to perform a preset interaction procedure to identify physical features of a user and serve as the user features, where the physical features include: facial features and/or fingerprint features.
Optionally, in some embodiments, the interaction data has an associated first feature, the interaction device 500 further comprising:
The second determining module 505 is configured to determine whether there is a first feature matching the user feature, and when there is a matching first feature, take interaction data associated with the matching first feature as corresponding interaction data.
A guiding module 506, configured to guide the user to input the user feature when there is no matching first feature;
the generation module 507 is configured to generate interaction data corresponding to the user features in combination with preset interaction rules;
And the association module 508 is configured to associate the user feature with the corresponding interaction data, and supplement the existing interaction data and the associated first feature according to the associated user feature and the corresponding interaction data.
Optionally, in some embodiments, the execution module 502 is further configured to:
when the number of the users is multiple, determining the duration that each user looks at the intelligent robot, wherein the duration is smaller than or equal to a time threshold, determining a target duration from the multiple durations, and executing a preset interaction process to determine the user characteristics of the user to which the target duration belongs.
Optionally, in some embodiments, the execution module 502 is further configured to:
When the number of the users is multiple, playing pre-recorded voice which is used for guiding the target user to input voice data;
determining the position information of a target user to which the voice data belong;
Based on the location information, a preset interaction procedure is performed to determine characteristics of the target user.
Optionally, in some embodiments, the execution module 502 is further configured to:
according to the position information, adjusting the direction of the interaction unit;
And executing a preset interaction process by adopting the interaction unit with the direction adjusted so as to determine the characteristics of the target user.
It should be noted that the explanation of the embodiment of the interaction method in the foregoing embodiments of fig. 1 to fig. 4 is also applicable to the interaction device 500 of this embodiment, and the implementation principle is similar, and will not be repeated here.
In the embodiment, when the user looking at the intelligent robot is presented in the environment image, a preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics is determined, interaction is performed with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Referring to fig. 7, an electronic device 700 may include one or more of the following components: a processor 701, a memory 702, a power circuit 703, a multimedia component 704, an audio component 705, an input/output (I/O) interface 706, a sensor component 707, and a communication component 708.
The power supply circuit 703 is used to supply power to the respective circuits or devices of the electronic device; the memory 702 is used to store executable program code; and the processor 701 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 702, so as to perform the following steps:
Collecting an environment image;
When a user gazing at the intelligent robot is presented in the environment image, executing a preset interaction process to determine user characteristics;
determining interaction data corresponding to the user features;
And interacting with the user based on the corresponding interaction data.
It should be noted that the explanation of the embodiment of the interaction method in the embodiments of fig. 1 to fig. 4 is also applicable to the electronic device 700 of this embodiment, and the implementation principle is similar, and will not be repeated here.
In the embodiment, when the user looking at the intelligent robot is presented in the environment image, a preset interaction process is executed to determine the user characteristics, the interaction data corresponding to the user characteristics is determined, interaction is performed with the user based on the corresponding interaction data, and the interaction and display functions are borne by different screens, so that the interaction and display can be performed synchronously, the interaction efficiency is improved, targeted interaction according to the interaction habits of different users is realized, the personalized interaction requirement of the user is met, and the intelligent interaction effect of the intelligent robot is improved.
In order to implement the above embodiments, the present invention also proposes a computer-readable storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the interaction method proposed by the above embodiments.
In order to implement the above embodiment, the present invention further provides an intelligent robot, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the interaction method as set forth in the above embodiment of the present invention when executing the program.
It should be noted that in the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented as software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.

Claims (9)

1. An interaction method, characterized in that the method is executed by an intelligent robot, the intelligent robot is provided with an interaction unit and a display unit, the screen of the interaction unit and the screen of the display unit are separately arranged, the screen size of the interaction unit is smaller than the screen size of the display unit, the screen of the interaction unit is a touch screen, data communication is performed between the interaction unit and the display unit, the interaction unit comprises a camera, and the screen of the display unit is a display screen; the method comprises the following steps:
acquiring an environment image using the camera, and transmitting the environment image to an image processing module;
when the image processing module recognizes that a person image exists in the environment image, detecting the eye gaze angle of the recognized person image, and comparing the detected target angle with a preset angle threshold; when the target angle is within the range of the angle threshold, determining that a user gazing at the intelligent robot is present in the environment image; when the user gazing at the intelligent robot is present in the environment image, playing a pre-recorded inquiry voice and starting a voice monitoring device of the intelligent robot, monitoring a reply voice with the voice monitoring device using a sound source localization technology, locating the user to obtain position information of the user, analyzing the direction of the position information relative to the intelligent robot, adjusting the interaction unit according to the relative direction, and executing a voice interaction process to determine voice features of the user as the user features, or executing a preset interaction process to recognize body features of the user as the user features; wherein, when there are a plurality of users gazing at the intelligent robot, the duration for which each user gazes at the intelligent robot is determined, and the user whose duration is the maximum value, or whose duration reaches a preset duration threshold, is taken as the user whose user features are determined;
determining interaction data corresponding to the user features, wherein the interaction data comprises usage-habit information describing how the user uses the relevant control functions of the intelligent robot;
and based on the corresponding interaction data, interacting with the user using the interaction unit in combination with the display screen.
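For illustration only, and not as part of the claim, the gaze-angle check and the multi-user target selection recited in claim 1 could be sketched as follows; the angle and duration values are hypothetical, since the claim leaves both thresholds as presets, and the selection policy shown is one plausible reading of the maximum-duration-or-duration-threshold condition:

from dataclasses import dataclass

ANGLE_THRESHOLD_DEG = 10.0    # hypothetical preset angle threshold
DURATION_THRESHOLD_S = 2.0    # hypothetical preset duration threshold

@dataclass
class GazeObservation:
    user_id: str
    gaze_angle_deg: float     # detected target angle of the eye gaze
    gaze_duration_s: float    # how long this user has gazed at the robot

def is_gazing_at_robot(obs: GazeObservation) -> bool:
    # Claim-1 style check: the target angle lies within the angle threshold.
    return abs(obs.gaze_angle_deg) <= ANGLE_THRESHOLD_DEG

def select_target_user(observations):
    # Among users gazing at the robot, take the one with the maximum gaze
    # duration, provided that duration also reaches the preset threshold.
    gazing = [o for o in observations if is_gazing_at_robot(o)]
    if not gazing:
        return None
    best = max(gazing, key=lambda o: o.gaze_duration_s)
    return best if best.gaze_duration_s >= DURATION_THRESHOLD_S else None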
2. The interaction method of claim 1, wherein the user features further comprise: facial features.
3. The interaction method of claim 2, wherein the interaction data has an associated first feature, and determining the interaction data corresponding to the user features comprises:
determining whether a first feature matching the user features exists;
if a matching first feature exists, taking the interaction data associated with the matching first feature as the corresponding interaction data;
if no matching first feature exists, guiding the user to input user features;
generating interaction data corresponding to the user features in combination with preset interaction rules;
and associating the user features with the corresponding interaction data, and supplementing the existing interaction data and the associated first features according to the associated user features and corresponding interaction data.
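For illustration only, the look-up-then-enroll logic of claim 3 might be sketched as follows; the cosine-similarity matcher and the MATCH_THRESHOLD value are assumptions, since the patent does not specify how a stored first feature is matched against a user feature, and rules.generate is a placeholder for the preset interaction rules:

import math

MATCH_THRESHOLD = 0.8   # hypothetical similarity cutoff; not given in the patent

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def get_interaction_data(user_feature, profiles, rules):
    # Step 1: look for a stored first feature matching the user feature.
    for first_feature, data in profiles.items():
        if cosine_similarity(user_feature, first_feature) >= MATCH_THRESHOLD:
            return data
    # Step 2: no match -- after guiding the user to input features, generate
    # interaction data from the preset interaction rules and persist the new
    # association so later visits hit the fast path above.
    data = rules.generate(user_feature)
    profiles[tuple(user_feature)] = data
    return data

The persisting step at the end corresponds to the "supplementing the existing interaction data and the associated first features" limitation: a user enrolled once is matched directly on later visits.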
4. An interaction device, characterized in that the device is arranged in an intelligent robot, the intelligent robot is provided with an interaction unit and a display unit, the screen of the interaction unit and the screen of the display unit are separately arranged, the screen size of the interaction unit is smaller than the screen size of the display unit, the screen of the interaction unit is a touch screen, data communication is performed between the interaction unit and the display unit, the interaction unit comprises a camera, and the screen of the display unit is a display screen; the device comprises:
an acquisition module, configured to acquire an environment image through the camera and transmit the environment image to an image processing module;
an execution module, configured to: when the image processing module recognizes that a person image exists in the environment image, detect the eye gaze angle of the recognized person image and compare the detected target angle with a preset angle threshold; when the target angle is within the range of the angle threshold, determine that a user gazing at the intelligent robot is present in the environment image; when the user gazing at the intelligent robot is present in the environment image, play a pre-recorded inquiry voice and start a voice monitoring device of the intelligent robot, monitor a reply voice with the voice monitoring device using a sound source localization technology, locate the user to obtain position information of the user, analyze the direction of the position information relative to the intelligent robot, adjust the interaction unit according to the relative direction, and execute a voice interaction process to determine voice features of the user as the user features, or execute a preset interaction process to recognize body features of the user as the user features; wherein, when there are a plurality of users gazing at the intelligent robot, the duration for which each user gazes at the intelligent robot is determined, and the user whose duration is the maximum value, or whose duration reaches a preset duration threshold, is taken as the user whose user features are determined;
a first determining module, configured to determine interaction data corresponding to the user features, wherein the interaction data comprises usage-habit information describing how the user uses the relevant control functions of the intelligent robot;
and an interaction module, configured to interact with the user through the interaction unit and the display screen based on the corresponding interaction data.
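Purely to visualize the module decomposition of claim 4, the following sketch wires hypothetical Python classes together in the acquire, execute, determine, interact order; none of the class or method names comes from the patent:

class AcquisitionModule:
    # Captures environment images and hands them to the image processor.
    def __init__(self, camera, image_processor):
        self.camera = camera
        self.image_processor = image_processor

    def acquire(self):
        frame = self.camera.capture()
        self.image_processor.submit(frame)
        return frame

class InteractionDevice:
    # Wires the four modules of claim 4 together in order.
    def __init__(self, acquisition, execution, determining, interaction):
        self.acquisition = acquisition    # environment image capture
        self.execution = execution        # gaze check, positioning, feature capture
        self.determining = determining    # maps user features to interaction data
        self.interaction = interaction    # drives the touch screen and display screen

    def step(self):
        frame = self.acquisition.acquire()
        features = self.execution.process(frame)
        if features is None:              # nobody is gazing at the robot
            return
        data = self.determining.lookup(features)
        self.interaction.run(data)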
5. The interaction device of claim 4, wherein the user features further comprise: fingerprint features.
6. The interaction device of claim 5, wherein the interaction data has an associated first feature, the device further comprising:
a second determining module, configured to determine whether a first feature matching the user features exists, and when a matching first feature exists, take the interaction data associated with the matching first feature as the corresponding interaction data;
a guiding module, configured to guide the user to input user features when no matching first feature exists;
a generating module, configured to generate interaction data corresponding to the user features in combination with preset interaction rules;
and an associating module, configured to associate the user features with the corresponding interaction data, and supplement the existing interaction data and the associated first features according to the associated user features and corresponding interaction data.
7. An electronic device, comprising a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside a space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic device; the memory is used for storing executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the interaction method of any one of claims 1-3.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the interaction method of any one of claims 1-3.
9. An intelligent robot comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the interaction method of any one of claims 1-3 when executing the program.
CN201910578601.3A 2019-06-28 2019-06-28 Interaction method, device, intelligent robot, electronic equipment and storage medium Active CN110286771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578601.3A CN110286771B (en) 2019-06-28 2019-06-28 Interaction method, device, intelligent robot, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578601.3A CN110286771B (en) 2019-06-28 2019-06-28 Interaction method, device, intelligent robot, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110286771A CN110286771A (en) 2019-09-27
CN110286771B true CN110286771B (en) 2024-06-07

Family

ID=68020163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578601.3A Active CN110286771B (en) 2019-06-28 2019-06-28 Interaction method, device, intelligent robot, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110286771B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991249A (en) * 2019-11-04 2020-04-10 支付宝(杭州)信息技术有限公司 Face detection method, face detection device, electronic equipment and medium
WO2021217572A1 (en) * 2020-04-30 2021-11-04 华为技术有限公司 In-vehicle user positioning method, on-board interaction method, on-board device, and vehicle
CN111710046A (en) * 2020-06-05 2020-09-25 北京有竹居网络技术有限公司 Interaction method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010182287A (en) * 2008-07-17 2010-08-19 Steven C Kays Intelligent adaptive design
CN102610035A (en) * 2012-04-05 2012-07-25 广州广电运通金融电子股份有限公司 Financial self-service device and anti-peeping system and anti-peeping method thereof
CN102609089A (en) * 2011-01-13 2012-07-25 微软公司 Multi-state model for robot and user interaction
CN108068121A (en) * 2017-12-22 2018-05-25 达闼科技(北京)有限公司 A kind of man-machine interaction control method, device and robot
WO2018156912A1 (en) * 2017-02-27 2018-08-30 Tobii Ab System for gaze interaction
CN108733208A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 The I-goal of smart machine determines method and apparatus
EP3399427A1 (en) * 2017-05-03 2018-11-07 Karlsruher Institut für Technologie Method and system for the quantitative measurement of mental stress of an individual user
CN109015593A (en) * 2018-09-21 2018-12-18 中新智擎科技有限公司 A kind of advertisement robot and its advertisement placement method
CN109144262A (en) * 2018-08-28 2019-01-04 广东工业大学 A kind of man-machine interaction method based on eye movement, device, equipment and storage medium
CN109840804A (en) * 2019-01-21 2019-06-04 深圳市丰巢科技有限公司 Third party's information displaying method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247208A1 (en) * 2013-03-01 2014-09-04 Tobii Technology Ab Invoking and waking a computing device from stand-by mode based on gaze detection
US10063560B2 (en) * 2016-04-29 2018-08-28 Microsoft Technology Licensing, Llc Gaze-based authentication

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010182287A (en) * 2008-07-17 2010-08-19 Steven C Kays Intelligent adaptive design
CN102609089A (en) * 2011-01-13 2012-07-25 微软公司 Multi-state model for robot and user interaction
CN102610035A (en) * 2012-04-05 2012-07-25 广州广电运通金融电子股份有限公司 Financial self-service device and anti-peeping system and anti-peeping method thereof
WO2018156912A1 (en) * 2017-02-27 2018-08-30 Tobii Ab System for gaze interaction
EP3399427A1 (en) * 2017-05-03 2018-11-07 Karlsruher Institut für Technologie Method and system for the quantitative measurement of mental stress of an individual user
CN108068121A (en) * 2017-12-22 2018-05-25 达闼科技(北京)有限公司 A kind of man-machine interaction control method, device and robot
CN108733208A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 The I-goal of smart machine determines method and apparatus
CN109144262A (en) * 2018-08-28 2019-01-04 广东工业大学 A kind of man-machine interaction method based on eye movement, device, equipment and storage medium
CN109015593A (en) * 2018-09-21 2018-12-18 中新智擎科技有限公司 A kind of advertisement robot and its advertisement placement method
CN109840804A (en) * 2019-01-21 2019-06-04 深圳市丰巢科技有限公司 Third party's information displaying method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
New-Generation Human-Computer Interaction: The Status, Types, and Educational Applications of Natural User Interfaces, with a Preliminary Outlook on Brain-Computer Interface Technology; Xu Zhenguo et al.; Journal of Distance Education (远程教育杂志); 2018-07-17 (Issue 04); full text *

Also Published As

Publication number Publication date
CN110286771A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN111492328B (en) Non-verbal engagement of virtual assistants
CN108052079B (en) Device control method, device control apparatus, and storage medium
WO2021036644A1 (en) Voice-driven animation method and apparatus based on artificial intelligence
US20190019508A1 (en) System and method for voice command context
CN104049721B (en) Information processing method and electronic equipment
CN108363706B (en) Method and device for man-machine dialogue interaction
EP3627290A1 (en) Device-facing human-computer interaction method and system
US11854550B2 (en) Determining input for speech processing engine
CN110286771B (en) Interaction method, device, intelligent robot, electronic equipment and storage medium
CN110519636B (en) Voice information playing method and device, computer equipment and storage medium
CN111683263B (en) Live broadcast guiding method, device, equipment and computer readable storage medium
CN112236739B (en) Adaptive automatic assistant based on detected mouth movement and/or gaze
CN109120790B (en) Call control method and device, storage medium and wearable device
CN109637518A (en) Virtual newscaster's implementation method and device
CN105122353A (en) Natural human-computer interaction for virtual personal assistant systems
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
CN105204642A (en) Adjustment method and device of virtual-reality interactive image
CN108965981B (en) Video playing method and device, storage medium and electronic equipment
KR101480668B1 (en) Mobile Terminal Having Emotion Recognition Application using Voice and Method for Controlling thereof
WO2021232875A1 (en) Method and apparatus for driving digital person, and electronic device
CN111063024A (en) Three-dimensional virtual human driving method and device, electronic equipment and storage medium
KR20190113252A (en) Method for eye-tracking and terminal for executing the same
CN107452381B (en) Multimedia voice recognition device and method
CN108388399B (en) Virtual idol state management method and system
US11759387B2 (en) Voice-based control of sexual stimulation devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant