CN112732074A - Robot interaction method - Google Patents

Robot interaction method

Info

Publication number
CN112732074A
Authority
CN
China
Prior art keywords
user
robot
interaction
intention
acquisition mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011593728.1A
Other languages
Chinese (zh)
Inventor
刘笑彤
李小山
陈伯行
黄海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xintiandi Technology Co ltd
Original Assignee
Zhuhai Xintiandi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xintiandi Technology Co ltd filed Critical Zhuhai Xintiandi Technology Co ltd
Priority application: CN202011593728.1A
Publication: CN112732074A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Abstract

The invention provides a robot interaction method comprising the following steps: S1, receiving a service request triggered by a user, wherein the service request comprises the interactive service identifier selected by the user; S2, identifying the user's interaction intention and user permission through a recognition module arranged on the robot; S3, activating the information acquisition mode corresponding to the selected interactive service identifier according to the results of the intention and permission recognition; S4, acquiring the user's interaction information through the activated acquisition mode; and S5, processing the interaction information to complete the service request. Because a face recognition device is arranged on the robot, the robot can actively judge whether a user intends to interact with it before the interaction begins, which improves the robot's degree of intelligence in interacting with people and improves interaction efficiency.

Description

Robot interaction method
Technical Field
The invention relates to the technical field of robots, in particular to a robot interaction method.
Background
With the development of robot technology, more and more robots are entering people's lives to replace or assist human work, such as sweeping robots, greeting robots, and companion robots.
In the prior art, interaction between a user and a robot must begin with the user actively issuing an instruction, and only then does the robot act. For example, a user may initiate interaction by pressing a physical button on the robot's exterior or by touching the robot's screen display interface. Because the user must actively issue instructions, the robot remains passive during human-computer interaction, and both its degree of intelligence and the interaction efficiency are low.
Disclosure of Invention
To solve the above technical problems, the present invention provides a robot interaction method that improves the robot's intelligence in interacting with people, thereby improving interaction efficiency.
The invention adopts the following specific embodiment: a robot interaction method, comprising: S1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;
s2, identifying the user interaction intention and the user authority through an identification module arranged on the robot;
s3, according to the interactive intention identification and user authority identification result, activating the information acquisition mode corresponding to the interactive service identification selected by the user;
s4, acquiring the interaction information of the user through the activated information acquisition mode;
s5, processing the interactive information to complete the service request.
Preferably, the recognition module includes password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the shooting acquisition mode.
Preferably, the recognition module is a face recognition module corresponding to the shooting acquisition mode;
correspondingly,
determining the orientation of the user and the photographed area of the user's face according to facial feature points;
judging whether the user has the intention to interact with the robot according to the user's orientation and the photographed face area;
if the interaction intention recognition result is yes, carrying out user permission recognition;
and if the user permission recognition result is passed, the robot interacts with the user.
The method of robot interaction as described above preferably further comprises: if the interaction intention recognition result is negative, prohibiting the robot from interacting with the user;
alternatively,
if the user permission recognition result is failed, prohibiting the robot from interacting with the user.
In the robot interaction method described above, preferably, determining whether the user has the intention to interact with the robot according to the user's orientation and the photographed area of the user's face
comprises the following steps:
if the user faces the robot and the area of the face of the user is larger than or equal to an area threshold value, judging that the user has the intention of interacting with the robot;
if the user is not facing the robot or the area of the face of the user is smaller than the area threshold, determining that the user does not have the intention of interacting with the robot.
The method of robot interaction as described above, preferably, the method further comprises:
monitoring the orientation of the user in real time during the interaction of the robot and the user;
and if the monitored duration that the user does not face the robot is longer than the specified duration, controlling the robot to stop interacting with the user.
The robot interaction method as described above, preferably, the robot interacts with the user by voice.
The method for robot interaction as described above, preferably, controlling the robot to actively interact with the user, includes:
controlling the robot to output voice guidance information to introduce functions of the robot to the user; and/or
controlling the robot to output an interactive page to the user so that the user can interact with the robot.
Beneficial technical effects: a face recognition device is mounted on the robot, and whether a user has the intention to interact with the robot is judged from the photographed facial image of the user within a specified range. The robot can therefore actively determine whether the user wants to interact before the interaction begins, and is controlled to actively interact with the user when the judgment result is yes. The method provided by this embodiment improves the robot's intelligence in interacting with people and thereby improves interaction efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Wherein:
FIG. 1 is a flow diagram of the robot interaction method provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, terms such as "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", and "bottom" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience of description and do not require the invention to be constructed or operated in a specific orientation, so they should not be construed as limiting. The term "connected" should be interpreted broadly: it may mean a fixed or a detachable connection, and a connection may be direct or indirect through intermediate members. The specific meanings of these terms will be understood by those skilled in the art as appropriate.
A method of robotic interaction, comprising: s1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;
s2, identifying the user interaction intention and the user authority through an identification module arranged on the robot;
s3, according to the interactive intention identification and user authority identification result, activating the information acquisition mode corresponding to the interactive service identification selected by the user;
s4, acquiring the interaction information of the user through the activated information acquisition mode;
s5, processing the interactive information to complete the service request.
Based on the interactive functions provided by the robot, the user can trigger a corresponding service request by selecting an interactive service. The robot then activates the information acquisition mode corresponding to the selected interactive service identifier according to a preset correspondence between interactive service identifiers and information acquisition modes, acquires the user's interaction information through the activated mode, and processes the acquired information to complete the service request. In this way the user's interaction needs are met through the robot, and the ways in which the robot can be used are expanded.
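As a non-limiting illustration of this S1-S5 flow (not part of the original disclosure), the dispatch logic could be sketched as follows; the service identifiers, function names, and return values are hypothetical:

```python
# Illustrative sketch only: service identifiers, function names and
# return values are assumptions, not taken from the disclosure itself.
ACQUISITION_MODES = {
    "payment": "keyboard",   # hypothetical service -> mode mapping
    "inquiry": "voice",
    "check_in": "shooting",
}

def handle_service_request(service_id, intention_ok, permission_ok):
    """S1: a user-triggered request arrives carrying service_id."""
    # S2: both interaction intention and user permission must pass
    if not (intention_ok and permission_ok):
        return None
    # S3: activate the acquisition mode mapped to the selected service
    mode = ACQUISITION_MODES.get(service_id)
    if mode is None:
        return None
    # S4/S5: acquiring and processing are hardware-dependent; a
    # placeholder result stands in for the completed service request
    return f"acquired via {mode}; request '{service_id}' completed"

print(handle_service_request("inquiry", True, True))
```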
The invention also has the following embodiment: the activated information acquisition mode corresponding to the interactive service identifier comprises at least one of a keyboard acquisition mode, a voice acquisition mode, and a shooting acquisition mode.
Various acquisition modes can be set in the robot in advance, and the interaction intention can be triggered, for example, by calling the robot's name or by issuing a preset instruction. The service functions the robot can provide are displayed to the user via a screen arranged on the robot, with each service function shown as an icon. To prevent accidental operation, interaction intention recognition is set as a confirmation step.
The invention also has the following embodiment: the recognition module comprises password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the shooting acquisition mode.
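The stated pairing of recognition modules to acquisition modes can be summarised as a simple lookup table; a minimal sketch, with illustrative names:

```python
# The pairing above as a lookup table; keys and values are
# illustrative names, not identifiers from the disclosure.
RECOGNITION_FOR_MODE = {
    "keyboard": "password_recognition",
    "voice": "voice_recognition",
    "shooting": "face_recognition",
}
```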
The present invention also has an embodiment, as in the robot interaction method above, in which preferably the recognition module is a face recognition module corresponding to the shooting acquisition mode;
correspondingly,
determining the orientation of the user and the photographed area of the user's face according to facial feature points;
judging whether the user has the intention to interact with the robot according to the user's orientation and the photographed face area;
if the interaction intention recognition result is yes, carrying out user permission recognition;
and if the user permission recognition result is passed, the robot interacts with the user.
The method preferably further comprises: if the interaction intention recognition result is negative, prohibiting the robot from interacting with the user;
alternatively,
if the user permission recognition result is failed, prohibiting the robot from interacting with the user.
One or more cameras may be mounted on the head of the robot.
When one camera is installed, it can be controlled to rotate 360 degrees so as to capture images within the robot's specified range. When multiple cameras are installed, their lenses may be oriented in different directions to capture images in different directions.
The robot's specified range may refer to a spherical region centred on the robot with a radius equal to a specified distance. Users within this range are more likely to intend to interact with the robot; an intention to interact means that the user has not yet interacted with the robot but has the idea of doing so.
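A minimal sketch of this range test, assuming Cartesian (x, y, z) positions in metres and an illustrative radius:

```python
import math

# Sketch of the range test: a sphere of assumed radius centred on the
# robot; positions are hypothetical (x, y, z) coordinates in metres.
def in_specified_range(user_pos, robot_pos, radius_m=2.0):
    return math.dist(user_pos, robot_pos) <= radius_m

print(in_specified_range((1.0, 0.5, 0.0), (0.0, 0.0, 0.0)))  # True
```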
The present invention also has an embodiment in which whether the user has the intention to interact with the robot is determined according to the user's orientation and the photographed area of the user's face;
the determination comprises the following steps:
if the user faces the robot and the area of the face of the user is larger than or equal to an area threshold value, judging that the user has the intention of interacting with the robot;
if the user is not facing the robot or the area of the face of the user is smaller than the area threshold, determining that the user does not have the intention of interacting with the robot.
The user's facial image is further analyzed to determine whether the user intends to interact with the robot. The photographed facial image is the user's face as seen from the robot's perspective. Generally, a user's face when he or she wants to interact with the robot differs from when he or she does not, so whether the user intends to interact can be determined from the facial image.
If the judgment result is yes, that is, the user has the intention to interact with the robot, the robot is controlled to actively interact with the user before the user initiates interaction, for example by actively travelling in the direction of the user.
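A minimal sketch of the intention rule and the subsequent permission gate, assuming an illustrative face-area threshold:

```python
AREA_THRESHOLD = 5000  # assumed face-area threshold, in pixels

def has_interaction_intention(facing_robot, face_area):
    # facing the robot AND face area >= threshold -> intention present
    return facing_robot and face_area >= AREA_THRESHOLD

def may_interact(facing_robot, face_area, permission_ok):
    # no intention, or failed permission recognition -> no interaction
    return has_interaction_intention(facing_robot, face_area) and permission_ok
```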
The present invention also has embodiments wherein the method further comprises:
monitoring the orientation of the user in real time during the interaction of the robot and the user;
and if the monitored duration that the user does not face the robot is longer than the specified duration, controlling the robot to stop interacting with the user.
The method provided by the embodiment can improve the intelligence degree of the robot in the aspect of interaction with people so as to improve the interaction efficiency.
The invention also has an embodiment in which the robot interacts with the user by voice.
The invention also has the following embodiment: controlling the robot to actively interact with the user comprises: controlling the robot to output voice guidance information to introduce the robot's functions to the user;
and/or
controlling the robot to output an interactive page to the user so that the user can interact with the robot.
When the user triggers a service request, the robot determines the information acquisition mode corresponding to the selected interactive service according to the service identifier and the preset correspondence, and activates that mode. Activation means enabling the corresponding information acquisition mode; generally, this can be achieved by switching on and enabling the corresponding information acquisition device.
In some embodiments, the present application further has a step S6 of actively interacting with the user based on the robot's self-monitoring data.
For example, the robot's battery level is monitored by a battery monitoring unit. When the level falls below a preset value, the robot actively searches for a user; if no user can be found within the robot's field of view, it charges autonomously.
If a user is found within the field of view, or the robot is already interacting with a user, the robot sends a charging request and charges autonomously after the user agrees.
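A minimal sketch of this battery-driven behaviour, assuming an illustrative threshold and hypothetical callback names:

```python
LOW_BATTERY = 0.20  # assumed preset level (20%)

def on_battery_level(level, user_in_view, interacting, ask_consent):
    if level >= LOW_BATTERY:
        return "continue"
    if not user_in_view and not interacting:
        return "autonomous_charging"     # nobody to ask: charge directly
    # a user is visible or mid-interaction: request permission first
    return "autonomous_charging" if ask_consent() else "continue"
```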
In some embodiments, the method further includes a step S7 in which the robot's display module displays a two-dimensional code. After the user scans the code with a mobile terminal (e.g., a mobile phone or tablet computer), the user can communicate with the robot through a dedicated APP, through which the robot can transmit the content or records of the interaction to the user. For example, if the user asked the robot for a restaurant's location, then once communication is established via the two-dimensional code, the restaurant location and navigation directions fed back by the robot are downloaded to the user's mobile phone.
Furthermore, the robot can estimate the user's age from the facial image and provide different interaction modes accordingly, for example by playing feedback in different voices. If the user is judged to be 0-10 years old, the robot's feedback can be played in a child's voice; if the user is judged to be over 60, a more mature feedback voice can be used. Alternatively, voice packs of celebrities popular with different age groups can be pre-stored in the robot, and the pack matching the age estimated from the facial image is applied automatically when playing feedback.
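A minimal sketch of this age-adaptive selection, with assumed age brackets and voice-pack names:

```python
def select_voice_pack(estimated_age):
    if estimated_age <= 10:
        return "child_voice"
    if estimated_age >= 60:
        return "mature_voice"
    # otherwise a pre-stored pack of a celebrity popular with this group
    return "celebrity_pack_for_age_group"
```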
In addition, when face images of multiple users are detected in one picture, the robot can automatically give priority to the user whose face image has the larger area (meaning that user is closer), interact with that user, and feed back that user's interactive content. When the face areas are similar, or the larger one cannot be determined, the robot can select the user to interact with according to the ages estimated from the face images, for example giving priority to the older user; alternatively, it can display both users' portrait photos on the display module, let a user tap to designate the interacting portrait, and then track that user's head position.
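A minimal sketch of this selection policy, assuming an illustrative tie tolerance and a hypothetical face-record layout:

```python
def pick_user(faces, tie_tolerance=0.1):
    """faces: list of dicts such as {"id": 1, "area": 6200, "age": 34}."""
    ranked = sorted(faces, key=lambda f: f["area"], reverse=True)
    if len(ranked) == 1:
        return ranked[0]
    top, second = ranked[0], ranked[1]
    if top["area"] - second["area"] > tie_tolerance * top["area"]:
        return top                 # clearly larger face, i.e. nearer user
    # comparable areas: fall back to estimated age (older user first)
    return max(ranked, key=lambda f: f["age"])
```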
It should be noted that the embodiments in this specification are described in a progressive manner: identical or similar parts may be referred to across embodiments, and each embodiment focuses on its differences from the others. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution. A person of ordinary skill in the art can understand and implement this without inventive effort.
The above embodiments are only used for illustrating the embodiments of the present application, and not for limiting the embodiments of the present application, and those skilled in the relevant art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also belong to the scope of the embodiments of the present application, and the scope of the embodiments of the present application should be defined by the claims.

Claims (9)

1. A method of robotic interaction, comprising:
s1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;
s2, identifying the user interaction intention and the user authority through an identification module arranged on the robot;
s3, according to the interactive intention identification and user authority identification result, activating the information acquisition mode corresponding to the interactive service identification selected by the user;
s4, acquiring the interaction information of the user through the activated information acquisition mode;
s5, processing the interactive information to complete the service request.
2. The method of robot interaction of claim 1, wherein the activated information acquisition mode corresponding to the interactive service identifier comprises at least one of a keyboard acquisition mode, a voice acquisition mode, and a shooting acquisition mode.
3. The method of robot interaction of claim 2, wherein the recognition module comprises password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the capture acquisition mode.
4. The method of robot interaction of claim 3, wherein the recognition module is a face recognition module corresponding to the shooting acquisition mode;
correspondingly,
determining the orientation of the user and the photographed area of the user's face according to facial feature points;
judging whether the user has the intention to interact with the robot according to the user's orientation and the photographed face area;
if the interaction intention recognition result is yes, carrying out user permission recognition;
and if the user permission recognition result is passed, the robot interacts with the user.
5. The method of robotic interaction of claim 4, further comprising: if the interaction intention recognition result is negative, prohibiting the robot from interacting with the user;
or, if the user permission recognition result is failed, prohibiting the robot from interacting with the user.
6. The method of robot interaction of claim 4, wherein determining whether the user has the intention to interact with the robot according to the user's orientation and photographed face area
comprises the following steps:
if the user faces the robot and the area of the face of the user is larger than or equal to an area threshold value, judging that the user has the intention of interacting with the robot;
if the user is not facing the robot or the area of the face of the user is smaller than the area threshold, determining that the user does not have the intention of interacting with the robot.
7. The method of robotic interaction of claim 6, further comprising:
monitoring the orientation of the user in real time during the interaction of the robot and the user;
and if the monitored duration that the user does not face the robot is longer than the specified duration, controlling the robot to stop interacting with the user.
8. A method for robotic interaction as claimed in any one of claims 1 to 7, wherein the robot interacts with the user by voice.
9. The method of robotic interaction of claim 8, wherein controlling the robot to actively interact with the user comprises:
controlling the robot to output voice guidance information to introduce functions of the robot to the user; and/or
controlling the robot to output an interactive page to the user so that the user can interact with the robot.
CN202011593728.1A 2020-12-29 2020-12-29 Robot interaction method Pending CN112732074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011593728.1A CN112732074A (en) 2020-12-29 2020-12-29 Robot interaction method


Publications (1)

Publication Number Publication Date
CN112732074A 2021-04-30

Family

ID=75607516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011593728.1A Pending CN112732074A (en) 2020-12-29 2020-12-29 Robot interaction method

Country Status (1)

Country Link
CN (1) CN112732074A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441513A (en) * 2008-11-26 2009-05-27 北京科技大学 System for performing non-contact type human-machine interaction by vision
CN105959320A (en) * 2016-07-13 2016-09-21 上海木爷机器人技术有限公司 Interaction method and system based on robot
KR20180046649A (en) * 2016-10-28 2018-05-09 한국과학기술연구원 User intention detection system for initiation of interaction based on multi-modal perception and a method using the same
CN107450729A (en) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 Robot interactive method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination