CN114092791A - Scene perception-based man-machine cooperation method and system and robot - Google Patents

Scene perception-based man-machine cooperation method and system and robot

Info

Publication number
CN114092791A
Authority
CN
China
Prior art keywords: human, scene perception, subtask, scene, calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111382548.3A
Other languages
Chinese (zh)
Inventor
冯志全
张鑫
杨晓晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-11-19
Filing date: 2021-11-19
Publication date: 2022-02-25
Application filed by University of Jinan
Priority to CN202111382548.3A
Publication of CN114092791A
Legal status: Pending

Classifications

    • G06F18/256 Fusion techniques of classification results relating to different input data, e.g. multimodal recognition
    • G06T7/70 Determining position or orientation of objects or cameras
    • G10L15/1822 Parsing for meaning understanding
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

The invention provides a human-machine cooperation method and system based on scene perception, and a robot. In the method, the robot determines a person's intention through scene perception, which comprises voice recognition, gesture recognition and periodic scene perception recognition; the corresponding activity is associated based on the intention, and the set of subtasks contained in the activity is acquired through a database; while the subtask set is not finished, the joint entropy corresponding to each subtask is calculated, and the subtask is allocated to the human or the machine based on the value of the joint entropy. The invention acquires a person's intention through scene perception, refines the intention into tasks through the corresponding entries in a database, and assists the person in completing what they want to do according to a human-machine cooperation algorithm, helping them complete tasks efficiently and in time.

Description

Scene perception-based man-machine cooperation method and system and robot
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a scene perception-based man-machine cooperation method, a scene perception-based man-machine cooperation system and a robot.
Background
With the maturation and development of artificial intelligence technology, the trend toward intelligence in daily life has become increasingly evident. As population aging becomes more and more serious, solving the problems surrounding the elderly will become a focus of social attention. With increasing age, the body's functions decline sharply; failing memory, deteriorating eyesight and reduced mobility are the most prominent problems among the elderly, and can lead to situations such as not remembering a medicine's name, taking the wrong medicine, or taking expired medicine.
A breakthrough for artificial intelligence is to let more elderly-companion robots enter the lives of the elderly and cooperate with them to complete things beyond their physical reach. Most companion robots on the market today focus on voice companionship, i.e., chatting with the elderly to relieve their loneliness, which cannot fundamentally solve the practical difficulties in their lives. Cooperative robots fall broadly into two categories. The first is the industrial cooperative robot, which is confined to a fixed range and executes fixed tasks according to an industrialized process and standard; although efficient and low-cost, it is very rigid when applied to the lives of the elderly, cannot truly understand their intentions, cannot provide humanized service, and cannot guarantee their personal safety. The second category is the cooperative robot centered on intention estimation, which focuses on estimating human intention in order to provide more appropriate services; however, it remains questionable whether its method of cooperation is efficient, whether its selection of tasks during cooperation is optimal, and whether cooperative tasks can be completed within the time constraints the elderly face. Taking medicine is one example: the task may arise suddenly, and the robot must assist the elderly person efficiently. A voice companion robot obviously cannot complete it; an industrial cooperative robot cannot complete it if the task is not in its preset program; and a cooperative robot centered on intention estimation provides no concrete cooperation procedure, so the degree of completion is unknown and various problems arise during cooperation.
Disclosure of Invention
The invention provides a scene perception-based human-machine cooperation method, a scene perception-based human-machine cooperation system and a robot, to solve the problem that existing robots cannot carry out human-machine cooperation efficiently and completely.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a human-computer cooperation method based on scene perception in a first aspect, which comprises the following steps:
the robot determines the intention of a person through scene perception, wherein the scene perception comprises voice recognition, gesture recognition and periodic scene perception recognition;
associating corresponding activities based on the intentions, and acquiring a subtask set contained in the activities through a database;
and when the subtask set is not finished, calculating a joint entropy corresponding to the subtask, and distributing the subtask to a human or a machine to finish based on the value of the joint entropy.
Further, the periodic scene perception recognition is specifically:
in two continuous sensing periods, if neither a voice recognition signal nor a gesture recognition signal is acquired, calculating the degree of the hand approaching the object;
and when the value of the degree of approaching the object is greater than a preset threshold, the person is considered to have an intention toward the current object.
Further, the degree of the human hand approaching the object is calculated as:

ω_i = d_{oi,t1} - d_{oi,t2}

where ω_i is the degree to which the human hand approaches object i, and d_{oi,t1} and d_{oi,t2} are the distances between the human hand and object i in the field of view at times t1 and t2, respectively.
Further, before the distance between the human hand and an object i in the field of view is calculated, the method first calculates the center coordinates of the human hand and of each object in the field of view, specifically:

the robot establishes a bounding box for the hand and each object in the field of view, and from the bounding box's diagonal two-dimensional pixel coordinates (xmin_j, ymin_j) and (xmax_j, ymax_j) calculates the center-point pixel coordinates (x_j, y_j);

the actual center coordinates are then calculated from the center-point pixel coordinates:

x_r = (x_j - c_x) · d_p / f_x
y_r = (y_j - c_y) · d_p / f_y
z_r = d_p

where c_x and c_y are the coordinates of the camera's principal point on the X and Y axes, f_x and f_y are the focal lengths along the X and Y axes, d_p is the depth of the pixel, and (x_r, y_r, z_r) represents the actual center coordinates of the human hand or object.
Further, the determination of the completion condition of the subtask set specifically includes:
respectively acquiring a task set M finished by a person and a task set R finished by a machine, and calculating a union set of the set M and the set R;
if the union set is equal to the subtask set, the current subtask is completed, otherwise, the current subtask is not completed.
Further, the joint entropy is specifically calculated as:

H(X, Y) = -Σ_i p(x_i, y_i) log p(x_i, y_i) = Σ_i p(x_i, y_i) I(x_i, y_i)

where X represents the set of recognition rates of the target objects required by the activity, Y represents the difficulty level of the event, p(x_i, y_i) represents the probability that the events x_i and y_i occur together, and I(x_i, y_i) represents the self-information of x_i, y_i.
Further, the factors that affect the difficulty of the event include the weight of the item, the volume of the item, and the relative distance between the item and the robotic arm.
The invention provides a human-computer collaboration system based on scene perception in a second aspect, which comprises:
the system comprises a scene perception module, a processing module and a display module, wherein the scene perception module determines the intention of a person through scene perception, and the scene perception comprises voice recognition, gesture recognition and periodic scene perception recognition;
the task association module is used for associating corresponding activities based on the intents and acquiring a subtask set contained in the activities through a database;
and the man-machine cooperation module is used for calculating the joint entropy corresponding to the subtasks when the subtask set is not completed, and distributing the subtasks to the human or the machine to be completed based on the value of the joint entropy.
A third aspect of the invention provides a robot equipped with the above system.
A fourth aspect of the invention provides a computer storage medium having stored thereon computer instructions which, when run on the system, cause the system to perform the steps of the method.
The system provided in the second aspect can implement the method of the first aspect and each of its implementation manners, and achieves the same effects.
The effects described in this summary are only those of the embodiments, not all effects of the invention. The above technical solutions have the following advantages or beneficial effects:
the method acquires human intentions through scene perception, refines the intentions through tasks corresponding to a database, assists in completing the things that a human wants to do according to a man-machine cooperation algorithm, respectively makes the recognition rate and the operation difficulty of an article probabilistic in the man-machine cooperation algorithm, then uses a joint information entropy to obtain the certainty factor of the tasks, finally selects the task with the highest certainty factor from a task set, and actively interacts if the task with the certainty factor of 0 is met until the task set is completed, so that the efficient assistant can timely complete the tasks.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flow diagram of an embodiment of the method of the present invention;
FIG. 2 is a schematic flow chart of one implementation of the method embodiment of the present invention;
FIG. 3 is a diagram illustrating the correspondence between the intention library and the database according to an embodiment of the method of the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of the system of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
The invention addresses things in daily life that the elderly lack the strength to do themselves, improving their quality of life. The method first confirms the intention of the elderly person through scene perception; after the intention is obtained, the tasks corresponding to the intention are refined through a database; the elderly person is then assisted in completing what they want to do according to a human-machine cooperation algorithm. The prominent contribution herein is the task selection strategy in human-machine cooperation, which first divides each task into two parts, recognition of the article and operation on the article, and makes the recognition rate and the operation difficulty of the article probabilistic respectively; the joint information entropy is then used to obtain the certainty of each task; finally, the task with the highest certainty is selected from the task set, and if a task with certainty 0 is encountered, the robot actively interacts, until the task set is completed.
As shown in fig. 1, an embodiment of the present invention provides a human-computer collaboration method based on scene awareness, where the method includes the following steps:
s1, the robot determines the intention of the person through scene perception, wherein the scene perception comprises voice recognition, gesture recognition and periodical scene perception recognition;
s2, associating corresponding activities based on the intention, and acquiring subtask sets contained in the activities through a database;
and S3, when the subtask set is not completed, calculating a joint entropy corresponding to the subtask, and distributing the subtask to a human or a machine to be completed based on the value of the joint entropy.
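For illustration only, the following Python sketch shows one possible shape of the S1 to S3 loop described above. It is a minimal sketch, not the patented implementation; the methods perceive_intention, lookup_activity, certainty, can_do, execute and ask_user are hypothetical stand-ins for the modules described in this embodiment.

```python
# Minimal sketch of the S1-S3 cooperation loop; all robot/database
# methods are hypothetical stand-ins for the modules described above.

def cooperate(robot, database):
    intention = robot.perceive_intention()           # S1: speech, gesture, periodic scene perception
    subtasks = database.lookup_activity(intention)   # S2: activity -> subtask set Q_i
    done_human, done_robot = set(), set()
    while (done_human | done_robot) != subtasks:     # S3: loop until M ∪ R covers Q_i
        remaining = subtasks - (done_human | done_robot)
        task = max(remaining, key=robot.certainty)   # pick the highest-certainty subtask
        if robot.certainty(task) == 0:
            robot.ask_user(task)                     # certainty 0 -> actively interact,
            continue                                 # then re-estimate certainties
        if robot.can_do(task):
            robot.execute(task)
            done_robot.add(task)
        else:
            done_human.add(task)                     # allocate the subtask to the human
    return done_human, done_robot
```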
In step S1, the periodic scene perception recognition specifically includes:
in two continuous sensing periods, if neither a voice recognition signal nor a gesture recognition signal is acquired, calculating the degree of the hand approaching the object;
and when the value of the degree of approaching the object is greater than a preset threshold, the person is considered to have an intention toward the current object.
The degree of the human hand approaching the object is calculated as:

ω_i = d_{oi,t1} - d_{oi,t2}

where ω_i is the degree to which the human hand approaches object i, and d_{oi,t1} and d_{oi,t2} are the distances between the human hand and object i in the field of view at times t1 and t2, respectively. The larger the value of ω_i, the stronger the trend toward item i and the stronger the elderly person's intention to get item i.
When neither a gesture recognition result nor a voice recognition result is sensed at two consecutive moments T, the human-machine cooperation algorithm calculates the degree to which the elderly person approaches each article across the two consecutive moments, and takes the article with the largest degree as the target article.
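As a concrete illustration of this target-selection step, a minimal Python sketch follows. It assumes the hand-object distances at the two moments are already available, and uses the simple difference form of ω_i given above; the function names are illustrative.

```python
def approach_degree(d_t1, d_t2):
    """omega_i = d_{oi,t1} - d_{oi,t2}: positive when the hand got closer to object i."""
    return d_t1 - d_t2

def select_target(dists_t1, dists_t2, threshold):
    """Pick the item approached most strongly; None if no omega_i exceeds the threshold.

    dists_t1, dists_t2: dicts mapping item id -> hand-item distance at times t1 and t2.
    """
    omega = {i: approach_degree(dists_t1[i], dists_t2[i]) for i in dists_t1}
    best = max(omega, key=omega.get)
    return best if omega[best] > threshold else None

# Example: the hand moves from 0.80 m to 0.35 m of the cup while the box stays put.
print(select_target({"cup": 0.80, "box": 0.60}, {"cup": 0.35, "box": 0.61}, 0.1))  # cup
```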
Before the distance between the human hand and an object i in the field of view is calculated, the center coordinates of the human hand and of the objects in the field of view are first calculated, specifically:
the robot establishes bounding boxes (j is 1, 2.. o) for human hands and objects in the field of view, and o is the number of objects detectable in the field of view), and diagonal two-dimensional pixel coordinates (x min) based on the bounding boxesj,y minj)、(x maxj,y maxj) Calculating the center point pixel coordinate (x)j,yj);
Figure BDA0003363774050000062
The actual center coordinates are then calculated from the center-point pixel coordinates:

x_r = (x_j - c_x) · d_p / f_x
y_r = (y_j - c_y) · d_p / f_y
z_r = d_p

where c_x and c_y are the coordinates of the camera's principal point on the X and Y axes, f_x and f_y are the focal lengths along the X and Y axes, d_p is the depth of the pixel, and (x_r, y_r, z_r) represents the actual center coordinates of the human hand or object.
The distance do_i between the center coordinates (x_g, y_g, z_g) of the hand and the center coordinates (x_i, y_i, z_i) of each other object i (i = 1, 2, …, o-1) in the field of view is given by:

do_i = sqrt((x_g - x_i)² + (y_g - y_i)² + (z_g - z_i)²)
when d isoi<s1, it is said that the elderly have held an item, and s1 indicates that the elderly have just gripped the item but s1 is increased by an arbitrary small value and then dropped from the hand.
As shown in FIGS. 2 and 3, in step S2, based on the intention obtained in step S1, the activity Q_i corresponding to the current intention is found through the correspondence between the intention library and the database, the task set contained in the activity is acquired, and the task is decomposed into a number of subtasks Q_{i,1}, Q_{i,2}, …, Q_{i,n}.
The tasks in the subtask set include tasks completed by the human and tasks completed by the robot, recorded as set M and set R respectively, and the union of set M and set R is calculated.

If the union is equal to the subtask set Q_i, the current task is completed and no cooperation is needed; otherwise, the current task is not completed and cooperative operation is needed.
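In code, this completion test is a simple set-union check; a minimal sketch, assuming subtasks are represented by hashable identifiers:

```python
def cooperation_needed(done_by_human, done_by_robot, subtask_set):
    # The activity is finished when M ∪ R covers the whole subtask set Q_i.
    return (set(done_by_human) | set(done_by_robot)) != set(subtask_set)

print(cooperation_needed({"fetch box"}, {"open lid"}, {"fetch box", "open lid"}))  # False
```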
In step S3, the joint entropy H(X, Y) represents the information entropy when the random variables X and Y occur together, i.e., the degree of certainty when X and Y occur together; in other words, the joint entropy H(X, Y) represents the amount of information generated when X and Y occur together. The joint entropy is specifically calculated as:
H(X, Y) = -Σ_i p(x_i, y_i) log p(x_i, y_i) = Σ_i p(x_i, y_i) I(x_i, y_i)

where X represents the set of recognition rates of the target objects required by the activity, Y represents the difficulty level of the event, p(x_i, y_i) represents the probability that the events x_i and y_i occur together, and I(x_i, y_i) represents the self-information of x_i, y_i.
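A minimal sketch of the joint-entropy computation, assuming the joint probabilities p(x_i, y_i) have already been estimated from the recognition rates and difficulty levels:

```python
import math

def joint_entropy(joint_probs, base=2):
    """H(X, Y) = -sum_i p(x_i, y_i) * log p(x_i, y_i); zero-probability pairs contribute nothing."""
    return -sum(p * math.log(p, base) for p in joint_probs if p > 0)

# Example: three (recognition, difficulty) pairs with probabilities 0.5, 0.25, 0.25.
print(joint_entropy([0.5, 0.25, 0.25]))  # 1.5 bits
```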
The factors influencing the difficulty of the event include the weight m of the article, the volume v of the article, and the relative distance l between the article and the mechanical arm; Y is thereby converted into a value y_i ∈ (0, 1], which can be used as a parameter for calculating the joint entropy.
The difficulty is divided into five levels: simple, relatively simple, relatively difficult, difficult, and uncompletable, represented by the decimals 0.2, 0.4, 0.6, 0.8 and 1.0 respectively, specifically:
Y={m,v,l}:
y_m = 0.2 if m ≤ m_1; y_m = 0.4 if m_1 < m ≤ m_2; y_m = 0.6 if m_2 < m ≤ m_3; y_m = 0.8 if m_3 < m ≤ m_4; y_m = 1.0 if m > m_4

y_v = 0.2 if v ≤ v_1; y_v = 0.4 if v_1 < v ≤ v_2; y_v = 0.6 if v_2 < v ≤ v_3; y_v = 0.8 if v_3 < v ≤ v_4; y_v = 1.0 if v > v_4

y_l = 0.2 if l ≤ l_1; y_l = 0.4 if l_1 < l ≤ l_2; y_l = 0.6 if l_2 < l ≤ l_3; y_l = 0.8 if l_3 < l ≤ l_4; y_l = 1.0 if l > l_4
where m_1, m_2, m_3, m_4 are the set weight thresholds, v_1, v_2, v_3, v_4 are the set volume thresholds, and l_1, l_2, l_3, l_4 are the set relative-distance thresholds.
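The threshold mapping can be sketched as follows. The use of bisect and the aggregation of the three factors by their maximum are assumptions made for illustration, since this description does not state how y_m, y_v and y_l are combined into a single y_i; the example thresholds are likewise illustrative.

```python
import bisect

LEVELS = (0.2, 0.4, 0.6, 0.8, 1.0)  # simple ... uncompletable

def quantize(value, thresholds):
    """Map a raw value onto one of the five levels using four sorted thresholds."""
    return LEVELS[bisect.bisect_left(thresholds, value)]

def difficulty(m, v, l, m_thr, v_thr, l_thr):
    # Hypothetical aggregation: the hardest of the three factors dominates.
    return max(quantize(m, m_thr), quantize(v, v_thr), quantize(l, l_thr))

# Example with illustrative thresholds (kg, liters, meters).
print(difficulty(1.2, 0.5, 0.8,
                 m_thr=(0.5, 1.0, 2.0, 5.0),
                 v_thr=(0.3, 1.0, 3.0, 8.0),
                 l_thr=(0.3, 0.6, 1.0, 1.5)))  # 0.6
```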
As shown in fig. 4, an embodiment of the present invention further provides a human-computer collaboration system based on scene awareness, where the system includes a scene awareness module 1, a task association module 2, and a human-computer collaboration module 3.
The scene perception module 1 determines the intention of a person through scene perception, wherein the scene perception comprises voice recognition, gesture recognition and periodic scene perception recognition; the task association module 2 associates corresponding activities based on the intentions, and acquires a subtask set contained in the activities through a database; and the human-computer cooperation module 3 calculates the joint entropy corresponding to the subtasks when the subtask set is not completed, and allocates the subtasks to the human or the machine to be completed based on the value of the joint entropy.
An embodiment of the invention further provides a robot equipped with the above system.
The present invention also provides a computer storage medium having stored therein computer instructions which, when run on the system, cause the system to perform the steps of the method.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (10)

1. A man-machine cooperation method based on scene perception is characterized by comprising the following steps:
the robot determines the intention of a person through scene perception, wherein the scene perception comprises voice recognition, gesture recognition and periodic scene perception recognition;
associating corresponding activities based on the intentions, and acquiring a subtask set contained in the activities through a database;
and when the subtask set is not finished, calculating a joint entropy corresponding to the subtask, and distributing the subtask to a human or a machine to finish based on the value of the joint entropy.
2. The human-computer cooperation method based on scene perception of claim 1, wherein the periodic scene perception recognition is specifically:
in two continuous sensing periods, if neither a voice recognition signal nor a gesture recognition signal is acquired, calculating the degree of the hand approaching the object;
and when the value of the degree of approaching the object is greater than a preset threshold, the person is considered to have an intention toward the current object.
3. The human-computer cooperation method based on scene perception according to claim 2, wherein the degree of the human hand approaching the object is calculated as:

ω_i = d_{oi,t1} - d_{oi,t2}

where ω_i is the degree to which the human hand approaches object i, and d_{oi,t1} and d_{oi,t2} are the distances between the human hand and object i in the field of view at times t1 and t2, respectively.
4. The human-computer cooperation method based on scene perception according to claim 3, wherein before the distance between the human hand and the object i in the field of view is calculated, the center coordinates of the human hand and the object in the field of view are first calculated, specifically:

the robot establishes a bounding box for the hand and each object in the field of view, and from the bounding box's diagonal two-dimensional pixel coordinates (xmin_j, ymin_j) and (xmax_j, ymax_j) calculates the center-point pixel coordinates (x_j, y_j);

the actual center coordinates are calculated from the center-point pixel coordinates:

x_r = (x_j - c_x) · d_p / f_x
y_r = (y_j - c_y) · d_p / f_y
z_r = d_p

where c_x and c_y are the coordinates of the camera's principal point on the X and Y axes, f_x and f_y are the focal lengths along the X and Y axes, d_p is the depth of the pixel, and (x_r, y_r, z_r) represents the actual center coordinates of the human hand or object.
5. The human-computer collaboration method based on scene awareness as claimed in claim 1, wherein the determination of the completion of the subtask set is specifically:
respectively acquiring a task set M finished by a person and a task set R finished by a machine, and calculating a union set of the set M and the set R;
and if the union is equal to the subtask set, the current subtask is completed; otherwise, the current subtask is not completed.
6. The human-computer cooperation method based on scene perception of claim 1, wherein the joint entropy is calculated as:

H(X, Y) = -Σ_i p(x_i, y_i) log p(x_i, y_i) = Σ_i p(x_i, y_i) I(x_i, y_i)

where X represents the set of recognition rates of the target objects required by the activity, Y represents the difficulty level of the event, p(x_i, y_i) represents the probability that the events x_i and y_i occur together, and I(x_i, y_i) represents the self-information of x_i, y_i.
7. The human-computer cooperation method based on scene perception according to claim 6, wherein the factors influencing the difficulty of the event comprise the weight of the object, the volume of the object, and the relative distance between the object and the mechanical arm.
8. A human-computer collaboration system based on scene perception is characterized by comprising:
the scene perception module, which determines the intention of a person through scene perception, wherein the scene perception comprises voice recognition, gesture recognition and periodic scene perception recognition;
the task association module is used for associating corresponding activities based on the intents and acquiring a subtask set contained in the activities through a database;
and the man-machine cooperation module is used for calculating the joint entropy corresponding to the subtasks when the subtask set is not completed, and distributing the subtasks to the human or the machine to be completed based on the value of the joint entropy.
9. A robot, characterized in that said robot is provided with a system according to claim 8.
10. A computer storage medium having computer instructions stored thereon, which, when run on the system of claim 8, cause the system to perform the steps of the method of any one of claims 1-7.
CN202111382548.3A 2021-11-19 2021-11-19 Scene perception-based man-machine cooperation method and system and robot Pending CN114092791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111382548.3A | 2021-11-19 | 2021-11-19 | Scene perception-based man-machine cooperation method and system and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111382548.3A | 2021-11-19 | 2021-11-19 | Scene perception-based man-machine cooperation method and system and robot

Publications (1)

Publication Number Publication Date
CN114092791A (en) | 2022-02-25

Family

ID=80302363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111382548.3A | Scene perception-based man-machine cooperation method and system and robot (Pending) | 2021-11-19 | 2021-11-19

Country Status (1)

Country Link
CN (1) CN114092791A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098838A1 (en) * 2013-04-24 2016-04-07 Commissariat à l'Energie Atomique et aux Energies Alternatives Registration of sar images by mutual information
CN105427293A (en) * 2015-11-11 2016-03-23 中国科学院深圳先进技术研究院 Indoor scene scanning reconstruction method and apparatus
WO2017079918A1 (en) * 2015-11-11 2017-05-18 中国科学院深圳先进技术研究院 Indoor scene scanning reconstruction method and apparatus
KR20200059112A (en) * 2018-11-19 2020-05-28 한성대학교 산학협력단 System for Providing User-Robot Interaction and Computer Program Therefore
CN111383330A (en) * 2020-03-20 2020-07-07 吉林化工学院 Three-dimensional reconstruction method and system for complex environment
CN112099632A (en) * 2020-09-16 2020-12-18 济南大学 Human-robot cooperative interaction method for assistant accompanying

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIN ZHANG et al., "Multimodal of fusion algorithm applied to robots", Journal of Physics: Conference Series, 31 January 2020
冯志全; 梁丽伟; 徐涛; 杨晓晖; 刘弘: "Research on implicit interaction algorithms in virtual assembly interactive interfaces" (in Chinese), Journal of Computer-Aided Design & Computer Graphics, no. 10, 15 October 2017
尹建芹; 田国会; 姜海涛; 周风余: "Human action recognition for home service" (in Chinese), Journal of Sichuan University (Engineering Science Edition), no. 04, 20 July 2011
徐涛; 贾松敏; 张国梁: "Fast spatial object localization method for service robots based on co-saliency" (in Chinese), Robot, no. 03, 15 May 2017

Similar Documents

Publication Publication Date Title
Wendemuth et al. A companion technology for cognitive technical systems
CN106965193A (en) A kind of intelligent robot diagnosis guiding system
Jaques et al. Understanding and predicting bonding in conversations using thin slices of facial expressions and body language
CN112101219B (en) Intention understanding method and system for elderly accompanying robot
Wang et al. Living with artificial intelligence–developing a theory on trust in health chatbots
CN109765991A (en) Social interaction system is used to help system and non-transitory computer-readable storage media that user carries out social interaction
CN112613534B (en) Multi-mode information processing and interaction system
CN109117952A (en) A method of the robot emotion cognition based on deep learning
Chen et al. Real-time multi-modal human–robot collaboration using gestures and speech
Nejat et al. Can I be of assistance? The intelligence behind an assistive robot
Su et al. Recent advancements in multimodal human–robot interaction
Jayaratne et al. Bio-inspired multisensory fusion for autonomous robots
Zhang et al. A fusion-based spiking neural network approach for predicting collaboration request in human-robot collaboration
Papanagiotou et al. Egocentric gesture recognition using 3D convolutional neural networks for the spatiotemporal adaptation of collaborative robots
CN114092791A (en) Scene perception-based man-machine cooperation method and system and robot
CN114093028A (en) Human-computer cooperation method and system based on intention analysis and robot
CN112099632B (en) Human-robot cooperative interaction method for helping old accompany
CN108919804B (en) Intelligent vehicle unmanned system
Saad et al. An integrated human computer interaction scheme for object detection using deep learning
Chen et al. Dynamic gesture design and recognition for human-robot collaboration with convolutional neural networks
CN106054602A (en) Fuzzy adaptive robot system capable of recognizing voice demand and working method thereof
Wang et al. Hand and Arm Gesture-based Human-Robot Interaction: A Review
Makne et al. Artificial Intelligence: A Review
Zhou et al. MIUIC: A human-computer collaborative multimodal intention-understanding algorithm incorporating comfort analysis
Altameem et al. Responsive Policy Decisions for Improving the Accuracy of Medical Data Analysis in Healthcare-based Human–Machine Interaction Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination