CN107657852B - Infant teaching robot, teaching system and storage medium based on face recognition - Google Patents

Infant teaching robot, teaching system and storage medium based on face recognition

Info

Publication number
CN107657852B
CN107657852B (application CN201711118479.9A)
Authority
CN
China
Prior art keywords
learner
interactor
teaching
robot
image signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711118479.9A
Other languages
Chinese (zh)
Other versions
CN107657852A (en)
Inventor
翟奕雲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201711118479.9A priority Critical patent/CN107657852B/en
Publication of CN107657852A publication Critical patent/CN107657852A/en
Application granted granted Critical
Publication of CN107657852B publication Critical patent/CN107657852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a face recognition-based infant teaching robot, a teaching system, and a storage medium storing a computer program executable on a processor. The teaching system comprises the robot and a position signal transmitting device that can be worn by the instructor. The robot receives a signal indicating the instructor's position, uses it to aim its second camera and capture the instructor's face image signal, and generates interactive feedback with the interactor (the child) that considers both the interactor's situation and the instructor's guidance. The instructor can therefore intervene easily and deeply in the interaction between the child and the robot, improving the teaching effect.

Description

Infant teaching robot, teaching system and storage medium based on face recognition
Technical Field
The invention relates to the field of infant teaching, and in particular to an infant teaching robot based on face recognition, an infant teaching system, a teaching interaction method, and a computer-readable storage medium storing a computer program executable on a processor.
Background
With the rise of intelligent technology, intelligent robots are increasingly being introduced into early-childhood education. Face recognition-based robots are already popular in this setting: a robot can recognize a child's face, infer the child's changing emotional state from changes in facial expression, and respond with corresponding interactive feedback. Such robots can therefore guide children more effectively and achieve a better teaching effect.
However, current intelligent robots consider only the interaction between the robot and the child, not the three-way interaction among robot, child, and teacher. In the classroom, the teacher still controls the rhythm of the lesson and knows each child's circumstances far more deeply, and can therefore make more humane judgments. If the robot-child interaction lacks teacher intervention, it lacks this human dimension, makes it difficult to truly teach each child according to his or her aptitude, and disrupts the teacher's teaching rhythm.
The teacher must therefore be given means to intervene in the interaction between the intelligent robot and the student. At present, some intelligent robots add a remote-control function so that the teacher can steer the robot with a remote controller and thereby intervene. However, because a remote controller offers only limited functions, such intervention remains shallow, and it burdens the teacher with extra operations that distract from teaching.
Disclosure of Invention
The invention aims to let the instructor intervene easily and deeply in the interaction between the interactor (the child) and the robot, thereby improving the teaching effect.
The invention provides a face recognition-based infant teaching robot comprising: a position signal receiving device for receiving the instructor's position; a first camera for acquiring the face image signal of the current interactor; and a second camera, mounted on top of the robot through a rotating device, for acquiring the instructor's face image signal. The robot further comprises a main controller. According to the instructor position information received by the position signal receiving device, the main controller controls the rotating device to rotate so that the second camera is aimed at the instructor and acquires the instructor's face image signal; the main controller then performs interactive feedback according to both the interactor's face image signal and the instructor's face image signal.
The robot further comprises a robot body whose front face carries a display device; the first camera is arranged on the front face of the robot body.
The robot further comprises a voiceprint recognition device; the main controller determines the identity of the current interactor from the voiceprint information acquired by this device together with the interactor's face image signal.
The robot further comprises a self-walking device; the main controller drives it to a preset position according to a preset self-walking strategy and the instructor's position information.
The invention also provides a face recognition-based infant teaching system comprising the above infant teaching robot and a position signal transmitting device wearable by the instructor; the position signal receiving device receives the position signal transmitted by the transmitting device.
The invention also provides a face recognition-based infant teaching interaction method comprising the following steps:
an interactor identification step: acquiring the face image signal of the current interactor;
an instructor identification step: acquiring the instructor's position information and adjusting the camera orientation accordingly, so as to acquire the instructor's face image signal;
an interactive feedback step: judging the interactor's current emotional state from the interactor's face image signal, judging the instructor's guiding intention from the instructor's face image signal, generating an interactive feedback strategy for the next feedback from the emotional state and the guiding intention, and executing the corresponding interactive feedback.
The interactive feedback step includes an emotional state judgment step: obtaining the pre-stored image set of the current interactor, selecting from it the image model closest to the interactor's face image signal, and judging the current emotional state from the emotion attribute preset for that image model.
The interactive feedback step also includes a guiding intention judgment step: obtaining the instructor's pre-stored guiding image set, selecting from it the image model closest to the instructor's face image signal, and judging the guiding intention from the guiding intention attribute preset for that image model.
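Both judgment steps reduce to the same nearest-model matching. The sketch below illustrates it in Python; the feature vectors, attribute labels, and function names are assumptions for illustration, not part of the patent disclosure:

```python
import math

# Hypothetical nearest-model matcher: a face image signal is assumed to be
# reduced to a small feature vector, and each pre-stored image model pairs
# a feature vector with a preset attribute (an emotion or a guiding intention).

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_attribute(face_features, image_set):
    """Return the preset attribute of the image model closest to the signal.

    image_set: list of (feature_vector, attribute) pairs.
    """
    _, attribute = min(image_set, key=lambda m: euclidean(face_features, m[0]))
    return attribute

# Example: a tiny pre-stored emotion set for one interactor and a guiding
# image set for the instructor (both invented for illustration).
emotion_set = [([0.9, 0.1], "happy"), ([0.2, 0.8], "upset"), ([0.5, 0.5], "neutral")]
guiding_set = [([0.1, 0.9], "encourage"), ([0.8, 0.2], "slow down")]
```

The same `closest_attribute` routine would serve the emotional state judgment and the guiding intention judgment; only the image set differs.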
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the above face recognition-based infant teaching interaction method.
The beneficial effects of the invention are as follows: the infant teaching system comprises the robot and a position signal transmitting device wearable by the instructor. The robot receives the signal indicating the instructor's position, uses it to aim the second camera and capture the instructor's face image signal, and generates interactive feedback that considers both the interactor's situation and the instructor's guidance. The instructor can therefore intervene easily and deeply in the robot's interaction with the child, improving the teaching effect.
Drawings
The invention will be further described with reference to the accompanying drawings. The embodiments shown do not limit the invention in any way, and a person of ordinary skill in the art can obtain other drawings from the following drawings without inventive effort.
Fig. 1 is a schematic block diagram of a face recognition-based infant teaching system of the present invention.
Fig. 1 includes: 1 - main controller; 2 - first camera; 3 - second camera; 4 - rotating device; 5 - position signal transmitting device; 6 - position signal receiving device; 7 - voiceprint recognition device; 8 - self-walking device.
Detailed Description
The invention will be further described with reference to the following examples.
As shown in fig. 1, the face recognition-based infant teaching system of the invention comprises an infant teaching robot and a position signal transmitting device 5 wearable by the instructor. The robot comprises a main controller 1, a position signal receiving device 6 for receiving the instructor's position, a first camera 2 for acquiring the current interactor's face image signal, a rotating device 4, and a second camera 3 for acquiring the instructor's face image signal; the second camera 3 is mounted on top of the robot through the rotating device 4.
In a classroom, the teacher (the instructor) moves around continuously while teaching, so the teacher wears the position signal transmitting device 5. The device detects the teacher's position in real time or intermittently and transmits it to the position signal receiving device 6, which forwards it to the main controller 1. So that the second camera 3 can clearly capture the teacher's facial expression (the instructor face image signal) in real time, the main controller 1 controls the rotating device 4 to rotate according to the position information until the second camera 3 is aimed at the teacher, and the main controller 1 then determines the teacher's guiding intention from the acquired face image signal. To judge this intention more accurately, the main controller 1 stores in advance the teacher's guiding image set: the set of facial expressions the teacher makes to represent the different guiding attributes, with a one-to-one correspondence between attributes and expressions. Because each infant (the interactor) normally sits in a fixed seat, the robot is placed beside the platform with its front facing the infants; the front of the robot body carries a display device for the infants to watch, and the first camera 2 is installed above the display device so that it clearly captures each infant's facial expression (the interactor face image signal).
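The aiming of the second camera can be sketched as a pan-angle computation. This is a minimal sketch under the assumption of a planar coordinate frame; the function names and the control scheme are illustrative, not the patent's actual control law:

```python
import math

def pan_angle_deg(robot_xy, instructor_xy):
    """Bearing from the robot to the instructor's transmitted position, in degrees."""
    dx = instructor_xy[0] - robot_xy[0]
    dy = instructor_xy[1] - robot_xy[1]
    return math.degrees(math.atan2(dy, dx))

def rotation_command(current_deg, target_deg):
    """Shortest signed rotation (degrees) that aligns the camera with the target."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

# The main controller would repeatedly feed each received position into
# pan_angle_deg and drive the rotating device by rotation_command.
```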
The first camera 2 transmits the captured infant face image signal to the main controller 1, which judges the infant's current emotional state from it. To make this judgment more accurate, the main controller 1 stores a pre-stored image set for each infant: the set of image models the infant displays in different emotional states (emotion attributes).
During lesson interaction the robot faces the infants. The first camera 2 sends each infant's facial expression (the interactor face image signal) to the main controller 1 in real time; the controller selects from the pre-stored image set the image model closest to the signal and judges the current emotional state from that model's preset emotion attribute. Because the first camera 2 captures several infants, the robot carries a voiceprint recognition device 7 to identify the infant currently interacting: the device sends the acquired voiceprint information to the main controller 1, which determines the current infant's identity from the voiceprint together with the interactor face image signal. In parallel, the second camera 3 sends the teacher's facial expression (the instructor face image signal) to the main controller 1, which selects the closest image model from the guiding image set and judges the guiding intention from that model's preset guiding intention attribute. The main controller 1 then generates the interactive feedback strategy for the next feedback from the current emotional state and the guiding intention, and executes the corresponding interactive feedback.
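The generation of the interactive feedback strategy from the two judgments can be sketched as a simple lookup. The emotion labels, intention labels, and feedback actions below are invented for illustration; the patent does not specify the rule set:

```python
# Hypothetical strategy table mapping (emotional state, guiding intention)
# pairs to the robot's next feedback action.
FEEDBACK = {
    ("upset", "comfort"): "play a soothing song and lower the voice",
    ("upset", "continue"): "simplify the current question and encourage",
    ("happy", "continue"): "proceed to the next teaching item",
    ("happy", "pause"): "pause and praise the child",
}

def feedback_strategy(emotion, intention):
    # Fall back to a neutral prompt when the pair is not covered.
    return FEEDBACK.get((emotion, intention), "repeat the prompt gently")
```

A real controller would likely use a richer policy, but the table makes the key point concrete: the child's state and the teacher's guidance jointly select the feedback.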
Because the main controller 1 bases its interactive feedback on both the interactor's and the instructor's face image signals, the feedback considers the infant's situation as well as the teacher's guidance, so the teacher can intervene easily and deeply in the interaction between interactor and robot, improving the teaching effect.
Since the teacher may keep changing position during the interaction, and to make the interaction more engaging, the robot also carries a self-walking device 8; the main controller 1 drives it to a preset position according to a preset self-walking strategy and the instructor's position information.
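A minimal sketch of such a self-walking strategy, assuming the preset positions are fixed points in the classroom and the robot walks to the one nearest the instructor (the positions and the nearest-point rule are hypothetical):

```python
import math

# Invented preset positions, e.g. corners of the teaching area (metres).
PRESET_POSITIONS = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]

def walk_target(instructor_xy):
    """Pick the preset position nearest to the instructor's current position."""
    return min(PRESET_POSITIONS, key=lambda p: math.dist(p, instructor_xy))
```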
Finally, it should be noted that the above embodiments only illustrate the technical solution of the invention and do not limit its scope. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope.

Claims (7)

1. A face recognition-based infant teaching robot, characterized by comprising: a position signal receiving device for receiving the instructor's position, a first camera for acquiring the face image signal of the current interactor, and a second camera, mounted on top of the robot through a rotating device, for acquiring the instructor's face image signal; the teaching robot further comprises a main controller which controls the rotating device to rotate according to the instructor position information received by the position signal receiving device, so that the second camera is aimed at the instructor and acquires the instructor's face image signal, and which performs interactive feedback according to the interactor's face image signal and the instructor's face image signal;
the interactive feedback comprises emotion state judgment and guiding intention judgment, wherein the emotion state judgment step comprises the following steps: acquiring a pre-stored image set of a current interactor, selecting an image model closest to a face image signal of the interactor from the pre-stored image set, and judging the emotion state of the current interactor according to emotion attributes preset by the image model; the guiding intention judging step: the method comprises the steps of obtaining a pre-stored guiding image set of a learner, selecting an image model closest to a face image signal of the learner from the guiding image set, and judging the guiding intention of the learner according to guiding intention attributes preset by the image model.
2. The face recognition-based infant teaching robot according to claim 1, further comprising a robot body, wherein a display device is provided on the front face of the robot body and the first camera is disposed on the front face of the robot body.
3. The face recognition-based infant teaching robot according to claim 1, further comprising a voiceprint recognition device, wherein the main controller determines the identity of the current interactor from the voiceprint information acquired by the voiceprint recognition device and the interactor's face image signal.
4. The face recognition-based infant teaching robot according to claim 1, further comprising a self-walking device, wherein the main controller controls the self-walking device to walk to a preset position according to a preset self-walking strategy and the instructor's position information.
5. A face recognition-based infant teaching system, comprising a position signal transmitting device wearable by the instructor and the infant teaching robot according to any one of claims 1 to 4, wherein the position signal receiving device receives the position signal transmitted by the position signal transmitting device.
6. A face recognition-based infant teaching interaction method, characterized by comprising the following steps:
an interactor identification step: acquiring the face image signal of the current interactor;
an instructor identification step: acquiring the instructor's position information and adjusting the camera position accordingly, so as to acquire the instructor's face image signal;
an interactive feedback step: judging the interactor's current emotional state from the interactor's face image signal, judging the instructor's guiding intention from the instructor's face image signal, generating an interactive feedback strategy for the next feedback from the emotional state and the guiding intention, and executing the corresponding interactive feedback;
wherein the interactive feedback step includes an emotional state judgment step: obtaining the pre-stored image set of the current interactor, selecting from it the image model closest to the interactor's face image signal, and judging the current emotional state from the emotion attribute preset for that image model;
and the interactive feedback step includes a guiding intention judgment step: obtaining the instructor's pre-stored guiding image set, selecting from it the image model closest to the instructor's face image signal, and judging the guiding intention from the guiding intention attribute preset for that image model.
7. A computer-readable storage medium storing a computer program which, when executed by a processor, performs the face recognition-based infant teaching interaction method of claim 6.
CN201711118479.9A 2017-11-14 2017-11-14 Infant teaching robot, teaching system and storage medium based on face recognition Active CN107657852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711118479.9A CN107657852B (en) 2017-11-14 2017-11-14 Infant teaching robot, teaching system and storage medium based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711118479.9A CN107657852B (en) 2017-11-14 2017-11-14 Infant teaching robot, teaching system and storage medium based on face recognition

Publications (2)

Publication Number Publication Date
CN107657852A (en) 2018-02-02
CN107657852B (en) 2023-09-22

Family

ID=61120415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711118479.9A Active CN107657852B (en) 2017-11-14 2017-11-14 Infant teaching robot, teaching system and storage medium based on face recognition

Country Status (1)

Country Link
CN (1) CN107657852B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427722A (en) * 2018-02-09 2018-08-21 卫盈联信息技术(深圳)有限公司 intelligent interactive method, electronic device and storage medium
US11422568B1 (en) * 2019-11-11 2022-08-23 Amazon Technolgoies, Inc. System to facilitate user authentication by autonomous mobile device
CN113393717A (en) * 2021-06-10 2021-09-14 上海宝明教育科技有限公司 Computer multimedia distance education training set

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998056209A2 (en) * 1997-06-02 1998-12-10 Marie Lapalme Video-assisted apparatus for hearing impaired persons
JP2001357137A (en) * 2000-06-09 2001-12-26 Sega Corp Health consultation network system and health consulting method using the system
JP2007043263A (en) * 2005-08-01 2007-02-15 Ricoh Co Ltd Photographing system, photographing method, and program for executing the method
WO2011049353A2 (en) * 2009-10-21 2011-04-28 디브이에스코리아 주식회사 System and method for providing electronic learning content
JP2011107329A (en) * 2009-11-16 2011-06-02 Wao Corporation Server device, bidirectional education method, and program
CN103764236A (en) * 2011-08-16 2014-04-30 西博互动有限公司 Connected multi functional system and method of use
CN103869470A (en) * 2012-12-18 2014-06-18 精工爱普生株式会社 Display device, head-mount type display device, method of controlling display device, and method of controlling head-mount type display device
CN204322085U (en) * 2014-12-15 2015-05-13 山东大学 A kind of early education towards child is accompanied and attended to robot
CN104800950A (en) * 2015-04-22 2015-07-29 中国科学院自动化研究所 Robot and system for assisting autistic child therapy
CN105082150A (en) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intension recognition
WO2015177908A1 (en) * 2014-05-22 2015-11-26 株式会社日立製作所 Training system
CN105139701A (en) * 2015-09-16 2015-12-09 华中师范大学 Interactive children teaching system
WO2017031860A1 (en) * 2015-08-24 2017-03-02 百度在线网络技术(北京)有限公司 Artificial intelligence-based control method and system for intelligent interaction device
CN106485968A (en) * 2016-12-15 2017-03-08 重庆市巫溪县中小企业公共服务中心 Online class interaction system
CN107025616A (en) * 2017-05-08 2017-08-08 湖南科乐坊教育科技股份有限公司 A kind of childhood teaching condition detection method and its system
CN107053191A (en) * 2016-12-31 2017-08-18 华为技术有限公司 A kind of robot, server and man-machine interaction method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872535B2 (en) * 2009-07-24 2020-12-22 Tutor Group Limited Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups
US9164590B2 (en) * 2010-12-24 2015-10-20 Kevadiya, Inc. System and method for automated capture and compaction of instructional performances
US10319249B2 (en) * 2012-11-21 2019-06-11 Laureate Education, Inc. Facial expression recognition in educational learning systems


Also Published As

Publication number Publication date
CN107657852A (en) 2018-02-02

Similar Documents

Publication Publication Date Title
Fasola et al. Robot exercise instructor: A socially assistive robot system to monitor and encourage physical exercise for the elderly
CN107657852B (en) Infant teaching robot, teaching system and storage medium based on face recognition
CN107225578B (en) Robot control device, robot control method and system
Breazeal Sociable machines: Expressive social exchange between humans and robots
Boccanfuso et al. CHARLIE: An adaptive robot design with hand and face tracking for use in autism therapy
McColl et al. Brian 2.1: A socially assistive robot for the elderly and cognitively impaired
CN106022305A (en) Intelligent robot movement comparing method and robot
CN107369341A (en) Educational robot
CN112614399A (en) Dance teaching equipment based on virtual reality and teaching method thereof
Petric et al. Four tasks of a robot-assisted autism spectrum disorder diagnostic protocol: First clinical tests
Block et al. In the arms of a robot: Designing autonomous hugging robots with intra-hug gestures
US20180052512A1 (en) Behavioral rehearsal system and supporting software
Palestra et al. Artificial Intelligence for Robot-Assisted Treatment of Autism.
Tanevska et al. A cognitive architecture for socially adaptable robots
Alnajjar et al. A low-cost autonomous attention assessment system for robot intervention with autistic children
Liu et al. An interactive training system of motor learning by imitation and speech instructions for children with autism
McColl et al. Classifying a person’s degree of accessibility from natural body language during social human–robot interactions
Tanaka et al. Developing dance interaction between QRIO and toddlers in a classroom environment: plans for the first steps
Michaud et al. Assistive technologies and child-robot interaction
Hieida et al. Walking hand-in-hand helps relationship building between child and robot
US20240038361A1 (en) Systems and Methods for Evaluating Environmental and Entertaining Elements of Digital Therapeutic Content
CN110786643A (en) Desktop device and method for protecting eyesight of students during learning
CN108742516B (en) Emotion measuring and adjusting system and method for smart home
Chen et al. Android as a receptionist in a shopping mall using inverse reinforcement learning
Palestra et al. Social robots in postural education: a new approach to address body consciousness in ASD children

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant