WO2017143951A1 - Expression feedback method and smart robot - Google Patents

Expression feedback method and smart robot

Info

Publication number
WO2017143951A1
WO2017143951A1 PCT/CN2017/074054
Authority
WO
WIPO (PCT)
Prior art keywords
information
face
intelligent robot
feature point
recognition model
Prior art date
Application number
PCT/CN2017/074054
Other languages
English (en)
French (fr)
Inventor
陈明修
Original Assignee
芋头科技(杭州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 芋头科技(杭州)有限公司 filed Critical 芋头科技(杭州)有限公司
Priority to US15/999,762 priority Critical patent/US11819996B2/en
Publication of WO2017143951A1 publication Critical patent/WO2017143951A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/001Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/0015Face robots, animated artificial faces for imitating human expressions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1653Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Definitions

  • the present invention relates to the field of smart device technologies, and in particular, to an expression feedback method and an intelligent robot.
  • a single intelligent robot may integrate the audio playback function of an audio playback device, the video playback function of a video playback device, the voice dialogue function of an intelligent voice device, and various other functions.
  • the intelligent robot can make feedback according to the information input by the user, that is, the intelligent robot can perform relatively simple information interaction with the user.
  • however, the feedback that the intelligent robot gives the user is usually "cold", that is, it does not vary with the user's emotion when the information is input; the content of the information interaction between the user and the intelligent robot is therefore relatively rigid, which degrades the user experience.
  • a technical solution of an expression feedback method and an intelligent robot is provided, aiming to enrich the content of the information interaction between the intelligent robot and the user and thereby improve the user experience.
  • An expression feedback method is applicable to an intelligent robot provided with an image collection device, and includes:
  • Step S1: the intelligent robot collects image information with the image collection device;
  • Step S2: the intelligent robot detects whether face information representing a human face exists in the image information: if yes, the location information and the size information associated with the face information are obtained and the method proceeds to step S3; if no, the method returns to step S1;
  • Step S3: a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output; the pieces of feature point information are respectively associated with the parts of the face represented by the face information;
  • Step S4: a pre-trained first recognition model determines, according to the feature point information, whether the face information represents a smiling face: if yes, the intelligent robot outputs preset expression feedback information; if no, the method exits.
  • In the expression feedback method, a second recognition model for detecting the face information is pre-trained in the intelligent robot; step S2 then specifically is: the intelligent robot uses the second recognition model to detect whether the face information exists in the image information: if yes, the location information and the size information associated with the face information are obtained and the method proceeds to step S3; if no, the method returns to step S1.
  • In the expression feedback method, a third recognition model for predicting the feature point information is pre-trained in the intelligent robot; step S3 then specifically is: the 68 pieces of feature point information associated with the face information are predicted and output by using the third recognition model.
  • In the expression feedback method, step S4 specifically includes: Step S41, inputting all the feature point information predicted in step S3 into the first recognition model; Step S42, determining with the first recognition model whether the face information represents a smiling face: if yes, the intelligent robot outputs preset expression feedback information; if no, the method exits.
  • In the expression feedback method, in step S4, the expression feedback information includes: a preset emoticon displayed by the display device of the intelligent robot for expressing a happy mood; and/or a preset sound played by the voice playback device of the intelligent robot for expressing a happy mood.
  • The beneficial effect of the above technical solution is that an expression feedback method is provided that can enrich the content of the information interaction between the intelligent robot and the user, thereby improving the user experience.
  • FIG. 1 is a schematic overall flow chart of an expression feedback method in a preferred embodiment of the present invention.
  • FIG. 2 is a flow chart showing the recognition of a smiling face on the basis of FIG. 1 in a preferred embodiment of the present invention.
  • Step S1: the intelligent robot collects image information with the image acquisition device;
  • Step S2: the intelligent robot detects whether face information representing a face exists in the image information: if yes, the location information and the size information associated with the face information are obtained and the process proceeds to step S3; if no, the process returns to step S1;
  • Step S3: a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output;
  • Step S4: a pre-trained first recognition model determines, based on the feature point information, whether the face information represents a smiling face: if yes, the intelligent robot outputs the preset expression feedback information and then exits; if no, the process exits.
  • In a specific embodiment, the intelligent robot uses face recognition to decide what expression information it should feed back. Specifically, the user needs to stand in the collection area of the image acquisition device (i.e., the camera) of the intelligent robot, for example directly in front of the image acquisition device.
  • In this embodiment, the intelligent robot first collects image information in the collection area with the image acquisition device and then determines whether face information is present in that image information: if present, the face information is received, and its location information and size information are extracted at the same time. The location information refers to the specific position of the face information within the viewfinder frame of the image acquisition device; the size information refers to the size of the face represented by the face information.
  • After the face information and its location and size information are obtained, the plurality of pieces of feature point information of the face information can be predicted from that location and size. The pieces of feature point information are respectively associated with the parts of the face represented by the face information; for example, different pieces of feature point information are associated with the eyebrows, eyes, nose, mouth, and the overall face contour of the face indicated by the face information.
  • After the pieces of feature point information are predicted, the pre-trained first recognition model is used to recognize, from all of them, whether the face represented by the face information is a smiling face: if yes, the intelligent robot outputs the preset expression feedback information corresponding to the smiling face; if no, the process exits directly. Further, if the face information does not represent a smiling face, the intelligent robot may output other feedback information according to a preset policy; that process is not part of the present technical solution and is therefore not described.
  • A second recognition model for detecting the face information is pre-trained in the intelligent robot; step S2 then specifically is: the intelligent robot uses the second recognition model to detect whether face information exists in the image information: if yes, the location information and the size information associated with the face information are obtained and the process proceeds to step S3; if no, the process returns to step S1.
  • Specifically, the second recognition model is pre-trained in the intelligent robot by inputting a number of different training samples. The second recognition model is a face detector that can be used to determine whether face information exists in the image information collected by the image collection device.
  • A third recognition model for predicting the feature point information is pre-trained in the intelligent robot; step S3 then specifically is: the third recognition model is used to predict and output, according to the location information and the size information, the plurality of pieces of feature point information in the face information.
  • Specifically, the third recognition model, used for predicting the plurality of pieces of feature point information in the face information, is likewise pre-trained in the intelligent robot by inputting a number of different training samples. In other words, the third recognition model is a feature point prediction model. With it, a plurality of pieces of feature point information on the face can be predicted from the face information together with its position, size, and similar information.
  • In a preferred embodiment, 68 pieces of feature point information are predicted from the face information by using the third recognition model; the parts of the face covered by these feature points include, but are not limited to, the eyebrows, eyes, nose, mouth, and the overall contour of the face. In other words, the 68 pieces of feature point information predicted from the face information by the third recognition model can each be used to represent the information of a part of the face, and after the recognition processing of the third recognition model, the 68 predicted pieces of feature point information are output.
  • Step S4 specifically includes: Step S41, inputting all the feature point information predicted in step S3 into the first recognition model; Step S42, determining with the first recognition model whether the face information represents a smiling face: if yes, the intelligent robot outputs the preset expression feedback information; if no, the process exits.
  • The first recognition model is likewise formed by inputting a large number of training samples; it is a smile recognition model for identifying whether a face is a smiling face. The smile recognition model can judge, from the various pieces of feature point information captured on a person's face, whether those features are the features of a smiling face, and thereby determine whether the face is a smiling face.
  • In step S41, all the feature point information predicted by the third recognition model is input into the first recognition model; for example, the 68 pieces of feature point information obtained by prediction are all input into the first recognition model as its input data. The first recognition model then makes a determination based on the feature point information, for example according to one or more of the following: 1) whether the feature point information indicates that the mouth presents a shape corresponding to a smiling face (for example, upturned mouth corners or an open mouth); 2) whether the feature point information indicates that the smile-muscle region of the face presents a shape corresponding to a smiling face (for example, muscles gathered and raised); 3) whether the feature point information indicates that the eye region presents a shape corresponding to a smiling face (for example, narrowed eyes). The judgment basis may also include other smile features that can usually be observed or obtained through experimental data, which are not described here.
  • If the input of the above 68 pieces of feature point information shows that the current face matches the smile features, the face can be judged to be a smiling face, and the expression feedback information corresponding to a smiling face is then output according to the judgment result.
  • The expression feedback information may include one or more of the following: a preset emoticon displayed by the display device of the intelligent robot for expressing a happy mood; and/or a preset sound played by the voice playback device of the intelligent robot for expressing a happy mood.
  • The emoticon described above may be an emoticon displayed on a display device (such as a display screen) of the intelligent robot, for example a stick-figure smiley face, a smiling face shown directly on the display, or another preset expression for conveying a happy mood.
  • The sound described above may be a preset voice played from a voice playback device (for example, a speaker) of the smart robot, such as preset laughter or speech with a happy tone, or another preset sound.
  • the expression feedback information may further include other feedback information that can be output and perceived by the user, and details are not described herein.
  • In summary, in the technical solution of the present invention, the face information in the image information collected by the image acquisition device is first recognized by a face detector, a plurality of pieces of feature point information in the face information are then predicted by a feature point prediction model, all the feature point information is input into a smile recognition model, and the smile recognition model identifies whether the current face information represents a smiling face: if it does, the intelligent robot outputs expression feedback information for expressing a happy mood; if not, the process exits directly.
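The detect-predict-classify loop summarized above can be sketched in plain Python. The patent names no concrete libraries or models, so the detector, landmark predictor, and smile classifier below are hypothetical stand-ins, wired together only to show the control flow of steps S1 to S4:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class FaceBox:
    """Location and size information associated with detected face information."""
    x: int
    y: int
    w: int
    h: int


def expression_feedback_loop(
    capture: Callable[[], object],                            # step S1: image source
    detect_face: Callable[[object], Optional[FaceBox]],       # second model (face detector)
    predict_landmarks: Callable[[object, FaceBox], List[Tuple[int, int]]],  # third model
    is_smile: Callable[[List[Tuple[int, int]]], bool],        # first model (smile classifier)
    output_feedback: Callable[[], None],
    max_frames: int = 10,
) -> bool:
    """Run steps S1-S4; return True if smile feedback was emitted."""
    for _ in range(max_frames):
        image = capture()                  # S1: collect image information
        box = detect_face(image)           # S2: detect face information
        if box is None:
            continue                       # no face found -> back to S1
        landmarks = predict_landmarks(image, box)  # S3: e.g. 68 feature points
        if is_smile(landmarks):            # S4: first recognition model
            output_feedback()              # smiling face -> preset expression feedback
            return True
        return False                       # not a smiling face -> exit
    return False
```

A caller would plug in real components (camera capture, trained detector and predictor) for the stand-in callables; the loop structure itself matches the flow of FIG. 1.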

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

An expression feedback method and a smart robot, belonging to the field of smart devices. The method includes: step S1, the smart robot collects image information with an image collection device; step S2, the smart robot detects whether face information representing a human face exists in the image information: if yes, the location information and size information associated with the face information are obtained and the method proceeds to step S3; if no, the method returns to step S1; step S3, a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output; step S4, a pre-trained first recognition model determines, according to the feature point information, whether the face information represents a smiling face: if yes, the smart robot outputs preset expression feedback information; if no, the method exits. The beneficial effect of the method is that it enriches the content of the information interaction between the smart robot and the user, thereby improving the user experience.

Description

Expression feedback method and smart robot
Technical field
The present invention relates to the technical field of smart devices, and in particular to an expression feedback method and a smart robot.
Background art
With the rapid development of smart device manufacturing and R&D technology, a rather special kind of smart device, the smart robot, has begun to enter people's lives. A so-called smart robot is in fact a smart device with diversified functions, equivalent to integrating the different functions of different kinds of smart devices in one device. For example, one smart robot may integrate the audio playback function of an audio playback device, the video playback function of a video playback device, the voice dialogue function of a smart voice device, and various other functions.
In the prior art, a smart robot can give feedback according to the information input by a user; that is, the smart robot can carry out relatively simple information interaction with the user. However, the feedback the smart robot gives the user is usually "cold": it does not vary with the user's emotion when the information is input. The content of the information interaction between the user and the smart robot is therefore relatively rigid, which degrades the user experience.
Summary of the invention
In view of the above problems in the prior art, a technical solution of an expression feedback method and a smart robot is provided, aiming to enrich the content of the information interaction between the smart robot and the user and thereby improve the user experience.
The technical solution specifically includes:
An expression feedback method, applicable to a smart robot provided with an image collection device, including:
Step S1, the smart robot collects image information with the image collection device;
Step S2, the smart robot detects whether face information representing a human face exists in the image information:
if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
if no, the method returns to step S1;
Step S3, a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output;
the pieces of feature point information are respectively associated with the parts of the face represented by the face information;
Step S4, a pre-trained first recognition model determines, according to the feature point information, whether the face information represents a smiling face, and then exits:
if yes, the smart robot outputs preset expression feedback information;
if no, the method exits.
Preferably, in the expression feedback method, a second recognition model for detecting the face information is pre-trained in the smart robot;
step S2 then specifically is:
the smart robot uses the second recognition model to detect whether the face information exists in the image information:
if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
if no, the method returns to step S1.
Preferably, in the expression feedback method, a third recognition model for predicting the feature point information is pre-trained in the smart robot;
step S3 then specifically is:
the third recognition model is used to predict and output, according to the location information and the size information, the plurality of pieces of feature point information in the face information.
Preferably, in the expression feedback method, in step S3, the third recognition model is used to predict the 68 pieces of feature point information associated with the face information.
Preferably, in the expression feedback method, step S4 specifically includes:
Step S41, inputting all the feature point information predicted in step S3 into the first recognition model;
Step S42, determining with the first recognition model whether the face information represents a smiling face:
if yes, the smart robot outputs preset expression feedback information;
if no, the method exits.
Preferably, in the expression feedback method, in step S4 the expression feedback information includes:
a preset emoticon displayed by the display device of the smart robot for expressing a happy mood; and/or
a preset sound played by the voice playback device of the smart robot for expressing a happy mood.
A smart robot, using the expression feedback method described above.
The beneficial effect of the above technical solution is that an expression feedback method is provided that can enrich the content of the information interaction between the smart robot and the user, thereby improving the user experience.
Brief description of the drawings
FIG. 1 is a schematic overall flow chart of an expression feedback method in a preferred embodiment of the present invention;
FIG. 2 is a schematic flow chart of recognizing a smiling face, on the basis of FIG. 1, in a preferred embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other.
The present invention is further described below with reference to the drawings and specific embodiments, which are not intended as limitations of the present invention.
In a preferred embodiment of the present invention, based on the above problems in the prior art, an expression feedback method applicable to a smart robot is provided. As shown in FIG. 1, the specific steps of the method include:
Step S1, the smart robot collects image information with an image collection device;
Step S2, the smart robot detects whether face information representing a human face exists in the image information:
if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
if no, the method returns to step S1;
Step S3, a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output;
Step S4, a pre-trained first recognition model determines, according to the feature point information, whether the face information represents a smiling face:
if yes, the smart robot outputs preset expression feedback information and then exits;
if no, the method exits.
In a specific embodiment, the smart robot uses face recognition to decide what expression information it should feed back. Specifically, the user needs to stand in the collection area of the image collection device (i.e., the camera) of the smart robot, for example directly in front of the image collection device.
In this embodiment, the smart robot first collects image information in its collection area with the image collection device and then determines whether face information is present in the image information: if present, the face information is received, and its location information and size information are extracted at the same time. Specifically, the location information refers to the specific position of the face information within the viewfinder frame of the image collection device; the size information refers to the size of the face represented by the face information.
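The location and size extraction described above can be sketched as follows. The bounding-box representation is an assumption (the patent only says that the position within the viewfinder frame and the face size are obtained), and the largest-face tie-break is likewise illustrative:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Detection:
    """A face bounding box in the viewfinder frame (hypothetical representation)."""
    x: int  # left edge
    y: int  # top edge
    w: int  # width of the detected face
    h: int  # height of the detected face


def location_and_size(det: Detection) -> Tuple[Tuple[int, int], int]:
    """Derive the location info (centre point in the frame) and size info (area)."""
    centre = (det.x + det.w // 2, det.y + det.h // 2)
    area = det.w * det.h
    return centre, area


def pick_face(detections: List[Detection]) -> Optional[Detection]:
    """If several faces are detected, keep the largest one (an assumption; the
    patent describes a single user standing in front of the camera)."""
    return max(detections, key=lambda d: d.w * d.h, default=None)
```

Returning `None` when no face is found corresponds to the "if no, return to step S1" branch of step S2.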
In this embodiment, after the face information and its location and size information are obtained, a plurality of pieces of feature point information of the face information can be predicted from that location and size. Specifically, the pieces of feature point information are respectively associated with the parts of the face represented by the face information; for example, different pieces of feature point information are associated with the eyebrows, eyes, nose, mouth, and the overall face contour of the face represented by the face information.
In this embodiment, after the pieces of feature point information are predicted, the pre-trained first recognition model is used to recognize, from all of them, whether the face represented by the face information is a smiling face: if yes, the smart robot outputs preset expression feedback information according to the smiling face; if no, the method exits directly. Further, if the face information does not represent a smiling face, the smart robot outputs other feedback information according to a preset policy; that process is not part of the technical solution of the present invention and is therefore not described.
In a preferred embodiment of the present invention, a second recognition model for detecting face information is pre-trained in the smart robot;
step S2 then specifically is:
the smart robot uses the second recognition model to detect whether face information exists in the image information:
if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
if no, the method returns to step S1.
Specifically, in a preferred embodiment of the present invention, the second recognition model is pre-trained in the smart robot by inputting a number of different training samples. The second recognition model is a face detector that can be used to determine whether face information exists in the image information collected by the image collection device. Many technical solutions for training a face detector exist in the prior art and are therefore not repeated here.
In a preferred embodiment of the present invention, a third recognition model for predicting feature point information is pre-trained in the smart robot;
step S3 then specifically is:
the third recognition model is used to predict and output, according to the location information and the size information, the plurality of pieces of feature point information in the face information.
Specifically, in a preferred embodiment of the present invention, the third recognition model, used for predicting the plurality of pieces of feature point information in the face information, is likewise pre-trained in the smart robot by inputting a number of different training samples. In other words, the third recognition model is a feature point prediction model. With the third recognition model, a plurality of pieces of feature point information on the face can be predicted from the face information together with its position, size, and similar information. In a preferred embodiment of the present invention, 68 pieces of feature point information are predicted from the face information with the third recognition model; the parts of the face covered by these feature points include, but are not limited to, the eyebrows, eyes, nose, mouth, and the overall contour of the face. In other words, the 68 pieces of feature point information predicted from the face information by the third recognition model can each be used to represent the information of a part of the face. According to the recognition processing of the third recognition model, the 68 predicted pieces of feature point information are then output.
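The patent does not define how the 68 points are laid out. The widely used 68-landmark annotation scheme (the iBUG 300-W convention adopted by common landmark predictors such as dlib's) groups the indices as sketched below, which matches the parts the text lists (eyebrows, eyes, nose, mouth, overall contour); treat the exact index ranges as an assumption, not part of the patent:

```python
# Index ranges of the common 68-point facial landmark scheme
# (iBUG 300-W convention; an assumption, not specified by the patent).
LANDMARK_PARTS = {
    "jaw_contour":   range(0, 17),   # overall face contour
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}


def part_of(index: int) -> str:
    """Map a landmark index (0-67) to the face part it describes."""
    for part, idxs in LANDMARK_PARTS.items():
        if index in idxs:
            return part
    raise ValueError(f"landmark index out of range: {index}")
```

The ranges partition exactly 68 indices, so each predicted feature point represents the information of one face part, as the text describes.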
In a preferred embodiment of the present invention, as shown in FIG. 2, step S4 specifically includes:
Step S41, inputting all the feature point information predicted in step S3 into the first recognition model;
Step S42, determining with the first recognition model whether the face information represents a smiling face:
if yes, the smart robot outputs preset expression feedback information;
if no, the method exits.
Specifically, in a preferred embodiment of the present invention, the first recognition model is likewise formed by inputting a large number of training samples; it is a smile recognition model for identifying whether a face is a smiling face. The smile recognition model can judge, from the various pieces of feature point information captured on a person's face, whether those features are the features of a smiling face, and thereby determine whether the face is a smiling face.
In a preferred embodiment of the present invention, in step S41, all the feature point information predicted by the third recognition model is input into the first recognition model; for example, the 68 pieces of feature point information obtained by prediction above are all input into the first recognition model as its input data. The first recognition model then makes a determination based on the feature point information, for example according to one or more of the following:
1) whether the feature point information indicates that the mouth on the face presents a shape corresponding to a smiling face (for example, upturned mouth corners or an open mouth);
2) whether the feature point information indicates that the smile-muscle region of the face presents a shape corresponding to a smiling face (for example, muscles gathered and raised);
3) whether the feature point information indicates that the eye region of the face presents a shape corresponding to a smiling face (for example, narrowed eyes).
The judgment basis may also include other smile features that can usually be observed or obtained through experimental data, which are not repeated here.
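Criterion 1) above (upturned mouth corners) can be checked geometrically from landmark coordinates. The sketch below assumes image coordinates (y grows downward), the 20 mouth points of the common 68-point scheme re-based to indices 0-19, and an illustrative lift threshold; none of these values come from the patent:

```python
from typing import List, Tuple

Point = Tuple[float, float]


def mouth_corners_upturned(mouth: List[Point], min_lift: float = 2.0) -> bool:
    """Judge criterion 1): both mouth corners sit noticeably above the midpoint
    of the lips. In image coordinates y grows downward, so 'above' means a
    smaller y value. Indices follow the 68-point scheme's 20 mouth points
    re-based to 0-19 (an assumption)."""
    left_corner = mouth[0]    # outer left corner (point 48 in the 68-pt scheme)
    right_corner = mouth[6]   # outer right corner (point 54)
    top_mid = mouth[3]        # upper-lip midpoint (point 51)
    bottom_mid = mouth[9]     # lower-lip midpoint (point 57)
    mouth_centre_y = (top_mid[1] + bottom_mid[1]) / 2.0
    return (mouth_centre_y - left_corner[1] >= min_lift
            and mouth_centre_y - right_corner[1] >= min_lift)
```

A trained smile classifier would combine several such geometric cues (and criteria 2 and 3) rather than rely on one hand-set threshold; this function only illustrates how one cue can be derived from the feature points.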
In a preferred embodiment of the present invention, if the input of the above 68 pieces of feature point information shows that the current face matches the smile features, the face can be judged to be a smiling face, and the expression feedback information corresponding to a smiling face is then output according to the judgment result.
In a preferred embodiment of the present invention, the expression feedback information may include one or more of the following:
a preset emoticon displayed by the display device of the smart robot for expressing a happy mood; and/or
a preset sound played by the voice playback device of the smart robot for expressing a happy mood.
Specifically, in a preferred embodiment of the present invention, the emoticon described above may be an emoticon displayed on a display device (for example, a display screen) of the smart robot, such as a stick-figure smiley face, a smiling face shown directly on the display, or another preset expression for conveying a happy mood.
The sound described above may be a preset voice played from a voice playback device (for example, a speaker) of the smart robot, such as preset laughter or speech with a happy tone, or another preset sound.
In a preferred embodiment of the present invention, the expression feedback information may further include other feedback information that can be output and perceived by the user, which is not repeated here.
In summary, in the technical solution of the present invention, different recognition models are used: the face information in the image information collected by the image collection device is first recognized by a face detector, a plurality of pieces of feature point information in the face information are then predicted by a feature point prediction model, all the feature point information is input into a smile recognition model, and the smile recognition model identifies whether the current face information represents a smiling face: if it does, the smart robot outputs expression feedback information, perceivable by the user, for expressing a happy mood; if not, the method exits directly. The above technical solution can enrich the information interaction between the smart robot and the user and improve the user experience.
In a preferred embodiment of the present invention, a smart robot is further provided, which uses the expression feedback method described above.
The above is only a preferred embodiment of the present invention and does not thereby limit the implementations or the scope of protection of the present invention. Those skilled in the art should appreciate that all solutions obtained through equivalent substitutions and obvious variations made on the basis of the specification and drawings of the present invention fall within the scope of protection of the present invention.

Claims (7)

  1. An expression feedback method, applicable to a smart robot provided with an image collection device, characterized by including:
    Step S1, the smart robot collects image information with the image collection device;
    Step S2, the smart robot detects whether face information representing a human face exists in the image information:
    if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
    if no, the method returns to step S1;
    Step S3, a plurality of pieces of feature point information in the face information are predicted according to the location information and the size information and output;
    the pieces of feature point information are respectively associated with the parts of the face represented by the face information;
    Step S4, a pre-trained first recognition model determines, according to the feature point information, whether the face information represents a smiling face:
    if yes, the smart robot outputs preset expression feedback information and then exits;
    if no, the method exits.
  2. The expression feedback method according to claim 1, characterized in that a second recognition model for detecting the face information is pre-trained in the smart robot;
    step S2 then specifically is:
    the smart robot uses the second recognition model to detect whether the face information exists in the image information:
    if yes, the location information and size information associated with the face information are obtained, and the method proceeds to step S3;
    if no, the method returns to step S1.
  3. The expression feedback method according to claim 1, characterized in that a third recognition model for predicting the feature point information is pre-trained in the smart robot;
    step S3 then specifically is:
    the third recognition model is used to predict and output, according to the location information and the size information, the plurality of pieces of feature point information in the face information.
  4. The expression feedback method according to claim 3, characterized in that in step S3, the third recognition model is used to predict the 68 pieces of feature point information associated with the face information.
  5. The expression feedback method according to claim 1, characterized in that step S4 specifically includes:
    Step S41, inputting all the feature point information predicted in step S3 into the first recognition model;
    Step S42, determining with the first recognition model whether the face information represents a smiling face:
    if yes, the smart robot outputs preset expression feedback information;
    if no, the method exits.
  6. The expression feedback method according to claim 1, characterized in that in step S4, the expression feedback information includes:
    a preset emoticon displayed by the display device of the smart robot for expressing a happy mood; and/or
    a preset sound played by the voice playback device of the smart robot for expressing a happy mood.
  7. A smart robot, characterized by using the expression feedback method according to any one of claims 1 to 6.
PCT/CN2017/074054 2016-02-23 2017-02-20 Expression feedback method and smart robot WO2017143951A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/999,762 US11819996B2 (en) 2016-02-23 2017-02-20 Expression feedback method and smart robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610099484.9A CN107103269A (zh) 2016-02-23 2016-02-23 Expression feedback method and smart robot
CN201610099484.9 2016-02-23

Publications (1)

Publication Number Publication Date
WO2017143951A1 true WO2017143951A1 (zh) 2017-08-31

Family

ID=59658321

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/074054 WO2017143951A1 (zh) 2016-02-23 2017-02-20 Expression feedback method and smart robot

Country Status (4)

Country Link
US (1) US11819996B2 (zh)
CN (1) CN107103269A (zh)
TW (1) TW201826167A (zh)
WO (1) WO2017143951A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446617A (zh) * 2018-03-09 2018-08-24 South China University of Technology Fast face detection method resistant to profile-face interference

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034069B (zh) * 2018-07-27 2021-04-09 Beijing Bytedance Network Technology Co., Ltd. Method and apparatus for generating information
CN110480656B (zh) * 2019-09-09 2021-09-28 National Research Center for Rehabilitation Technical Aids Companion robot, and companion robot control method and device
CN115042893B (zh) * 2022-06-13 2023-07-18 Beihang University Micro crawling robot based on MEMS fabrication

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877056A (zh) * 2009-12-21 2010-11-03 Beijing Vimicro Corporation Facial expression recognition method and system, and training method and system for an expression classifier
CN103488293A (zh) * 2013-09-12 2014-01-01 Beihang University Human-computer emotional interaction system and method based on expression recognition
CN103679203A (zh) * 2013-12-18 2014-03-26 江苏久祥汽车电器集团有限公司 Face detection and emotion recognition system and method for a robot
CN104102346A (zh) * 2014-07-01 2014-10-15 Huazhong University of Science and Technology Household information collection and user emotion recognition device and working method thereof

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100423911C (zh) * 2000-10-13 2008-10-08 Sony Corporation Robot apparatus and behavior control method therefor
EP1262844A1 (en) * 2001-06-01 2002-12-04 Sony International (Europe) GmbH Method for controlling a man-machine-interface unit
US7113848B2 (en) * 2003-06-09 2006-09-26 Hanson David F Human emulation robot system
US8666198B2 (en) * 2008-03-20 2014-03-04 Facebook, Inc. Relationship mapping employing multi-dimensional context including facial recognition
US8848068B2 (en) * 2012-05-08 2014-09-30 Oulun Yliopisto Automated recognition algorithm for detecting facial expressions
US11158403B1 (en) * 2015-04-29 2021-10-26 Duke University Methods, systems, and computer readable media for automated behavioral assessment
US11308313B2 (en) * 2018-04-25 2022-04-19 Shutterfly, Llc Hybrid deep learning method for recognizing facial expressions
JP6675564B1 (ja) * 2019-05-13 2020-04-01 Micronet Co., Ltd. Face recognition system, face recognition method, and face recognition program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877056A (zh) * 2009-12-21 2010-11-03 Beijing Vimicro Corporation Facial expression recognition method and system, and training method and system for an expression classifier
CN103488293A (zh) * 2013-09-12 2014-01-01 Beihang University Human-computer emotional interaction system and method based on expression recognition
CN103679203A (zh) * 2013-12-18 2014-03-26 江苏久祥汽车电器集团有限公司 Face detection and emotion recognition system and method for a robot
CN104102346A (zh) * 2014-07-01 2014-10-15 Huazhong University of Science and Technology Household information collection and user emotion recognition device and working method thereof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446617A (zh) * 2018-03-09 2018-08-24 South China University of Technology Fast face detection method resistant to profile-face interference

Also Published As

Publication number Publication date
TW201826167A (zh) 2018-07-16
US11819996B2 (en) 2023-11-21
CN107103269A (zh) 2017-08-29
US20210291380A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
JP5323770B2 (ja) User instruction acquisition device, user instruction acquisition program, and television receiver
US12069345B2 (en) Characterizing content for audio-video dubbing and other transformations
JP5201050B2 (ja) Conference support device, conference support method, conference system, and conference support program
JP6101684B2 (ja) Method and system for assisting a patient
WO2017143951A1 (zh) Expression feedback method and smart robot
US20130162524A1 (en) Electronic device and method for offering services according to user facial expressions
EP2925005A1 (en) Display apparatus and user interaction method thereof
Danner et al. Quantitative analysis of multimodal speech data
KR20160054392A (ko) 전자 장치 및 그 동작 방법
WO2020215590A1 (zh) Intelligent photographing device and biometric-recognition-based scene generation method therefor
CN108733209A (zh) Human-computer interaction method and apparatus, robot, and storage medium
CN105930035A (zh) Method and apparatus for displaying an interface background
Paleari et al. Evidence theory-based multimodal emotion recognition
CN105528080A (zh) Method and apparatus for controlling a mobile terminal
JP2002023716A (ja) Presentation system and recording medium
JP2015104078A (ja) Imaging apparatus, imaging system, server, imaging method, and imaging program
WO2021134250A1 (zh) Emotion management method, device, and computer-readable storage medium
JP7206741B2 (ja) Health condition determination system, health condition determination device, server, health condition determination method, and program
WO2020175969A1 (ko) Emotion recognition device and emotion recognition method
JP5847646B2 (ja) Television control device, television control method, and television control program
JP2017182261A (ja) Information processing apparatus, information processing method, and program
Galvan et al. Audiovisual affect recognition in spontaneous filipino laughter
JP6838739B2 (ja) Recent memory support device
CN113450804B (zh) Speech visualization method and apparatus, projection device, and computer-readable storage medium
JP2000194252A (ja) Ideal behavior support device, method, and system, and storage medium

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17755780

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17755780

Country of ref document: EP

Kind code of ref document: A1