US10664511B2 - Fast identification method and household intelligent robot - Google Patents

Info

Publication number
US10664511B2
US10664511B2
Authority
US
United States
Prior art keywords
user
intelligent robot
household intelligent
facial image
personal file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/766,890
Other languages
English (en)
Other versions
US20180293236A1 (en)
Inventor
Wenjie Xiang
Lei Zhu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yutou Technology Hangzhou Co Ltd
Original Assignee
Yutou Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yutou Technology Hangzhou Co Ltd filed Critical Yutou Technology Hangzhou Co Ltd
Assigned to YUTOU TECHNOLOGY (HANGZHOU) CO., LTD. reassignment YUTOU TECHNOLOGY (HANGZHOU) CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: XIANG, Wenjie, ZHU, LEI
Publication of US20180293236A1 publication Critical patent/US20180293236A1/en
Application granted granted Critical
Publication of US10664511B2 publication Critical patent/US10664511B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • G06F16/433Query formulation using audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06K9/00228
    • G06K9/00255
    • G06K9/00288
    • G06K9/00664
    • G06K9/00892
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques

Definitions

  • the invention relates to the field of robots, and more particularly, to a household intelligent robot and a rapid identification method applicable to the household intelligent robot.
  • the present invention provides a rapid identification method and a household intelligent robot capable of rapidly identifying family members and providing personalized services to each family member.
  • a rapid identification method applicable to a household intelligent robot, comprising:
  • Step S 100 pre-setting a plurality of personal files corresponding to different users
  • Step S 200 collecting identification information associated with features of the user, and establishing an association between the identification information and the personal file corresponding to the user;
  • Step S 300 the household intelligent robot collecting the features of the user and matching the user features with stored identification information, so as to identify the user;
  • if the user is successfully identified in Step S 300 , executing Step S 400 ; otherwise, exiting;
  • Step S 400 retrieving the corresponding personal file according to the identified user, and working according to the personal file.
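The four steps above can be sketched as a minimal flow (the class and method names below are invented for illustration, and a simple equality check stands in for real voiceprint or facial-image model matching):

```python
from dataclasses import dataclass, field

@dataclass
class PersonalFile:
    # Step S100: a pre-set personal file for one user.
    name: str
    history: list = field(default_factory=list)
    favorites: list = field(default_factory=list)

class HouseholdRobot:
    def __init__(self):
        self.files = {}    # user id -> PersonalFile
        self.id_info = {}  # user id -> stored identification information

    def enroll(self, user_id, personal_file, id_info):
        # Step S200: associate identification information with the personal file.
        self.files[user_id] = personal_file
        self.id_info[user_id] = id_info

    def identify(self, features):
        # Step S300: match collected user features against stored identification info.
        for user_id, info in self.id_info.items():
            if info == features:  # stand-in for real model matching
                return user_id
        return None               # no match: identification fails

    def serve(self, features):
        # Step S400: retrieve the personal file of the identified user, or exit.
        user_id = self.identify(features)
        return self.files[user_id] if user_id is not None else None
```
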
  • the above-mentioned rapid identification method wherein the user starts up the household intelligent robot by an activated voice, and sends a command to the household intelligent robot.
  • the above-mentioned rapid identification method wherein the identification information comprises a voiceprint model.
  • the above-mentioned rapid identification method wherein the identification information comprises a facial image model.
  • the above-mentioned rapid identification method wherein the voiceprint model collection methods comprise a first active collection and a first automatic collection;
  • in the first active collection, the household intelligent robot collects the activated voice of the user in advance to acquire the voiceprint model of the user;
  • in the first automatic collection, the household intelligent robot automatically acquires the voiceprint model of the user from the initial activated voice used by the user.
  • the above-mentioned rapid identification method wherein the facial image model collection methods comprise a second active collection and a second automatic collection;
  • in the second active collection, the household intelligent robot collects a facial image of the user in advance to acquire the facial image model of the user;
  • in the second automatic collection, the household intelligent robot automatically reads the facial image of the user and acquires the facial image model after acquiring the voiceprint model of the user.
  • the above-mentioned rapid identification method wherein the personal file comprises a history record and a favorites list; the household intelligent robot receives a command of the identified user, and executes the command according to the history record and the favorites list in the personal file of the identified user.
  • the above-mentioned rapid identification method comprising: providing a storage unit for storing a plurality of pre-recorded voices associated with time, wherein the personal file comprises a name of the user; and the household intelligent robot automatically performs facial image identification on the user, retrieves the name of the user in the personal file according to an identification result, selects a corresponding pre-recorded voice stored in the storage unit according to the current time, and finally splices the name with the pre-recorded voice by machine utterance before playing it.
  • the above-mentioned rapid identification method comprising: providing a camera for reading the facial image of the user.
  • the present invention further provides a household intelligent robot, adopting the above-mentioned rapid identification method
  • Beneficial effects of the present invention are as follows: by adopting the technical solution, different users can be identified rapidly and both identification performance and identification rate are improved, so that the household intelligent robot becomes more intelligent and can provide personalized services according to different users' features, giving it wide application prospects.
  • FIG. 1 is a flowchart of an embodiment of a rapid identification method and a household intelligent robot of the present invention.
  • “around”, “about” or “approximately” shall generally mean within 20 percent, preferably within 10 percent, and more preferably within 5 percent of a given value or range. Numerical quantities given herein are approximate, meaning that the term “around”, “about” or “approximately” can be inferred if not expressly stated.
  • the term “plurality” means a number greater than one.
  • a rapid identification method applicable to a household intelligent robot, as shown in FIG. 1 , comprising:
  • Step S 100 pre-setting a plurality of personal files corresponding to different users
  • Step S 200 collecting identification information associated with features of the user, and establishing an association between the identification information and the personal file corresponding to the user;
  • the identification information comprises a voiceprint model.
  • the voiceprint model collection methods comprise a first active collection and a first automatic collection;
  • in the first active collection, the household intelligent robot collects the activated voice of the user in advance to acquire the voiceprint model of the user;
  • active collection means entering information and settings into the robot in advance for future use, such as setting the activated voice and collecting the identification information of each family member when the household intelligent robot is used for the first time;
  • in the first automatic collection, the household intelligent robot automatically acquires the voiceprint model of the user from the initial activated voice used by the user.
  • the collection of identification information for a new user can be achieved through the automatic collection of the household intelligent robot. For example, when the new user gives a command to the household intelligent robot for the first time (e.g., calls the name set for the household intelligent robot), the robot is activated by the activated voice and collects the user's voice to produce a voiceprint model. While responding to the user's command, the robot collects the identification information and creates a personal file for the new user at the same time, saving the collected voiceprint model as the identification information.
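The first automatic collection described above can be sketched as follows (an illustrative sketch: the wake-word substring check and `hash`-based voiceprint are stand-ins for real activated-voice detection and voiceprint modeling, and all names are invented):

```python
class AutoEnrollRobot:
    """Automatically enrolls a new user from the activated voice."""

    def __init__(self, wake_word):
        self.wake_word = wake_word
        self.profiles = {}  # voiceprint -> personal file (history, favorites)

    def _voiceprint(self, audio):
        # Stand-in for extracting a voiceprint model from the audio.
        return hash(audio)

    def on_utterance(self, audio, text):
        # The robot only activates when it hears its own name (the fixed voice).
        if self.wake_word not in text:
            return None
        vp = self._voiceprint(audio)
        if vp not in self.profiles:
            # First command from a new user: create a personal file and
            # save the collected voiceprint as the identification information.
            self.profiles[vp] = {"history": [], "favorites": []}
        return self.profiles[vp]
```
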
  • the identification information comprises a facial image model.
  • the facial image model collection methods comprise a second active collection and a second automatic collection;
  • in the second active collection, the household intelligent robot collects a facial image of the user in advance to acquire the facial image model of the user;
  • in the second automatic collection, the household intelligent robot automatically reads the facial image of the user and acquires the facial image model after acquiring the voiceprint model of the user.
  • the collection of the face image model for the new user is also included to facilitate the identification of the user when using the household intelligent robot next time.
  • Step S 300 the household intelligent robot collects the features of the user and matches the user features with stored identification information, so as to identify the user;
  • if the user is not successfully identified in Step S 300 , exiting;
  • when the household intelligent robot is identifying the user, if, for example, the collected facial image is so blurred that facial recognition cannot be performed, recognition of the user's voiceprint is performed automatically; if the user's identity is established through the voiceprint, the household intelligent robot can still identify the user by voice recognition even though the facial image recognition was unsuccessful.
  • in this way, the household intelligent robot successfully identifies the user whenever either modality succeeds; only when neither the facial recognition nor the voiceprint recognition is successful does the household intelligent robot fail to identify the user, after which identification by voice or facial image recognition can be attempted again.
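The fallback between the two modalities can be sketched as follows (dictionary lookups stand in for the real recognition models; a blurred facial image simply fails to match):

```python
def identify_user(face_image, voice_sample, face_db, voice_db):
    """Try facial recognition first; if it fails (e.g. the image is too
    blurred to match anything), fall back to voiceprint recognition.
    Each db maps a stored model to a user id; .get() stands in for
    real recognition scoring."""
    user = face_db.get(face_image)
    if user is None:
        user = voice_db.get(voice_sample)
    return user  # None only when both modalities fail
```
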
  • Step S 400 retrieving the corresponding personal file according to the identified user, and working according to the personal file.
  • the user starts up the household intelligent robot by an activated voice, and sends a command to the household intelligent robot.
  • when the user gives a command to the household intelligent robot, the robot is generally activated by a fixed voice in order to distinguish the command from the user's other speech; for example, the user gives the robot a nice name and calls out that name as one would call a family member. With these early settings, the household intelligent robot is activated when it hears its own name. Since the activated voice of the robot is fixed, voice recognition can be based on it: the user activates the household intelligent robot by speaking the activated voice, and when the robot detects a sound containing its own name, it detects the voiceprint. Voiceprint detection based on a fixed phrase therefore has higher accuracy.
  • the personal file comprises a history record and a favorites list
  • the household intelligent robot receives a command of the identified user, and executes the command according to the history record and the favorites list in the personal file of the identified user.
  • the robot may identify the user based on the activated voice, record the user's playlist and analyze it. After the user uses the robot for a period of time, the robot can make an accurate recommendation based on the history record and the favorites list of the user.
  • the robot may distinguish one from another in the family through the voiceprint, thereby recommending different music for different family members.
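A per-user recommendation of this kind can be sketched as follows (the personal file keeps the history record and favorites list; the frequency heuristic is an assumption for illustration, as the patent does not specify a recommendation algorithm):

```python
from collections import Counter

def recommend(personal_file, k=3):
    # Weight each track by how often it appears in the user's history
    # record, with a small bonus for tracks on the favorites list.
    counts = Counter(personal_file["history"])
    for track in personal_file["favorites"]:
        counts[track] += 1
    return [track for track, _ in counts.most_common(k)]
```
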
  • a storage unit is provided for storing a plurality of pre-recorded voices associated with time, wherein the personal file comprises a name of the user; the household intelligent robot automatically performs facial image identification on the user, retrieves the name of the user in the personal file according to an identification result, selects a corresponding pre-recorded voice stored in the storage unit according to the current time, and finally splices the name with the pre-recorded voice by machine utterance before playing it.
  • for example, the robot detects someone approaching through an infrared camera device, actively self-activates, identifies the user through the facial image to obtain the user's personal file, and obtains a corresponding pre-recorded voice from the storage unit according to the current time.
  • the household intelligent robot may then say the name in the personal file through a built-in TTS (Text To Speech) engine player, splicing it with the obtained pre-recorded voice to form greetings such as "Good evening, XXX"; or play the user's favorite music based on the history record in the personal file.
  • the greetings can be saved as character strings in the storage unit and played directly by the TTS engine player, thereby reducing the storage space required for the storage unit.
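The time-based greeting selection and name splicing can be sketched as follows (the time-slot boundaries and string splicing are illustrative assumptions; on the robot the splice would be performed on audio by the TTS engine player):

```python
def greet(name, hour, prerecorded):
    # Pick the pre-recorded greeting associated with the current time,
    # then splice the user's name onto it before playing.
    if 5 <= hour < 12:
        slot = "morning"
    elif 12 <= hour < 18:
        slot = "afternoon"
    else:
        slot = "evening"
    return f"{prerecorded[slot]}, {name}"
```
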
  • a camera is provided to read the user's facial image.
  • the camera synchronously detects the user's face while the voiceprint is being detected. If the user's facial image is not detected, the voiceprint data is saved separately; if it is detected, the user's face and the voiceprint data are saved simultaneously and associated with the personal file. After the user's confirmation via interaction between the robot and the user, associations among the voiceprint, the facial image and the personal file are established.
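The synchronous face-and-voiceprint capture described above can be sketched as follows (the store layout and function name are invented for this sketch):

```python
def save_detection(voiceprint, face_image, store):
    # While the voiceprint is detected, check whether the camera also sees a face.
    if face_image is None:
        # No face detected: keep the voiceprint on its own for now.
        store["unpaired_voiceprints"].append(voiceprint)
    else:
        # Face and voiceprint captured together: save them as a pair so they
        # can be associated with one personal file after user confirmation.
        store["pairs"].append((voiceprint, face_image))
    return store
```
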
  • the present invention further provides a household intelligent robot, adopting the above rapid identification method.
  • either the voiceprint model or the face model can be used for identification. Identification carried out in multiple ways is conducive to improving the accuracy and efficiency of the identification. If the user interacts with the robot by activating the robot through the activated voice, the user can be accurately identified through voiceprint recognition; if the user does not use activated voice, the user can also be identified through the face.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
US15/766,890 2015-10-09 2016-10-09 Fast identification method and household intelligent robot Active 2036-12-21 US10664511B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201510650110.7 2015-10-09
CN201510650110.7A CN106570443A (zh) 2015-10-09 2015-10-09 Fast identification method and household intelligent robot
CN201510650110 2015-10-09
PCT/CN2016/101567 WO2017059815A1 (zh) 2015-10-09 2016-10-09 Fast identification method and household intelligent robot

Publications (2)

Publication Number Publication Date
US20180293236A1 US20180293236A1 (en) 2018-10-11
US10664511B2 true US10664511B2 (en) 2020-05-26

Family

ID=58487272

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/766,890 Active 2036-12-21 US10664511B2 (en) 2015-10-09 2016-10-09 Fast identification method and household intelligent robot

Country Status (6)

Country Link
US (1) US10664511B2 (ja)
EP (1) EP3361410A4 (ja)
JP (1) JP6620230B2 (ja)
CN (1) CN106570443A (ja)
TW (1) TWI621470B (ja)
WO (1) WO2017059815A1 (ja)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570443A (zh) 2015-10-09 2017-04-19 Yutou Technology (Hangzhou) Co., Ltd. Fast identification method and household intelligent robot
WO2018216180A1 * 2017-05-25 2018-11-29 Mitsubishi Electric Corp. Speech recognition apparatus and speech recognition method
CN107623614B * 2017-09-19 2020-12-08 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for pushing information
WO2019084963A1 * 2017-11-06 2019-05-09 Shenzhen Waterworld Co., Ltd. Robot, and service method and apparatus thereof
WO2019176018A1 * 2018-03-14 2019-09-19 Uhuru Corp. AI speaker system, control method for the AI speaker system, and program
WO2019181144A1 * 2018-03-20 2019-09-26 Sony Corp. Information processing apparatus, information processing method, and robot apparatus
CN108765921A * 2018-04-04 2018-11-06 Kunshan Industrial Technology Research Institute Intelligent Manufacturing Technology Co., Ltd. Intelligent patrol method for a patrol robot based on visual semantic analysis
CN110390938A * 2018-04-20 2019-10-29 BYD Co., Ltd. Voiceprint-based voice processing method and apparatus, and terminal device
CN110290846B * 2018-06-29 2021-09-24 SZ DJI Technology Co., Ltd. Virtual battle processing method, server, and movable platform
CN109688183B * 2018-08-20 2022-08-19 Shenzhen OneConnect Smart Technology Co., Ltd. Group-control device identification method, apparatus, device, and computer-readable storage medium
CN109446876B * 2018-08-31 2020-11-06 Baidu Online Network Technology (Beijing) Co., Ltd. Sign language information processing method and apparatus, electronic device, and readable storage medium
CN111177329A * 2018-11-13 2020-05-19 Qiku Internet Network Technology (Shenzhen) Co., Ltd. User interaction method for an intelligent terminal, intelligent terminal, and storage medium
CN111292734B * 2018-12-06 2024-03-08 Alibaba Group Holding Ltd. Voice interaction method and apparatus
CN109882985B * 2018-12-26 2020-07-28 Gree Electric Appliances, Inc. of Zhuhai Voice broadcast method, apparatus, storage medium, and air conditioner
US11151993B2 (en) * 2018-12-28 2021-10-19 Baidu Usa Llc Activating voice commands of a smart display device based on a vision-based mechanism

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6347261B1 (en) * 1999-08-04 2002-02-12 Yamaha Hatsudoki Kabushiki Kaisha User-machine interface system for enhanced interaction
US20060047362A1 (en) * 2002-12-02 2006-03-02 Kazumi Aoyama Dialogue control device and method, and robot device
US20130078600A1 (en) * 2011-08-29 2013-03-28 Worcester Polytechnic Institute System and method of pervasive developmental disorder interventions
CN103345232A (zh) 2013-07-15 2013-10-09 Meng Fanzhong Personalized smart home control method and system
US8706827B1 (en) * 2012-06-21 2014-04-22 Amazon Technologies, Inc. Customized speech generation
US20140237576A1 (en) * 2013-01-29 2014-08-21 Tencent Technology (Shenzhen) Company Limited User authentication method and apparatus based on audio and video data
US20150088310A1 (en) * 2012-05-22 2015-03-26 Intouch Technologies, Inc. Social behavior rules for a medical telepresence robot
CN104700018A (zh) 2015-03-31 2015-06-10 Jiangsu Xianghe Electronic Technology Co., Ltd. Identification method for an intelligent robot
CN204462847U (zh) 2014-12-28 2015-07-08 Qingdao Tongchan Software Technology Co., Ltd. Multifunctional hotel service robot
CN104951077A (zh) 2015-06-24 2015-09-30 Baidu Online Network Technology (Beijing) Co., Ltd. Artificial-intelligence-based human-computer interaction method and apparatus, and terminal device
US20160156771A1 (en) * 2014-11-28 2016-06-02 Samsung Electronics Co., Ltd. Electronic device, server, and method for outputting voice
US9538005B1 (en) * 2014-09-19 2017-01-03 Amazon Technologies, Inc. Automated response system
CN106570443A (zh) 2015-10-09 2017-04-19 Yutou Technology (Hangzhou) Co., Ltd. Fast identification method and household intelligent robot
US20180136615A1 (en) * 2016-11-15 2018-05-17 Roborus Co., Ltd. Concierge robot system, concierge service method, and concierge robot
US20180144649A1 (en) * 2010-06-07 2018-05-24 Affectiva, Inc. Smart toy interaction using image analysis
US20180143645A1 (en) * 2016-11-18 2018-05-24 Robert Bosch Start-Up Platform North America, LLC, Series 1 Robotic creature and method of operation
US20180187969A1 (en) * 2017-01-03 2018-07-05 Samsung Electronics Co., Ltd. Refrigerator
US10127226B2 (en) * 2013-10-01 2018-11-13 Softbank Robotics Europe Method for dialogue between a machine, such as a humanoid robot, and a human interlocutor utilizing a plurality of dialog variables and a computer program product and humanoid robot for implementing such a method

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1079615A3 * 1999-08-26 2002-09-25 Matsushita Electric Industrial Co., Ltd. System for identifying and adapting a TV-user profile by means of speech technology
JP2001277166A * 2000-03-31 2001-10-09 Sony Corp Robot and action determination method for the robot
EP1395803B1 * 2001-05-10 2006-08-02 Koninklijke Philips Electronics N.V. Background learning of speaker voices
JP2003122392A * 2001-10-16 2003-04-25 Yamatake Corp Method and apparatus for determining the person providing voice input
JP3951235B2 * 2003-02-19 2007-08-01 Sony Corp Learning apparatus, learning method, and robot apparatus
JP2005157086A * 2003-11-27 2005-06-16 Matsushita Electric Ind Co Ltd Speech recognition apparatus
JP4595436B2 * 2004-03-25 2010-12-08 NEC Corp Robot, and control method and control program therefor
JP2008233345A * 2007-03-19 2008-10-02 Toshiba Corp Interface apparatus and interface processing method
JP2011504710A * 2007-11-21 2011-02-10 GestureTek, Inc. Media preferences
CN102447786A * 2011-11-14 2012-05-09 Hou Wanchun Personal dedicated life assistance device and method
CN202736475U * 2011-12-08 2013-02-13 South China University of Technology Chat robot
US9117449B2 * 2012-04-26 2015-08-25 Nuance Communications, Inc. Embedded system for construction of small footprint speech recognition with user-definable constraints
TW201408052A * 2012-08-14 2014-02-16 Kentec Inc Television device and virtual host display method thereof
JP2014092777A * 2012-11-06 2014-05-19 Magic Hand:Kk Voice activation of a mobile communication device
US9489171B2 * 2014-03-04 2016-11-08 Microsoft Technology Licensing, Llc Voice-command suggestions based on user identity
JP2015176058A * 2014-03-17 2015-10-05 Toshiba Corp Electronic device, method, and program
CN104834849B * 2015-04-14 2018-09-18 Beijing Yuanjian Technology Co., Ltd. Two-factor identity authentication method and system based on voiceprint recognition and face recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report for International Patent Application No. PCT/CN2016/101567, dated Jan. 5, 2017, 7 pages.

Also Published As

Publication number Publication date
EP3361410A1 (en) 2018-08-15
WO2017059815A1 (zh) 2017-04-13
CN106570443A (zh) 2017-04-19
US20180293236A1 (en) 2018-10-11
JP6620230B2 (ja) 2019-12-11
TWI621470B (zh) 2018-04-21
TW201713402A (zh) 2017-04-16
EP3361410A4 (en) 2019-03-27
JP2018533064A (ja) 2018-11-08

Similar Documents

Publication Publication Date Title
US10664511B2 (en) Fast identification method and household intelligent robot
CN106463112B (zh) Speech recognition method, voice wake-up apparatus, speech recognition apparatus, and terminal
CN102842306B (zh) Voice control method and apparatus, and voice response method and apparatus
US11282519B2 (en) Voice interaction method, device and computer readable storage medium
US20210280172A1 (en) Voice Response Method and Device, and Smart Device
JP2019117623A (ja) Voice interaction method, apparatus, device, and storage medium
CN106782540B (zh) Voice device and voice interaction system comprising the voice device
CN105872687A (zh) Method and apparatus for controlling a smart device by voice
US20210329101A1 (en) Creating a cinematic storytelling experience using network-addressable devices
CN106024009A (zh) Audio processing method and apparatus
CN103516854A (zh) Terminal apparatus and control method thereof
CN107342088B (zh) Sound information conversion method, apparatus, and device
CN105551488A (zh) Voice control method and system
WO2017084185A1 (zh) Intelligent terminal control method and system based on semantic analysis, and intelligent terminal
CN105791935A (zh) Television control method and apparatus
CN111343028A (zh) Network configuration control method and apparatus
CN104883503A (zh) Voice-based personalized photographing technique
CN111508491A (zh) Intelligent voice interaction device based on deep learning
CN107809654A (zh) Television system and television control method
WO2019227370A1 (zh) Multi-voice-assistant control method, apparatus and system, and computer-readable storage medium
JP6054140B2 (ja) Message management apparatus, message presentation apparatus, and control methods for the message management apparatus and the message presentation apparatus
WO2017032146A1 (zh) File sharing method and apparatus
CN110415703A (zh) Voice memo information processing method and apparatus
EP2913822B1 (en) Speaker recognition
WO2019176252A1 (ja) Information processing apparatus, information processing system, information processing method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: YUTOU TECHNOLOGY (HANGZHOU) CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XIANG, WENJIE;ZHU, LEI;REEL/FRAME:045476/0955

Effective date: 20180408

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4