WO2019148491A1 - Human-computer interaction method and device, robot, and computer-readable storage medium - Google Patents

Human-computer interaction method and device, robot, and computer-readable storage medium

Info

Publication number
WO2019148491A1
WO2019148491A1 (application PCT/CN2018/075263)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
human
interacted
target
Prior art date
Application number
PCT/CN2018/075263
Other languages
English (en)
Chinese (zh)
Inventor
张含波
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201880001295.0A priority Critical patent/CN108780361A/zh
Priority to PCT/CN2018/075263 priority patent/WO2019148491A1/fr
Publication of WO2019148491A1 publication Critical patent/WO2019148491A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/80Recognising image objects characterised by unique random patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • The present application relates to the field of robot technology, and in particular to a human-computer interaction method, device, robot, and computer-readable storage medium.
  • Human-computer interaction, or human-machine interaction (HMI), refers to interaction between a person and a system, where the system can be any of a wide variety of machines, computerized systems, and software. Taking interactive robots placed in public places such as bank business halls, large shopping malls, and airports as an example, such a robot can respond under the control of its computer system and provide services for users, such as actively initiating greetings, answering user questions, and guiding users through business transactions.
  • One technical problem to be solved by some embodiments of the present application is to provide a human-computer interaction method, apparatus, robot, and computer-readable storage medium that address the above technical problem.
  • An embodiment of the present application provides a human-computer interaction method applied to a robot, comprising: extracting biometric information of the identified at least one object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; determining, according to the biometric information, a target interaction object that needs to interact from the at least one object; and controlling the robot to make a response that matches the target interaction object.
  • An embodiment of the present application provides a human-machine interaction device applied to a robot, including an extraction module, a determining module, and a control module. The extraction module is configured to extract biometric information of the identified at least one object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; the determining module is configured to determine, from the at least one object according to the biometric information, a target interaction object that needs to interact; and the control module is configured to control the robot to make a response that matches the target interaction object.
  • An embodiment of the present application provides a robot including at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the human-computer interaction method involved in any of the method embodiments of the present application.
  • An embodiment of the present application provides a computer readable storage medium storing computer instructions for causing a computer to execute the human-computer interaction method involved in any of the method embodiments of the present application.
  • Compared with the prior art, the robot extracts the biometric information of each recognized object, determines from that information which object really needs to interact, and makes a response only to that object. In this way, the robot responds only to objects that need to interact, thereby effectively avoiding false response operations and greatly improving the user experience.
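  • As a purely illustrative sketch (not the application's implementation), the overall flow described above could be organized as below; all function and field names are assumptions introduced for this example.

```python
# Illustrative sketch of the described flow: extract biometrics for recognized
# objects, keep only the ones that actually need interaction, respond to one.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class RecognizedObject:
    object_id: int
    biometrics: Dict[str, object]   # physiological and/or behavior characteristic info

def needs_interaction(obj: RecognizedObject) -> bool:
    """Hypothetical rule: e.g. gazing at the robot or walking toward it."""
    return bool(obj.biometrics.get("gazing_at_robot") or obj.biometrics.get("approaching"))

def interact_once(objects: List[RecognizedObject], respond_to) -> Optional[RecognizedObject]:
    """Respond only to one object that really needs interaction, if any."""
    candidates = [o for o in objects if needs_interaction(o)]
    if not candidates:
        return None                 # nobody needs interaction: make no response
    target = candidates[0]          # a priority rule would normally be applied here
    respond_to(target)              # e.g. move toward the object and greet it
    return target

# Example usage with a trivial response callback.
objs = [RecognizedObject(1, {"gazing_at_robot": True}),
        RecognizedObject(2, {"gazing_at_robot": False})]
interact_once(objs, respond_to=lambda o: print(f"responding to object {o.object_id}"))
```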
  • FIG. 1 is a flowchart of a human-computer interaction method in a first embodiment of the present application;
  • FIG. 2 is a schematic diagram of a robot determining a target interaction object in the first embodiment of the present application;
  • FIG. 3 is a flowchart of a human-computer interaction method in a second embodiment of the present application;
  • FIG. 4 is a schematic diagram of a robot determining a target interaction object in the second embodiment of the present application;
  • FIG. 5 is a flowchart of a human-computer interaction method in a third embodiment of the present application;
  • FIG. 6 is a block diagram of a human-machine interaction apparatus in a fourth embodiment of the present application;
  • FIG. 7 is a block diagram of a robot in a fifth embodiment of the present application.
  • A first embodiment of the present application relates to a human-computer interaction method, and the human-computer interaction method is applied to a robot.
  • The specific flow is shown in FIG. 1.
  • The robot referred to in this embodiment is a general term for automatically controlled machines, including all machines that simulate human behavior or thought or that simulate other creatures (such as robot dogs, robot cats, etc.).
  • In step 101, biometric information of the identified at least one object is extracted.
  • The operation of extracting the biometric information of the identified at least one object may be triggered when at least one object is detected approaching within a preset range (e.g., 5 meters) centered on the position of the robot; this detection method enables the robot to perceive objects within 360 degrees around its position.
  • The robot recognizes an object specifically by using a proximity sensor installed on the robot. For example, after the robot is placed in a public place and started, the proximity sensor can perceive whether an object is approaching within 5 meters of the center of the robot. If movement information or presence information of an object is sensed, the sensed information is converted into an electrical signal, and the robot's processor controls the robot's biometric acquisition device to extract the biometric information of the identified at least one object.
  • Method 1: control the robot to perform image acquisition, and extract biometric features of at least one object from the collected images to obtain the biometric information of the at least one object.
  • Method 2: control the robot to perform voice collection, and extract biometric features of at least one object from the collected voices to obtain the biometric information of the at least one object.
  • Method 3: control the robot to perform both image acquisition and voice collection, extract biometric features of at least one object from the acquired images to obtain biometric information of the at least one object, and extract biometric features of at least one object from the collected speech to obtain biometric information of the at least one object.
  • In Method 3, the biometric information of an object obtained from the images and the biometric information of an object obtained from the speech may be further analyzed to determine which pieces of biometric information belong to the same object.
  • In this way, the biometric information of an object in the images and the biometric information of the same object in the speech can be analyzed together, thereby improving the accuracy of determining the target interaction object.
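  • To illustrate the cross-modal analysis just described, the following hypothetical sketch merges image-derived and speech-derived biometric records that belong to the same object; the record layout and the way object ids are assigned are assumptions made for this example.

```python
from collections import defaultdict

def merge_by_object(image_records, speech_records):
    """Group image-derived and speech-derived biometric features by object id.

    Each record is assumed to look like {"object_id": ..., "features": {...}};
    associating a voice with the matching face (to share an id) is outside this sketch.
    """
    merged = defaultdict(dict)
    for rec in image_records:       # facial / eye / displacement information
        merged[rec["object_id"]].update(rec["features"])
    for rec in speech_records:      # voiceprint / voice content information
        merged[rec["object_id"]].update(rec["features"])
    return dict(merged)

# Example: object 1 was seen gazing at the robot and heard asking a question.
image_records = [{"object_id": 1, "features": {"gazing_at_robot": True}}]
speech_records = [{"object_id": 1, "features": {"voice_content": "Where can I open an account?"}}]
print(merge_by_object(image_records, speech_records))
```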
  • the extracted biometric information specifically includes physiological characteristic information and/or behavior characteristic information.
  • The physiological characteristic information may specifically be any one or any combination of facial information, eye information, and voiceprint information (specifically, information indicating whose voice a sound is), and the behavior characteristic information may specifically be any one or any combination of the displacement information of the recognized object and related information such as voice content information in an utterance (specifically, information identifying what is being said).
  • When biometric features are extracted from the collected images, the resulting information includes physiological feature information such as facial information and/or eye information of the object, and behavior characteristic information such as displacement information.
  • When biometric features of at least one object are extracted from the collected speech, the resulting information includes physiological characteristic information such as voiceprint information of the object, and behavior characteristic information such as voice content information.
  • Controlling the robot to perform image acquisition may specifically use the robot's own image acquisition device, such as a camera, or an external image acquisition device connected to the robot, such as a monitoring device installed in a shopping mall, or the two in cooperation.
  • Likewise, controlling the robot to perform voice collection may use the robot's own voice acquisition device and/or an external voice collection device connected to it.
  • In addition, the robot can be controlled to rotate toward the direction of the recognized object according to the object's direction information, and then controlled to perform the image and/or voice acquisition operations. This ensures that the captured images and voices actually contain the recognized objects, so that the biometric information subsequently extracted is more complete and the target interaction object finally determined is more accurate.
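  • A minimal sketch of turning toward the recognized object's direction before capturing, assuming planar coordinates and a known heading; the function and parameter names are illustrative only.

```python
import math

def rotation_toward(object_position, robot_position=(0.0, 0.0), robot_heading=0.0):
    """Angle (radians) the robot should turn so that it faces the object."""
    dx = object_position[0] - robot_position[0]
    dy = object_position[1] - robot_position[1]
    bearing = math.atan2(dy, dx)
    # Wrap the turn angle into [-pi, pi) so the robot takes the shorter rotation.
    return (bearing - robot_heading + math.pi) % (2 * math.pi) - math.pi

print(round(math.degrees(rotation_toward((1.0, 1.0))), 1))  # -> 45.0
```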
  • The collected image in this embodiment is not limited to image information such as a photo; it may also be image information from a video, and no limitation is made here.
  • In step 102, a target interaction object that needs to interact is determined from the at least one object according to the biometric information.
  • The operation of determining, from the at least one object according to the biometric information, the target interaction object that needs to interact may be implemented as follows:
  • First, at least one object is determined to be an object to be interacted with.
  • The object to be interacted with in this embodiment refers to an object that needs interaction, determined as follows.
  • First, non-human objects can be excluded by comparing the extracted biometric information with pre-stored sample information of persons, thereby ensuring the accuracy of the subsequent operations.
  • Then, according to each person's biometric features, such as the direction of displacement (whether the person is moving toward the robot, etc.) and eye information (whether the person is gazing at the robot, etc.), it is judged whether the person is seeking help, so that those who really seek help are identified as objects to be interacted with.
  • Finally, from the objects to be interacted with, a matching object is selected as the target interaction object, that is, the object with which the robot chooses to perform human-computer interaction.
  • If there is only one object to be interacted with, that object is directly determined to be the target interaction object. If the number of objects to be interacted with is greater than 1, a priority is set for each object to be interacted with according to a preset priority-setting condition, and the object with the highest priority is then determined to be the target interaction object.
  • For example, suppose three objects, A, B, and C, appear within the range that the robot can recognize, and after determination according to the biometric information, all three meet the interaction conditions, that is, they are all objects to be interacted with.
  • In that case, the target interaction object can be determined by priority level, for example according to the location information of the objects to be interacted with.
  • Suppose the obtained location information of object A to be interacted with is (x0, y0), that of object B is (x1, y1), and that of object C is (x2, y2). Taking the robot as the origin, the distance formula d = √(x² + y²) gives the distances of objects A, B, and C from the robot as d0, d1, and d2, respectively. If d2 < d0 < d1, priorities are set for objects A, B, and C according to the preset priority-setting condition (the closer to the robot, the higher the priority; the farther from the robot, the lower the priority).
  • The priorities so set are: object C has the highest priority, object B has the lowest priority, and object A has a priority between those of C and B. It can therefore be determined that object C is the target interaction object.
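  • The distance-based selection in this example can be sketched as follows, with the robot taken as the origin; the coordinates are illustrative values consistent with d2 < d0 < d1.

```python
import math

def distance_to_robot(position, robot_position=(0.0, 0.0)):
    """Euclidean distance d = sqrt((x - xr)^2 + (y - yr)^2)."""
    return math.hypot(position[0] - robot_position[0], position[1] - robot_position[1])

def nearest_object(candidates, robot_position=(0.0, 0.0)):
    """Pick the object to be interacted with that is closest to the robot."""
    return min(candidates, key=lambda c: distance_to_robot(c["position"], robot_position))

candidates = [
    {"name": "A", "position": (2.0, 2.0)},   # d0 ≈ 2.83
    {"name": "B", "position": (4.0, 1.0)},   # d1 ≈ 4.12
    {"name": "C", "position": (1.0, 1.0)},   # d2 ≈ 1.41
]
print(nearest_object(candidates)["name"])    # -> "C", the highest-priority object
```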
  • In step 103, the location information of the target interaction object is obtained.
  • In step 104, the robot is controlled to move toward the target interaction object based on the location information.
  • In this way, the robot moves toward the target interaction object according to the acquired location information, so that the robot can actively initiate the interaction and improve the user experience.
  • The human-computer interaction method provided in this embodiment enables the robot to respond only to the object that needs to interact, thereby effectively avoiding erroneous response operations and greatly improving the user experience.
  • a second embodiment of the present application relates to a human-computer interaction method.
  • This embodiment is a further improvement on the basis of the first embodiment. The specific improvement is that, in the process of controlling the robot to make a response matching the target interaction object, the identity information of the target interaction object is also acquired, and after the robot moves to the area where the target interaction object is located, a response matching the target interaction object is made according to the identity information.
  • Steps 301 to 305 are included, wherein steps 301, 302, and 304 are substantially the same as steps 101, 102, and 104 in the first embodiment, respectively.
  • The differences are described below.
  • For technical details not described in this embodiment, refer to the human-computer interaction method provided in the first embodiment; they are not repeated here.
  • In step 303, the location information and identity information of the target interaction object are obtained.
  • The identity information of the target interaction object obtained in this embodiment may include any one or any combination of name, gender, age, whether the person is a VIP client, and the like.
  • The above identity information may specifically be obtained by matching, using face recognition technology, the face of the target interaction object against a face database of users who have previously handled business at the location where the robot is deployed (such as a bank business hall). If a match is found, the recorded identity information of that user can be obtained directly. If there is no match, the gender and approximate age range are first estimated using face recognition technology, and the identity information of the target interaction object can then be further supplemented through an Internet search.
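  • A hedged sketch of the identity lookup just described: match the face against recorded customers and fall back to a minimal estimated profile when no match is found. The database layout, similarity measure, and threshold are assumptions for illustration.

```python
import math
from typing import Dict, Tuple

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookup_identity(face_vec, face_db: Dict[str, Tuple[list, dict]], threshold: float = 0.95) -> dict:
    """Return recorded identity info on a match, else a minimal fallback profile."""
    best_id, best_score = None, 0.0
    for customer_id, (stored_vec, _) in face_db.items():
        score = cosine_similarity(face_vec, stored_vec)
        if score > best_score:
            best_id, best_score = customer_id, score
    if best_id is not None and best_score >= threshold:
        return face_db[best_id][1]      # recorded name, gender, age, VIP flag, ...
    # No match: only gender / age range could be estimated (estimation stubbed here).
    return {"name": None, "gender": "unknown", "age_range": "unknown", "vip": False}

face_db = {"c001": ([0.9, 0.1, 0.3], {"name": "Zhang", "gender": "M", "age": 35, "vip": True})}
print(lookup_identity([0.88, 0.12, 0.31], face_db))   # close enough -> recorded profile
```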
  • The identity information of the objects to be interacted with may also be used when determining the target interaction object; for example, the priority of each object to be interacted with is set according to the VIP parameter carried in its identity information, and the target interaction object is determined by also considering factors such as distance. This is described below in conjunction with FIG. 4 for ease of understanding.
  • As shown in FIG. 4, the target interaction object may be determined by prioritizing the distance factor and selecting object C as the target interaction object; by preferentially considering the VIP factor and selecting object A to be interacted with as the target interaction object; or by prioritizing the age factor and selecting the older object to be interacted with as the target interaction object.
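  • The three alternative policies just listed (distance first, VIP first, or age first) could be expressed as interchangeable sort keys; the field names and example values below are assumptions.

```python
def by_distance(obj):
    return obj["distance"]                     # nearer -> higher priority

def by_vip(obj):
    return (not obj["vip"], obj["distance"])   # VIPs first, ties broken by distance

def by_age(obj):
    return (-obj["age"], obj["distance"])      # older first, ties broken by distance

def pick_target(objects, policy):
    return min(objects, key=policy)

objects = [
    {"name": "A", "distance": 2.8, "vip": True,  "age": 40},
    {"name": "B", "distance": 4.1, "vip": False, "age": 30},
    {"name": "C", "distance": 1.4, "vip": False, "age": 25},
]
print(pick_target(objects, by_distance)["name"])  # -> "C"
print(pick_target(objects, by_vip)["name"])       # -> "A"
print(pick_target(objects, by_age)["name"])       # -> "A"
```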
  • In step 305, after the robot moves to the area where the target interaction object is located, a response matching the target interaction object is made according to the identity information.
  • For example, suppose the target interaction object is object C in FIG. 4.
  • After moving to the area where object C is located, the robot can actively perform a service inquiry or business guidance, such as "Hello, Mr. Zhang, what business do you need to handle?".
  • At the same time, a voice prompt such as "Please wait patiently!" may be given to the remaining objects to be interacted with, A and B.
  • The human-computer interaction method provided in this embodiment further acquires the identity information of the target interaction object when acquiring its location information, so that after the robot moves, according to the location information, to the area where the target interaction object is located, it can make a response matching the target interaction object according to the identity information, thereby further improving the user experience.
  • a third embodiment of the present application relates to a human-computer interaction method.
  • This embodiment is a further improvement on the basis of the first or second embodiment. The specific improvement is that, after the robot is controlled to make a response matching the target interaction object and before the target interaction object that needs to interact is re-determined, it is first determined whether a new object is currently approaching the robot.
  • The specific flow is shown in FIG. 5.
  • Steps 501 to 508 are included, where steps 501 to 504 are substantially the same as steps 101 to 104 in the first embodiment, and details are not described here again.
  • In step 505, it is determined whether a new object is approaching the robot. If it is determined that a new object is approaching the robot, proceed to step 506; otherwise, proceed directly to step 507 and reselect, from the objects to be interacted with that remained after the last human-computer interaction, one object as the new target interaction object.
  • The manner of judging whether a new object is approaching the robot may be the same as in the first embodiment: if a new object is detected approaching within the preset range (e.g., 5 meters) centered on the current position of the robot, it is determined that a new object is approaching the robot; the specific judgment operation is not described again here.
  • The number of new objects approaching the robot may be one or more than one, and no limitation is made here.
  • In step 506, the biometric information of the new object is extracted.
  • In step 507, the target interaction object that needs to interact is re-determined.
  • In this embodiment, the re-determined target interaction object is specifically selected from among the new objects and the objects other than the target interaction object of the last interaction operation.
  • In practice, the robot can respond to only one object to be interacted with at a time (that is, the target interaction object selected for interaction), and after one interaction is completed, the other objects to be interacted with may still be waiting for the robot to respond; in addition, new objects to be interacted with may have appeared.
  • In this case, it is therefore necessary to reselect the target interaction object from among the newly confirmed objects to be interacted with and the objects to be interacted with remaining from the last human-computer interaction.
  • The manner of re-determining the target interaction object that needs to interact is substantially the same as the determination manner in the first embodiment: the identified objects are determined to be objects to be interacted with according to the biometric information, and the target interaction object that needs to interact is then selected from the objects to be interacted with; the specific implementation details are not repeated here.
  • The selection may still set a priority for each object to be interacted with, or the new target interaction object can be determined according to other selection methods; no limitation is made here. A high-level sketch of this monitoring-and-reselection flow is given below.
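  • At a very high level, the monitoring and re-determination of steps 505 to 508 might look like the loop below; every callable passed in is a hypothetical placeholder, and real code would need a proper termination condition.

```python
def serve_hall(sense_new_objects, extract_biometrics, needs_interaction, pick_target, respond_to,
               max_rounds=100):
    """Repeatedly serve one target at a time, folding in newly arrived objects."""
    waiting = []                                     # objects to be interacted with, not yet served
    for _ in range(max_rounds):
        new_objects = sense_new_objects()            # step 505: anyone new within the preset range?
        for obj in extract_biometrics(new_objects):  # step 506: extract their biometric info
            if needs_interaction(obj):
                waiting.append(obj)
        if not waiting:
            continue                                 # nobody to serve this round
        target = pick_target(waiting)                # step 507: re-determine the target
        waiting.remove(target)
        respond_to(target)                           # step 508: move over, greet, guide
```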
  • In step 508, the robot is controlled to make a response matching the re-determined target interaction object.
  • When the robot is controlled to make a response matching the re-determined target interaction object, the response process may be: moving toward the target interaction object and, after reaching the area where it is located, actively conducting service consultation or service guidance.
  • The specific response mode can be set according to the information about the re-determined target interaction object, and no limitation is made here.
  • The human-computer interaction method provided in this embodiment monitors whether a new object approaches the robot after a human-computer interaction operation is completed and, when it is determined that a new object is approaching, extracts the new object's biometric information.
  • This enables the robot to dynamically update the state of the objects during its work process, thereby accurately making a response that fits the current scene, reducing misoperations, and further improving the user experience.
  • a fourth embodiment of the present application relates to a human-machine interaction device, which is applied to a robot, and the specific structure is as shown in FIG. 6.
  • the human-machine interaction device includes an extraction module 601, a determination module 602, and a control module 603.
  • the extraction module 601 is configured to extract biometric information of the identified at least one object.
  • the determining module 602 is configured to determine, from the at least one object, the target interactive object that needs to interact according to the biometric information.
  • the control module 603 is configured to control the robot to make a response that matches the target interaction object.
  • the biometric information of the identified at least one object extracted by the extraction module 601 may be any one of physiological characteristic information and behavior characteristic information or a combination of the two.
  • the physiological feature information extracted by the extraction module 601 in this embodiment may be any one or any combination of facial information, eye information, voiceprint information, and the like of the object.
  • The behavior characteristic information extracted by the extraction module 601 may specifically be any one of the displacement information of the object and the voice content information, or a combination of the two.
  • When determining, from the at least one object according to the above biometric information, the target interaction object that needs to interact, the determining module 602 may first determine, according to that biometric information, which identified objects are objects to be interacted with (objects that need interaction), for example by analyzing the eye gaze of a recognized object based on its eye information, together with its displacement information, to judge whether it is currently seeking help; after the objects to be interacted with are determined, an object that meets the requirements is selected from them as the target interaction object (the object that is eventually interacted with).
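  • As a concrete illustration of that decision (eye gaze plus displacement used to judge whether an object is seeking help), one possible rule is sketched below; the thresholds and field names are assumptions, not values from the application.

```python
def is_object_to_interact(biometrics, gaze_threshold=0.8, approach_threshold=0.2):
    """Judge from eye information and displacement information whether a
    recognized object should be treated as an object to be interacted with.

    `gaze_score` is an assumed 0-1 estimate of how directly the person looks at
    the robot; `approach_speed` is an assumed signed speed toward the robot (m/s).
    """
    gazing = biometrics.get("gaze_score", 0.0) >= gaze_threshold
    approaching = biometrics.get("approach_speed", 0.0) >= approach_threshold
    return gazing or approaching

print(is_object_to_interact({"gaze_score": 0.9, "approach_speed": 0.0}))   # True
print(is_object_to_interact({"gaze_score": 0.1, "approach_speed": -0.5}))  # False
```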
  • The control module 603 controls the robot to make a response that matches the target interaction object; specifically, it may control the robot to move toward the target interaction object.
  • After the robot reaches the area where the object is located, the robot can be controlled to make a response matching the identity information of the object, such as an active service inquiry or service guidance, for example: "Hello, may I ask what business you would like to handle?"
  • The human-machine interaction device uses the extraction module to extract the biometric information of the identified at least one object, the determining module to determine, from the at least one object according to the biometric information, the target interaction object that needs to interact, and the control module to control the robot to make a response matching the target interaction object.
  • The cooperation of these modules enables a robot equipped with the human-machine interaction device to respond only to objects that need to interact, thereby effectively avoiding false response operations and greatly improving the user experience.
  • A fifth embodiment of the present application relates to a robot, and the specific structure is as shown in FIG. 7.
  • the robot can be a smart machine located in a public place such as a bank business hall, a large shopping mall, an airport, or the like.
  • Internally, the robot specifically includes one or more processors 701 and a memory 702.
  • One processor 701 is taken as an example in FIG. 7.
  • Each functional module of the human-machine interaction device involved in the foregoing embodiment is deployed on the processor 701. The processor 701 and the memory 702 can be connected through a bus or in another manner; connection through a bus is taken as an example in FIG. 7.
  • the memory 702 is a computer readable storage medium, and can be used to store a software program, a computer executable program, and a module, such as a program instruction/module corresponding to the human-computer interaction method involved in any method embodiment of the present application.
  • the processor 701 executes various functional applications and data processing of the server by executing software programs, instructions, and modules stored in the memory 702, that is, implementing the human-computer interaction method involved in any method embodiment of the present application.
  • the memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function; the storage data area may establish a history database, store priority setting conditions, and the like.
  • The memory 702 may include high-speed random access memory (RAM) and may also include other readable and writable memory.
  • memory 702 can optionally include memory remotely located relative to processor 701 that can be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the memory 702 can store the instructions executed by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can perform the human-computer interaction method involved in any method embodiment of the present application.
  • For details, the human-computer interaction method provided in any method embodiment of the present application can be referred to.
  • a sixth embodiment of the present application is directed to a computer readable storage medium having stored therein computer instructions that enable a computer to perform the human-computer interaction method involved in any of the method embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)

Abstract

The present invention, which belongs to the technical field of robots, relates to a human-computer interaction method and device, a robot, and a computer-readable storage medium. In the present invention, the human-computer interaction method is applied to a robot and comprises: extracting biometric information of at least one recognized object, the biometric information comprising physiological characteristic information and/or behavior characteristic information; determining, from the at least one object and according to the biometric information, a target interaction object requiring interaction; and controlling the robot to make a response corresponding to the target interaction object. The human-computer interaction method enables a robot to respond only to an object requiring interaction, which makes it possible to avoid false response operations and greatly improve the user experience.
PCT/CN2018/075263 2018-02-05 2018-02-05 Procédé et dispositif d'interaction homme-ordinateur, robot et support d'informations lisible par ordinateur WO2019148491A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001295.0A CN108780361A (zh) 2018-02-05 2018-02-05 人机交互方法、装置、机器人及计算机可读存储介质
PCT/CN2018/075263 WO2019148491A1 (fr) 2018-02-05 2018-02-05 Procédé et dispositif d'interaction homme-ordinateur, robot et support d'informations lisible par ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075263 WO2019148491A1 (fr) 2018-02-05 2018-02-05 Procédé et dispositif d'interaction homme-ordinateur, robot et support d'informations lisible par ordinateur

Publications (1)

Publication Number Publication Date
WO2019148491A1 true WO2019148491A1 (fr) 2019-08-08

Family

ID=64029123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075263 WO2019148491A1 (fr) 2018-02-05 2018-02-05 Procédé et dispositif d'interaction homme-ordinateur, robot et support d'informations lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN108780361A (fr)
WO (1) WO2019148491A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716634A (zh) * 2019-08-28 2020-01-21 北京市商汤科技开发有限公司 交互方法、装置、设备以及显示设备
CN113724454A (zh) * 2021-08-25 2021-11-30 上海擎朗智能科技有限公司 移动设备的互动方法、移动设备、装置及存储介质
CN114633267A (zh) * 2022-03-17 2022-06-17 上海擎朗智能科技有限公司 互动内容的确定方法、移动设备、装置及存储介质

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062482A (zh) * 2018-07-26 2018-12-21 百度在线网络技术(北京)有限公司 人机交互控制方法、装置、服务设备及存储介质
CN110085225B (zh) * 2019-04-24 2024-01-02 北京百度网讯科技有限公司 语音交互方法、装置、智能机器人及计算机可读存储介质
CN110228073A (zh) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 主动响应式智能机器人
CN110465947B (zh) * 2019-08-20 2021-07-02 苏州博众机器人有限公司 多模态融合人机交互方法、装置、存储介质、终端及系统
CN110689889B (zh) * 2019-10-11 2021-08-17 深圳追一科技有限公司 人机交互方法、装置、电子设备及存储介质
CN112764950B (zh) * 2021-01-27 2023-05-26 上海淇玥信息技术有限公司 一种基于组合行为的事件交互方法、装置和电子设备
CN115476366B (zh) * 2021-06-15 2024-01-09 北京小米移动软件有限公司 足式机器人的控制方法、装置、控制设备及存储介质
CN113486765B (zh) * 2021-06-30 2023-06-16 上海商汤临港智能科技有限公司 手势交互方法及装置、电子设备和存储介质
CN114715175A (zh) * 2022-05-06 2022-07-08 Oppo广东移动通信有限公司 目标对象的确定方法、装置、电子设备以及存储介质
CN117251048A (zh) * 2022-12-06 2023-12-19 北京小米移动软件有限公司 终端设备的控制方法、装置、终端设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143523A2 (fr) * 2010-05-13 2011-11-17 Alexander Poltorak Dispositif interactif personnel électronique
CN105701447A (zh) * 2015-12-30 2016-06-22 上海智臻智能网络科技股份有限公司 迎宾机器人
CN105843118A (zh) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 一种机器人交互方法及机器人系统
CN106113038A (zh) * 2016-07-08 2016-11-16 纳恩博(北京)科技有限公司 基于机器人的模式切换方法及装置
CN106203050A (zh) * 2016-07-22 2016-12-07 北京百度网讯科技有限公司 智能机器人的交互方法及装置
CN106873773A (zh) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 机器人交互控制方法、服务器和机器人
CN107450729A (zh) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 机器人交互方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104936091B (zh) * 2015-05-14 2018-06-15 讯飞智元信息科技有限公司 基于圆形麦克风阵列的智能交互方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143523A2 (fr) * 2010-05-13 2011-11-17 Alexander Poltorak Dispositif interactif personnel électronique
CN105701447A (zh) * 2015-12-30 2016-06-22 上海智臻智能网络科技股份有限公司 迎宾机器人
CN105843118A (zh) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 一种机器人交互方法及机器人系统
CN106113038A (zh) * 2016-07-08 2016-11-16 纳恩博(北京)科技有限公司 基于机器人的模式切换方法及装置
CN106203050A (zh) * 2016-07-22 2016-12-07 北京百度网讯科技有限公司 智能机器人的交互方法及装置
CN106873773A (zh) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 机器人交互控制方法、服务器和机器人
CN107450729A (zh) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 机器人交互方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716634A (zh) * 2019-08-28 2020-01-21 北京市商汤科技开发有限公司 交互方法、装置、设备以及显示设备
CN113724454A (zh) * 2021-08-25 2021-11-30 上海擎朗智能科技有限公司 移动设备的互动方法、移动设备、装置及存储介质
CN114633267A (zh) * 2022-03-17 2022-06-17 上海擎朗智能科技有限公司 互动内容的确定方法、移动设备、装置及存储介质

Also Published As

Publication number Publication date
CN108780361A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
WO2019148491A1 (fr) Procédé et dispositif d'interaction homme-ordinateur, robot et support d'informations lisible par ordinateur
US10913463B2 (en) Gesture based control of autonomous vehicles
US11152006B2 (en) Voice identification enrollment
EP4044146A1 (fr) Procédé et appareil de détection d'espace de stationnement et de direction et d'angle de celui-ci, dispositif et support
US11145299B2 (en) Managing voice interface devices
KR20220148319A (ko) 장치에 대한 다중 사용자 인증
CN106294774A (zh) 基于对话服务的用户个性化数据处理方法及装置
US11120275B2 (en) Visual perception method, apparatus, device, and medium based on an autonomous vehicle
CN108351707A (zh) 人机交互方法、装置、终端设备及计算机可读存储介质
WO2020230340A1 (fr) Système de reconnaissance faciale, procédé de reconnaissance faciale, et programme de reconnaissance faciale
WO2020043040A1 (fr) Procédé et dispositif de reconnaissance vocale
KR20210048272A (ko) 음성 및 영상 자동 포커싱 방법 및 장치
CN109241721A (zh) 用于推送信息的方法和装置
CN109714233B (zh) 一种家居控制方法及其对应的路由设备
CN114881680A (zh) 机器人、机器人交互方法及存储介质
US10917721B1 (en) Device and method of performing automatic audio focusing on multiple objects
Hasler et al. Interactive incremental online learning of objects onboard of a cooperative autonomous mobile robot
JP6607092B2 (ja) 案内ロボット制御システム、プログラム及び案内ロボット
CN110892412B (zh) 脸部辨识系统、脸部辨识方法及脸部辨识程序
KR101933822B1 (ko) 얼굴인식 기반 지능형 스피커, 이를 이용한 능동적인 대화 제공 방법 및 이를 수행하기 위한 기록매체
WO2023231211A1 (fr) Procédé et appareil de reconnaissance vocale, dispositif électronique, support de stockage et produit
CN111930227A (zh) 可穿戴设备的自动切换nfc卡的方法、装置及可穿戴设备
WO2023081605A1 (fr) Identification aidée par le contexte
CN113031588B (zh) 商场机器人导航系统
CN111950431B (zh) 一种对象查找方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903913

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18903913

Country of ref document: EP

Kind code of ref document: A1