WO2018191970A1 - Robot control method, robot apparatus, and robot device - Google Patents

Robot control method, robot apparatus, and robot device Download PDF

Info

Publication number
WO2018191970A1
WO2018191970A1 PCT/CN2017/081484 CN2017081484W WO2018191970A1 WO 2018191970 A1 WO2018191970 A1 WO 2018191970A1 CN 2017081484 W CN2017081484 W CN 2017081484W WO 2018191970 A1 WO2018191970 A1 WO 2018191970A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
user
gaze
target
line
Prior art date
Application number
PCT/CN2017/081484
Other languages
English (en)
French (fr)
Inventor
骆磊
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201780000641.9A priority Critical patent/CN107223082B/zh
Priority to PCT/CN2017/081484 priority patent/WO2018191970A1/zh
Priority to JP2019554521A priority patent/JP6893607B2/ja
Publication of WO2018191970A1 publication Critical patent/WO2018191970A1/zh
Priority to US16/668,647 priority patent/US11325255B2/en

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/003Controls for manipulators by means of an audio-responsive input
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • B25J13/089Determining the position of the robot with reference to its environment
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/1815Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0003Home robots, i.e. small robots for domestic use
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40002Camera, robot follows direction movement of operator head, helmet, headstick
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40411Robot assists human in non-industrial environment like home or office
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/1822Parsing for meaning understanding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • The present application relates to the field of robot control, and in particular to a robot control method, a robot apparatus, and a robot device.
  • Current robot applications are still at a relatively early stage and are mostly based on conversational chat; even when some robots can perform other tasks, the experience is not very good.
  • The related art has at least the following problem: when a user asks the robot to fetch something, the robot can only learn from voice recognition what the user wants and roughly where it is, and then go looking for the item the user mentioned. If the space is large, the search range is also large, the failure rate is high, and the user experience is poor. Moreover, users often do not want to describe in detail where an item is and what it looks like; they simply point at it or glance at it. A current robot obviously cannot know the user's line of sight, and therefore cannot know what object the user is looking at.
  • The present application addresses the technical problems that a prior-art robot searching for a target from a user's simple voice instruction has a large search range, a high failure rate, and a poor user experience, and that a prior-art robot cannot know the user's line of sight and therefore cannot know what object the user is looking at. It provides a robot control method, a robot apparatus, and a robot device.
  • the technical solution is as follows:
  • The embodiment of the present application provides a robot control method, including:
  • establishing a reference coordinate system; capturing the gaze direction of the user toward an indicated target; acquiring the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user; calculating in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and
  • the robot smoothly scanning the gaze plane to search for the indicated target in the gaze direction of the user.
  • the embodiment of the present application further provides a robot apparatus, including:
  • a coordinate system establishing module for establishing a reference coordinate system
  • a capture and calculation module, configured to capture the gaze direction of the user toward an indicated target, acquire the robot's line-of-sight angle, acquire the robot position, and obtain the linear distance between the robot and the user, and to calculate in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system;
  • a scanning module, configured to cause the robot to smoothly scan the gaze plane to search for the indicated target in the gaze direction of the user.
  • The embodiment of the present application further provides a robot device, including at least one processor, and a memory communicatively connected to the at least one processor; wherein
  • the memory stores an instruction program executable by the at least one processor, the instruction program being executed by the at least one processor to enable the at least one processor to perform the method described above.
  • Embodiments of the present application also provide a computer program product comprising software code portions, the software code portions being configured to perform the method steps described above when run in a memory of a computer.
  • The robot control method provided by the embodiment of the present application includes: establishing a reference coordinate system; capturing the user's gaze direction, acquiring the robot's line-of-sight angle, acquiring the robot position, and obtaining the linear distance between the robot and the user; calculating in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and at least one robot smoothly scanning the gaze plane to search for the indicated target in the user's gaze direction.
  • Compared with a prior-art robot that searches for a target from a user's simple voice instruction, the robot in this method smoothly scans the gaze plane to search for the indicated target, so the search range is smaller, the search failure rate is lower, the search accuracy is improved, and the user experience is better. Moreover, whereas a prior-art robot cannot know the user's line of sight, in the method provided by the embodiment of the present application the robot can capture the user's gaze direction and calculate in real time the gaze plane of that gaze direction relative to the reference coordinate system; by scanning the gaze plane it can learn exactly what the user is looking at, so the success rate of searching for the indicated target is higher and the user experience is better.
  • FIG. 1 is an application environment diagram of a robot apparatus according to an embodiment of the present application
  • FIGS. 2a, 2b and 2c are top plan views of a robot control method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flow chart of a robot control method according to an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of a robot control method according to another embodiment of the present application.
  • FIG. 5 is a structural block diagram of a robot apparatus according to another embodiment of the present application.
  • FIG. 6 is a structural block diagram of a robot apparatus according to an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a robot apparatus according to an embodiment of the present application.
  • GNSS Global Navigation Satellite System
  • UWB Ultra Wideband
  • Through GNSS and UWB positioning, the robot can obtain position information with continuously stable accuracy, enabling relatively high-precision positioning in complex environments.
  • other robot positioning methods may also be adopted, which are not limited in this embodiment of the present application.
  • the robot is usually equipped with one or more cameras.
  • a camera is placed on the face of the robot, which is equivalent to a human eye.
  • The optical axis is the axis of symmetry of an optical system; the optical axis of the camera is the line passing through the center of the lens, i.e., the center line of the light beam (light column). When the beam rotates around this axis, there should be no change in optical characteristics.
  • Compared with the related-art robot, which searches for a target from the user's simple voice instruction, the user often does not want to describe in detail where the item is and what it looks like, but simply points at it and glances at it; the related-art robot obviously cannot know the user's line of sight and therefore cannot know what the user is looking at.
  • On the other hand, when two or more people talk about an item, it often happens that they indicate it merely by pointing at it or glancing at it. In that situation, because the robot does not know which item is meant, it cannot associate the voice information with the corresponding item, and it is naturally difficult to collect valid information about the item the users are talking about.
  • Therefore, the embodiment of the present application provides a robot control method that is intended to let the robot know what the user is looking at, or at least know the user's line of sight, so that the search scope can be greatly narrowed and the target can be identified much more easily.
  • FIG. 1 is an application environment of a robot control method according to an embodiment of the present application.
  • the application The environment includes: user 100, robot 200, and item 300.
  • When the user 100, while talking with the robot 200 or with other people, mentions and looks at a certain nearby item 300 and says to the robot, "Tom, hand me that book," the robot 200 needs to obtain the line of sight of the user 100 to know more precisely which book the user requested.
  • FIGS. 2a and 2b are top views of a scenario environment of a robot control method according to an embodiment of the present application.
  • The line AM is the robot's line-of-sight angle, specifically the optical axis of the robot's line of sight.
  • The top view of the space in which the user 100 is located is as shown in Figs. 2a and 2b; the lower-left corner is taken as the origin (0, 0) of the reference coordinate system, and the user is at point C, saying to the robot: "Tom, hand me that book," while looking toward and pointing at a book at position T.
  • The robot may be looking directly at the user at this moment, or it may have been working on another task and is now turning its head toward the user without yet looking directly at the user, the user merely having entered its field of view.
  • At this moment, if the robot that hears the call can see the user's face within its own field of view, the angle between the direction the user is facing (the facing direction) and the robot's optical axis can be determined by a face-recognition analysis algorithm (other algorithms may also be used, as long as this data is obtained), and the absolute direction the user is facing in the current coordinate system can then be calculated.
  • the robot control method will be specifically described below.
  • the robot control method provided by the embodiment of the present application includes:
  • Step 101 Establish a reference coordinate system.
  • The robot positioning methods described above can be used to locate the robot in an indoor environment, and a certain point of the real space is set as the coordinate origin (0, 0). With indoor positioning, the robot can acquire its own position at any time and calculate its own coordinates.
  • Step 102: Capture the user's gaze direction toward the indicated target, acquire the robot's line-of-sight angle, acquire the robot position, and obtain the linear distance between the robot and the user; calculate in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system.
  • The gaze plane is a plane perpendicular to the ground in three-dimensional space; viewed from above, the gaze plane appears as the straight line along the user's gaze direction, i.e., the line CB shown in Figs. 2a and 2b. Fig. 2a differs from Fig. 2b in that, in the reference coordinate system, the robot of Fig. 2a is located to the right of the user as shown, while the robot of Fig. 2b is located to the left of the user as shown.
  • Further, the robot's line-of-sight angle is the robot's line-of-sight optical axis. Calculating in real time the gaze plane of the user's gaze direction relative to the reference coordinate system from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user specifically includes: calculating the robot's coordinates (m, n) in the reference coordinate system from the robot position and calculating the angle γ between the robot's line-of-sight optical axis and the X axis of the reference coordinate system; calculating, from the robot's line-of-sight angle, the robot position, and the user position, the angle β between the line connecting the user position and the robot position and the robot's line-of-sight optical axis; calculating the angle α between that connecting line and the line along the user's gaze direction; and calculating the gaze plane in real time from γ, β, α, and the linear distance d between the robot and the user.
  • As shown in Figs. 2a and 2b, the user position is point C, the position of the indicated target is point B, and the robot position is point A.
  • A longitudinal line parallel to the Y axis of the reference coordinate system is drawn through the user position C, and a transverse line parallel to the X axis of the reference coordinate system is drawn through the robot position A; the transverse line intersects the longitudinal line perpendicularly at point D and intersects the line of the user's gaze direction at point E. According to the corollary of the triangle interior-angle-sum theorem that an exterior angle of a triangle equals the sum of the two interior angles not adjacent to it, the exterior angle θ of triangle ACE at point E is calculated as θ = α + β + γ (Fig. 2a; in Fig. 2b, θ = α − β + γ).
  • Regarding the sign of β: when the robot's line-of-sight angle rotates counterclockwise through the angle β to reach the user position, β is defined as positive; when it rotates clockwise through the angle β to reach the user position, β is defined as negative.
  • In triangle ACD, AD = d*cos(180° − β − γ) = −d*cos(β + γ) and CD = d*sin(180° − β − γ) = d*sin(β + γ); with the robot at A = (m, n), the user position C is (m + d*cos(β + γ), n + d*sin(β + γ)). Setting the line of the user's gaze direction as y = k*x + b with k = tanθ = tan(α + β + γ) and substituting C gives b = n + d*sin(β + γ) − tan(α + β + γ)*(m + d*cos(β + γ)), so the line of the user's gaze direction is y = tan(α + β + γ)*x + n + d*sin(β + γ) − tan(α + β + γ)*(m + d*cos(β + γ)). The gaze plane is the plane perpendicular to the ground that starts at (m + d*cos(β + γ), n + d*sin(β + γ)) and extends forward along this line.
  • In some embodiments, the user's gaze direction is captured and a signal for searching for the user's indicated target is generated according to that gaze direction; according to the signal, the robot's line-of-sight angle is acquired, the robot position is acquired, and the linear distance between the robot and the user is obtained.
  • While the user is talking with other people and gazes at an indicated target, the robot may actively capture the user's gaze direction, or it may capture the gaze direction in response to a user instruction, such as the voice command in the dialogue above: "Tom, hand me that book."
  • When the robot actively captures the user's gaze direction, the user does not need to issue an instruction.
  • If the user does issue an instruction, the robot receives the search-target instruction issued by the user and captures the user's gaze direction according to that instruction.
  • Step 103 The robot smoothly scans the gaze plane to search for an indication target in the gaze direction of the user.
  • the number of robots may be one or more.
  • Optionally, the gaze plane is smoothly scanned at a preset interval to search for the indicated target in the user's gaze direction.
  • The preset interval is specifically 0.1 second to 5 seconds, set in combination with the robot's field of view and the resolution of the images captured by the robot's camera.
  • Specifically, the camera focus of at least one robot is controlled to fall within the portion of the gaze plane between the user position and the user's indicated target, so as to search for the indicated target in the user's gaze direction.
  • More specifically, the robot does not necessarily need to move in order to search for the indicated target in the user's gaze direction. If the robot happens to be standing on the gaze plane, it only needs to advance or retreat along the gaze plane; if it stands where it can capture the gaze plane very clearly and scan it smoothly, the robot only needs to turn its head.
  • In that case, the line-of-sight optical axis of at least one robot is controlled to deflect from the user position toward the user's indicated target, and the deflected optical axis is obtained; the robot's coordinates in the reference coordinate system are calculated from the robot position; the focusing distance of the deflected camera is calculated from the robot's coordinates in the reference coordinate system, the angle γ′ between the deflected optical axis and the X axis of the reference coordinate system, and the gaze plane; and, according to the deflected optical axis and the deflected camera's focusing distance, the camera focus of the robot is controlled to fall within the portion of the gaze plane between the user position and the user's indicated target, so as to search for the indicated target.
  • The robot control method provided by the embodiment of the present application includes: establishing a reference coordinate system; capturing the user's gaze direction, acquiring the robot's line-of-sight angle, acquiring the robot position, and obtaining the linear distance between the robot and the user; calculating in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and at least one robot smoothly scanning the gaze plane to search for the indicated target in the user's gaze direction.
  • Compared with a prior-art robot that searches for a target from a user's simple voice instruction, the robot in this method smoothly scans the gaze plane to search for the indicated target, so the search range is smaller, the search failure rate is lower, the search accuracy is improved, and the user experience is better. Moreover, whereas a prior-art robot cannot know the user's line of sight, in the method provided by the embodiment of the present application the robot can capture the user's gaze direction and calculate in real time the gaze plane of that gaze direction relative to the reference coordinate system; by scanning the gaze plane it can learn exactly what the user is looking at, so the success rate of searching for the indicated target is higher and the user experience is better.
  • If the robot is poorly positioned, it needs to move closer to the gaze plane and find a position from which the gaze plane can be clearly captured and smoothly scanned, which requires controlling the robot's movement.
  • The robot can stand in one place and rotate its head while calculating the focusing distance in real time, so that during the head rotation the focus always stays on this line (a line in the top view, and in three-dimensional space a plane perpendicular to the ground), i.e., the focus is kept on the gaze plane while the head rotates.
  • The robot may also have to search for the object on the user's previous line of sight while moving, because of terrain changes, obstacle occlusion, and so on, but again it only needs to calculate in real time. As shown in the figure below, the robot will smoothly scan the line CB starting from point C during its own motion to search for the target indicated by the user.
  • the robot control method further includes:
  • Step 105: During the robot's own motion, acquire its real-time position and real-time line-of-sight angle, and determine the intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as the focusing distance; during the movement, scan the area around the intersection line according to the focusing distance until the indicated target the user is gazing at on the gaze plane is identified.
  • Further, the robot smoothly scans the area around the intersection line by a graphic search method; when at least two targets matching the figure of the user's indicated target are found, it issues a confirmation request; if the user responds to the confirmation request and gives a confirmation result, the robot selects, according to the confirmation result returned for the request, one of the targets matching the figure of the user's indicated target as the user's indicated target.
  • Specifically, the robot's real-time line-of-sight angle is the robot's real-time line-of-sight optical axis.
  • Acquiring the robot's real-time position and real-time line-of-sight angle during the robot's own motion, and determining the intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as the focusing distance, includes the following.
  • FIG. 2c is a schematic diagram of a scenario in a robot moving process in a robot control method according to an embodiment of the present application.
  • In Fig. 2c, the robot's real-time position acquired during its own motion is point A′; the robot's real-time coordinates (p, q) in the reference coordinate system are calculated from its real-time position, and the angle γ″ between the robot's real-time line-of-sight optical axis and the X axis of the reference coordinate system is calculated. The intersection of the robot's real-time line-of-sight optical axis with the line along the user's gaze direction is H, so the distance A′H is the focusing distance.
  • Setting the line through A′ and H as y = tan(γ″)*x + q − tan(γ″)*p and solving it jointly with the equation of the line along the user's gaze direction, the solution (x, y) is the coordinate of the intersection point H of the robot camera's real-time line-of-sight optical axis with the line along the user's gaze direction; the focusing distance is then A′H = √((x − p)² + (y − q)²).
  • Because p, q, and γ″ may all change while the robot moves or the camera rotates, this calculation should be performed in real time as the robot moves or rotates, for example 30 times per second (the actual rate depends on the robot's speed of movement or rotation; the higher the calculation frequency, the more accurate the result, but the computational load also increases; the specific calculation frequency is not limited here).
  • The above process of determining the intersection line between the robot's real-time line-of-sight angle and the gaze plane, together with the focusing distance, is controlled separately from the movement process of going over to fetch the indicated target after the target the user is gazing at on the gaze plane has been identified.
  • After the indicated target is identified, the robot is controlled to move toward the target according to the focusing distance (i.e., the distance between the robot position and the indicated target), the robot's posture, and so on, and the target can then be fetched.
  • The robot position is also obtained, the robot's coordinates are calculated, and the linear distance between the robot and the user is obtained; combined with the indoor static environment, dynamic obstacles, and the like, the robot's walking path is planned and the indicated target is brought to the user.
  • The process of determining the intersection line between the robot's real-time line-of-sight angle and the gaze plane and the focusing distance may also be controlled together with the movement process of fetching the indicated target after it is identified: the robot is controlled to move, and during the movement the area around the intersection line is scanned according to the focusing distance until the indicated target the user is gazing at on the gaze plane is identified; the robot is then moved to the position where the focusing distance (the focusing distance to the indicated target) is zero, and it reaches the indicated target.
  • The robot control method provided by this embodiment acquires the robot's real-time position and real-time line-of-sight angle during the robot's own motion and determines the intersection line between the robot's real-time line-of-sight angle and the gaze plane, as well as the focusing distance.
  • In this way the robot's camera focus is controlled to fall on the gaze plane, so that a clear image that may contain the indicated target is captured, specifically an image of the area around the intersection line.
  • During the movement, the area around the intersection line can be scanned according to the focusing distance until the indicated target the user is gazing at on the gaze plane is identified; all required parameters can be obtained in real time, so the indicated target can be identified quickly and accurately.
  • the user experience is better.
  • The above technical solution obtains the angle α relative to the robot's line-of-sight optical axis by analyzing the user's facial angle. If a robot sees the back of the user when it hears the call, the face-recognition-and-analysis technique cannot determine the angle between the direction the user is facing and the optical axis; in that case the robot should first move to the front or side of the user, as long as it can see the user's face (the user may need to repeat the previous command so that the angle between the user's facial orientation and the line-of-sight optical axis can be determined again).
  • the robot control method further includes:
  • Step 104: The robot includes a network module for networking with robots in the same coordinate system, and the network module is used to share the data of the gaze plane.
  • An advantageous effect of the embodiment of the present application is that, by networking the robot's network module with robots in the same coordinate system to share the data of the gaze plane, the robot control server can, according to the current task status of the multiple robots and in combination with the robots' positions, facing angles, and so on, control one of the robots through calculation or analysis to search for the indicated target in the user's gaze direction, so that multiple robots can work in coordination as a whole, improving work efficiency, quickly meeting user needs, and improving the user experience.
  • the same coordinate system is the reference coordinate system.
  • the robot control method further includes:
  • Step 106 Extract target feature information of the indicated target, and store the target feature information to a target directory.
  • the target directory can be set to store the target feature information.
  • the target feature information may be various attributes of the target, such as name, location, attribution, shape, size, color, price, degree of user preference, purchase path or reason for purchase, and the like.
  • Extracting the target feature information of the indicated target is specifically performed by a graphic search method or video feature extraction, and the information may also be extracted from the user's voice data.
  • the robot control method further includes:
  • Step 107: Acquire the user's voice data and video data, identify the key information in the user's voice data and video data, match the target feature information against the key information to obtain the associated key information, and store the associated key information in the target directory under the corresponding target feature information so as to update the target feature information.
  • In the target directory, the target feature information field has multiple attributes. The target feature information is matched against the key information to obtain the associated key information, and an attribute column of the associated key information is then added to the corresponding target feature information, so that the attributes of the target feature information field increase, which amounts to updating the target feature information.
  • For example, the robot control method can also better help the robot collect information when two or more people are talking. If the robot is idle, or still has the capacity to pay attention to the people around it through hearing and vision, and it hears and sees several people talking, with someone at some point indicating an object to one or more other people, the robot can still apply this method to learn what they are talking about (by acquiring the users' voice data and video data) and how they comment on the object (by identifying the key information in the users' voice data and video data), and can even join the discussion; that is, the auditory information and the visual information are associated (the target feature information is matched against the key information to find the associated key information), relevant user information is collected, and the robot can then chat with the users more intelligently. For example, two people are discussing a porcelain vase decorating the home: the host may look at the vase and say, "What do you think of this? I really love it," and the guest may reply, "The pattern is indeed beautiful. Where did you buy it?"
  • With a traditional method, the robot can hear the conversation between the two parties but does not necessarily know what item is being discussed, which makes the collected voice information completely useless.
  • With the robot control method of this embodiment, the robot has a much higher probability of learning visually which item the users are discussing, and by associating the voice information with the visually identified object it learns that the user likes the vase very much, where it was bought, the purchase amount, and so on, so that the robot knows the user better and can thereby provide more ideal and smarter services.
  • In yet another embodiment, the robot chats with the user, or the robot participates in a discussion among several people.
  • the robot control method further includes:
  • Step 108: After determining that the user is chatting with it, the robot collects the voice data and video data of the user's chat, identifies the topic information of the chat voice data and video data, matches the updated target feature information against the topic information, and completes voice and video communication with the user according to the matching result.
  • When the robot communicates with the user by voice and video, the robot determines that the user is chatting with it, for example by detecting that the user's line of sight falls on its head, or that the user glances at the robot and calls out the robot's name; it then judges that the user is chatting with the robot.
  • The robot first obtains the voice data and video data of the user's chat, identifies the topic information of that voice data and video data, calls the updated target feature information in the target directory, and matches the updated target feature information against the topic information to output the content communicated to the user, which may be output as speech or actions.
  • The robot is thus more intelligent, can provide the user with more ideal and smarter services, and enhances the user experience.
  • the embodiment of the present application further provides a robot apparatus 400.
  • the robot apparatus 400 includes a coordinate system establishing module 401, a capturing and calculating module 402, and a scanning module 403.
  • a coordinate system establishing module 401 configured to establish a reference coordinate system
  • a capture and calculation module 402 configured to capture a user's gaze direction indicating the target, acquire a robot's line of sight angle, acquire a robot position, and obtain a linear distance between the robot and the user, according to the robot's line of sight angle, the robot position, and the robot Calculating a gaze plane of the user's gaze direction relative to the reference coordinate system in real time with a straight line distance from the user;
  • the scanning module 403 is configured to smoothly scan the gaze plane by the robot to search for an indication target in the gaze direction of the user.
  • It should be noted that the robot apparatus 400 proposed by the embodiment of the present application and the robot control method proposed by the method embodiment of the present application are based on the same inventive concept; the corresponding technical content in the method embodiment and the apparatus embodiment is mutually applicable and is not described again in detail here.
  • The robot control method provided by the embodiment of the present application includes: establishing a reference coordinate system; capturing the user's gaze direction, acquiring the robot's line-of-sight angle, acquiring the robot position, and obtaining the linear distance between the robot and the user; calculating in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and at least one robot smoothly scanning the gaze plane to search for the indicated target in the user's gaze direction.
  • Compared with a prior-art robot that searches for a target from a user's simple voice instruction, the robot in this method smoothly scans the gaze plane to search for the indicated target, so the search range is smaller, the search failure rate is lower, and the user experience is better. Moreover, whereas a prior-art robot cannot know the user's line of sight, in the method provided by the embodiment of the present application the robot can capture the user's gaze direction and calculate in real time the gaze plane of that gaze direction relative to the reference coordinate system; by scanning the gaze plane it can learn exactly what the user is looking at, so the success rate of searching for the indicated target is higher and the user experience is better.
  • the robot apparatus 400 further includes:
  • a determining and identifying module 405 configured to acquire a real-time position and a real-time line of sight angle during the movement of the robot itself, determine a line of intersection between the real-time line of sight angle of the robot and the gaze plane, and a focus distance, During the process, the area around the intersection line is scanned according to the focus distance image until the indicated target of the user's gaze on the gaze plane is recognized.
  • The robot apparatus 400 provided by this embodiment acquires the robot's real-time position and real-time line-of-sight angle during the robot's own motion and determines the intersection line between the robot's real-time line-of-sight angle and the gaze plane, as well as the focusing distance.
  • In this way the robot's camera focus is controlled to fall on the gaze plane, so that a clear image that may contain the indicated target is captured, specifically an image of the area around the intersection line.
  • During the movement, the area around the intersection line can be scanned according to the focusing distance until the indicated target the user is gazing at on the gaze plane is identified; all required parameters can be obtained in real time, so the indicated target can be identified quickly and accurately.
  • the user experience is better.
  • Further, the robot's line-of-sight angle is the robot's line-of-sight optical axis, and the capture and calculation module 402 is further configured to: calculate the robot's coordinates (m, n) in the reference coordinate system from the robot position and the angle γ between the robot's line-of-sight optical axis and the X axis of the reference coordinate system; calculate the angle β between the line connecting the user position and the robot position and the robot's line-of-sight optical axis; and calculate the angle α between that connecting line and the line along the user's gaze direction. The gaze plane is the plane perpendicular to the ground whose top view is the line y = tan(α + β + γ)*x + n + d*sin(β + γ) − tan(α + β + γ)*(m + d*cos(β + γ)), where d is the linear distance between the robot and the user, starting from (m + d*cos(β + γ), n + d*sin(β + γ)).
  • Further, the robot's real-time line-of-sight angle is the robot's real-time line-of-sight optical axis, and the determining and identifying module 405 is further configured to: calculate the robot's real-time coordinates (p, q) in the reference coordinate system from its real-time position A′, and the angle γ″ between the robot's real-time line-of-sight optical axis and the X axis of the reference coordinate system; determine the intersection H of the real-time line-of-sight optical axis with the line along the user's gaze direction by jointly solving y = tan(γ″)*x + q − tan(γ″)*p with the gaze-direction line; and compute the focusing distance A′H = √((x − p)² + (y − q)²).
  • The above technical solution obtains the angle α relative to the robot's line-of-sight optical axis by analyzing the user's facial angle; if another robot around the user can obtain the linear equation of the line connecting the user position and the robot position, that equation can likewise be synchronized to the commanded robot.
  • the robot includes a network module for networking with a robot in the same coordinate system, the network module for sharing data of the gaze plane.
  • An advantageous effect of the embodiment of the present application is that, by networking the robot's network module with robots in the same coordinate system to share the data of the gaze plane, the robot control server can, according to the current task status of the multiple robots and in combination with the robots' positions, facing angles, and so on, control one of the robots through calculation or analysis to search for the indicated target in the user's gaze direction, so that multiple robots can work in coordination as a whole, improving work efficiency, quickly meeting user needs, and improving the user experience.
  • the same coordinate system is the reference coordinate system.
  • the robot may have further processing after searching for the indicated target in the user's gaze direction.
  • the robot apparatus 400 further includes:
  • the extracting and storing module 406 is configured to extract the target feature information of the indicated target, and store the target feature information to the target directory.
  • Extracting the target feature information of the indicated target is specifically performed by a graphic search method or video feature extraction, and the information may also be extracted from the user's voice data.
  • the robot apparatus 400 further includes:
  • The identification and matching module 407 is configured to acquire the user's voice data and video data, identify the key information in the user's voice data and video data, match the target feature information against the key information to obtain the associated key information, and store the associated key information in the target directory under the corresponding target feature information so as to update the target feature information.
  • the robotic device 400 can also better assist the robot in collecting information when two or more people talk.
  • The identification and matching module 407 acquires and identifies the key information in the user's voice data and video data, matches the target feature information against the key information to find the associated key information, and stores the update.
  • By matching the target feature information against the key information, the robot has a much higher probability of learning, through hearing or vision, which item the users are discussing; associating the voice information and visual information with that object to find the associated key information lets the robot know the user better and makes the robot more intelligent.
  • In yet another embodiment, the robot chats with the user, or the robot participates in a discussion among several people.
  • the robot apparatus 400 further includes:
  • The matching and communication module 408 is configured to, after determining that the user is chatting with the robot, collect the voice data and video data of the user's chat, identify the topic information of the chat voice data and video data, match the updated target feature information against the topic information, and complete voice and video communication with the user according to the matching result.
  • When the robot communicates with the user by voice and video, it first obtains the voice data and video data of the user's chat, identifies the topic information of that voice data and video data, calls the updated target feature information in the target directory, and matches the updated target feature information against the topic information so as to output the content communicated to the user, which may be output as speech or actions, making the robot more intelligent and enhancing the user experience.
  • FIG. 7 is a schematic structural diagram of hardware of a robot apparatus according to an embodiment of the present application.
  • the robotic device can be any suitable robotic device 800 that performs a robotic control method.
  • the robotic device 800 can also have one or more power devices for driving the robot to move along a particular trajectory.
  • the device includes: one or more processors 810 and a memory 820, and one processor 810 is taken as an example in FIG.
  • the processor 810 and the memory 820 may be connected by a bus or other means, and the connection by a bus is taken as an example in FIG.
  • The memory 820, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the robot control method in the embodiment of the present application (for example, the coordinate system establishing module 401, the capture and calculation module 402, and the scanning module 403 shown in FIG. 5; and the coordinate system establishing module 401, the capture and calculation module 402, the scanning module 403, the determining and identifying module 405, the extraction and storage module 406, the identification and matching module 407, and the matching and communication module 408 shown in FIG. 6).
  • the processor 810 executes various functional applications of the server and data processing by executing non-volatile software programs, instructions, and modules stored in the memory 820, that is, implementing the above-described method embodiment robot control method.
  • the memory 820 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function; the storage data area may store data created according to usage of the robot device, and the like.
  • memory 820 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • Memory 820 can optionally include memory located remotely relative to processor 810, and such remote memory can be connected to the robot device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 820, and when executed by the one or more processors 810, perform the robot control method in any of the above method embodiments.
  • The computer software can be stored in a computer-readable storage medium, and the program, when executed, can include the flow of the embodiments of the methods described above.
  • The storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.

Abstract

A robot control method, a robot apparatus, and a robot device. The method includes: establishing a reference coordinate system; capturing the user's gaze direction; acquiring the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user; calculating in real time, from the robot's line-of-sight angle, the robot position, and the linear distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and at least one robot smoothly scanning the gaze plane to search for the indicated target in the user's gaze direction. Because the robot smoothly scans the gaze plane to search for the indicated target, the search range is smaller and the search failure rate is lower, improving search accuracy and the user experience.

Description

一种机器人控制方法、机器人装置及机器人设备 技术领域
本申请涉及机器人控制领域,尤其涉及一种机器人控制方法、机器人装置及机器人设备。
背景技术
当前机器人应用相对还处于相对初级阶段,大多以对话聊天为主,即使有部分机器人能够执行其他任务,体验也并不是很好。
发明人发现相关技术至少存在以下问题:以用户要求机器人取一个东西为例,机器人只能通过语音识别得知用户要的是什么东西,大概在什么位置,然后去寻找用户所说的物品。如果所处空间较大的话,搜索范围也较大,失败率较高,用户体验较差。另一方面,用户很多时候并不希望详细的描述这个物品在哪儿,什么样子,而只是指一下和望一眼,而当前机器人显然无法获知用户的视线,也就无法知道用户所看的到底是什么物体。
发明内容
本申请针对现有技术的机器人根据用户较简单的语音指示去寻找目标,存在搜索范围较大,失败率较高,用户体验较差的技术问题,以及现有技术的机器人由于无法获知用户的视线,也就无法知道用户所看的到底是什么物体的技术问题,提供一种机器人控制方法、机器人装置及机器人设备,技术方案如下:
本申请实施例提供一种机器人控制方法,包括:
建立参照坐标系;
捕捉用户注视方向,获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离,根据所述机器人视线角度、所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的注视平面;
机器人平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。
本申请实施例还提供一种机器人装置,包括:
坐标系建立模块,用于建立参照坐标系;
捕捉和计算模块,用于捕捉指示目标的用户注视方向,获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离,根据所述机器人视线角度、 所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的注视平面;
扫描模块,用于机器人平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。
本申请实施例还提供一种机器人设备,包括至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令程序,所述指令程序被所述至少一个处理器执行,以使所述至少一个处理器能够执行上述的方法。
本申请实施例还提供一种包括软件代码部分的计算机程序产品,所述软件代码部分被配置用于当在计算机的存储器中运行时执行上述的方法步骤。
本申请实施例的有益效果在于,本申请实施例提供的机器人控制方法包括:建立参照坐标系;捕捉用户注视方向,获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离,根据所述机器人视线角度、所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的注视平面;至少一个机器人平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。相对于现有技术的机器人根据用户较简单的语音指示去寻找目标,本申请实施例提供的方法中机器人平滑扫描所述注视平面来搜索指示目标,搜索范围较小,搜索失败率较低,提升搜索准确率,用户体验较好;而且相对于现有技术的机器人由于无法获知用户的视线,本申请实施例提供的方法中机器人可以捕捉用户注视方向,并实时计算所述注视方向相对所述参照坐标系的注视平面,通过扫描所述注视平面从而可以获知用户所注视的到底是什么目标,搜寻所述指示目标的成功率较高,用户体验更好。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1是本申请实施例提供的机器人设备的应用环境图;
图2a、2b和2c是本申请实施例提供的机器人控制方法的场景俯视图;
图3是本申请实施例提供的机器人控制方法的流程示意图;
图4是本申请另一实施例提供的机器人控制方法的流程示意图;
图5是本申请又一实施例提供的机器人装置的结构框图;
图6是本申请实施例提供的机器人装置的结构框图;
图7是本申请实施例提供的机器人设备的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。
除非另有定义,本说明书所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本说明书中在本申请的说明书中所使用的术语只是为了描述具体的实施方式的目的,不是用于限制本申请。本说明书所使用的术语“和/或”包括一个或多个相关的所列项目的任意的和所有的组合。
此外,下面所描述的本申请各个实施方式中所涉及到的技术特征只要彼此之间未构成冲突就可以相互组合。
近年来,随着计算机技术、信息技术、通讯技术、微电子技术的飞速发展,移动机器人技术得到了广泛的应用,例如扫地机器人、餐馆传菜机器人以及可与人对话聊天的机器人等。机器人在向目标跟踪移动的过程中,可通过全球卫星导航系统(Global Navigation Satellite System,GNSS)和超宽带(Ultra Wideband,UWB)与预先布设的网络节点共同完成定位,通过GNSS和UWB的定位方式,机器人能够获知精度持续稳定的位置信息,可实现复杂环境下较高精度地定位。当然,也可以采用其他机器人定位方式,本申请实施例对此不做限制。
机器人上一般搭载一个或多个摄像头,例如,在机器人的脸部设置摄像头,相当于人的眼睛。光轴,即光学系统的对称轴,摄像头的光轴为通过镜头中心的线,光束(光柱)的中心线,光束绕此轴转动,不应有任何光学特性的变化。
相对于相关技术的机器人根据用户较简单的语音指示去寻找目标,用户很多时候并不希望详细的描述这个物品在哪儿,什么样子,而只是指一下和望一眼,而相关技术的机器人显然无法获知用户的视线,也就无法知道用户注视的到底是什么物品。另一方面,在两人或多人谈论一样物品时,时常也会出现只是指一下看一下来指示的情况,机器人对这种情况因为不知道到底指的哪一样东西,就无法将语音信息与对应的物品做关联,自然也就很难收集到用户谈论物品的有效信息。
因此,如果本申请实施例提供一种机器人控制方法,旨在使机器人获知用户在注视的是什么,或者至少获知用户的视线,则可以大幅缩小范围,较容易获悉目标。
图1为本申请实施例提供的机器人控制方法的应用环境。如图1所示,该应用 环境包括:用户100、机器人200和物品300。当用户100在和机器人200交谈过程中或者和其他人交谈过程中,提到并注视附近的某样物品300时,对机器人说道“Tom,帮我把那本书递过来”,该机器人200需要得到用户100的视线方向来更准确的知道用户要求的书是哪一本。
图2a、2b为本申请实施例提供的机器人控制方法的场景环境的俯视图。AM直线为机器人视线角度,具体为机器人视线光轴。假设用户100所处空间俯视图如图2a、2b所示,以左下角位置为参照坐标系的坐标(0,0)点,用户处于C点,对机器人S说:“Tom,帮我把那本书递过来”,一边说着一边望向并指向处于位置T的一本书。机器人此时刻可能正直视用户,也可能之前正在处理别的工作,此刻正将头转向用户但还没完全直视用户,只是用户出现在了视野范围内。
这个时刻,如果听到呼叫的机器人自身视角中能够看到用户的脸,则可通过具体脸部识别分析算法确定其面向方向(面向正方向)与机器人光轴的夹角(也可通过其他算法,只要得到此数据即可),进而也就可计算出用户面向方向在当前坐标系中的绝对方向。下面具体阐述机器人控制方法。
如图3所示,本申请实施例提供的机器人控制方法包括:
步骤101、建立参照坐标系;
可以采用上述机器人定位方式,对室内环境定位,设立真实空间的某个点为坐标原点(0,0)点。根据室内定位,机器人任何时刻都可获取自身位置,计算自身所处的坐标。
步骤102、捕捉指示目标的用户注视方向,获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离,根据所述机器人视线角度、所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的注视平面。
所述注视平面在三维空间中为垂直于地面的平面,俯视所述注视平面得到用户注视方向所在直线。即为图2a、2b所示的CB直线。图2a与图2b的不同之处在于,在参照坐标系中,图2a的机器人位于图中所示用户的右侧,图2b的机器人位于图中所示用户的左侧。
进一步地,机器人视线角度为机器人视线光轴。根据所述机器人视线角度、所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的注视平面具体包括:
根据所述机器人位置计算所述机器人在参照坐标系的坐标(m,n)并计算所述机器人视线光轴与所述参照坐标系X轴的夹角γ;根据所述机器人视线角度、机器人 位置和用户位置计算用户位置和机器人位置的连线所在直线与所述机器人视线光轴的夹角β;计算所述用户位置和机器人位置的连线所在直线与用户注视方向所在直线的夹角α;根据所述机器人视线光轴与参照坐标系X轴的夹角γ、所述用户位置和机器人位置的连线所在直线与所述机器人视线光轴的夹角β、所述用户位置和机器人位置的连线所在直线与用户注视方向所在直线的夹角α和所述机器人与用户之间的直线距离实时计算所述注视方向相对所述参照坐标系的注视平面。
如图2a、2b所示,所述用户位置为C点,指示目标的位置为B点,所述机器人位置为A点,沿用户位置C点画与所述参照坐标系Y轴平行的纵向线,沿机器人位置A点画与所述参照坐标系X轴平行的横向线,横向线与纵向线垂直相交于D点,横向线与用户注视方向所在直线相交于E点,根据三角形内角和定理的推论:三角形的一个外角等于和它不相邻的两个内角和,通过如下算式,计算ACE三角形在E点的外角θ:
θ=α+β+γ;
值得说明的是,图2a中θ=α+β+γ;、图2b中,θ=α-β+γ。β的取值,机器人视线角度逆时针转动夹角β到用户位置时,则定义β的值为正数;机器人视线角度顺时针转动夹角β到用户位置时,则定义β的值为负数。
在三角形ACD中,通过如下算式,计算AD和CD:
AD=d*cos(180°-β-γ)=-d*cos(β+γ);
CD=d*sin(180°-β-γ)=d*sin(β+γ);
在所述参照坐标系中,O为原点,横向线与Y轴相交于F点,机器人位置A点坐标为(m,n),可知m=FA;n=OF;用户位置C点坐标为(j,k),可知j=FA-AD,k=OF+CD,将AD、CD、m和n代入,可得C点坐标(m+d*cos(β+γ),n+d*sin(β+γ));
将用户注视方向所在直线设定为一次方程y=k*x+b;此处k=tanθ=tan(α+β+γ);
代入如上C点坐标(m+d*cos(β+γ),n+d*sin(β+γ)),可得:
b=y–k*x=n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ));
所述用户注视方向所在直线一次方程为:
y=tan(α+β+γ)*x+n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ));
所述注视平面为以(m+d*cos(β+γ),n+d*sin(β+γ))为起始点,沿着 一次方程y=tan(α+β+γ)*x+n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ))向前的方向并且垂直于地面的平面。
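For reference, the following Python sketch (not part of the patent text; a minimal illustration of the derivation above, assuming the Fig. 2a sign convention θ = α + β + γ and all angles given in radians) computes the top-view line of the user's gaze direction, y = k*x + b, from the robot coordinates (m, n), the angles γ, β, α, and the robot–user distance d:

```python
import math

def gaze_line(m, n, gamma, beta, alpha, d):
    """Return (k, b, start_point) for the top-view gaze-direction line y = k*x + b.

    m, n  -- robot coordinates A in the reference coordinate system
    gamma -- angle between the robot's line-of-sight optical axis and the X axis
    beta  -- angle between the robot-user line and the optical axis (signed as in the text)
    alpha -- angle between the robot-user line and the user's gaze direction
    d     -- straight-line distance between robot and user
    """
    # User position C = (m + d*cos(beta+gamma), n + d*sin(beta+gamma)), per the derivation above
    cx = m + d * math.cos(beta + gamma)
    cy = n + d * math.sin(beta + gamma)
    theta = alpha + beta + gamma      # exterior angle at E (Fig. 2a case)
    k = math.tan(theta)               # slope of the gaze-direction line
    b = cy - k * cx                   # intercept, from substituting C into y = k*x + b
    return k, b, (cx, cy)

# Illustrative values only: robot at (2.0, 1.0), gamma=30 deg, beta=20 deg, alpha=40 deg, d=3.0
k, b, start = gaze_line(2.0, 1.0, math.radians(30), math.radians(20), math.radians(40), 3.0)
print(f"gaze line: y = {k:.3f}*x + {b:.3f}, starting at {start}")
```

The gaze plane is then the vertical plane through this line, starting at the returned start point (the user position C).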
在一些实施例中,捕捉用户注视方向,可根据所述用户注视方向生成搜寻用户指示目标的信号;根据所述搜寻用户指示目标的信号获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离。
用户在与其他人交谈过程中,用户注视了指示目标,此时,机器人可以主动捕捉用户注视方向,也可以根据用户的指令,例如上文中用户跟机器人的对话:“Tom,帮我把那本书递过来”,这种语音指令,而捕捉用户注视方向。机器人可以主动捕捉用户注视方向时,并不需要用户发出指令。
如果用户发出了指令,则机器人接收由用户发出的搜寻目标指令,根据搜寻目标指令捕捉用户注视方向。
步骤103、机器人平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。
具体地,机器人的数量可为一个或多个。可选地,间隔预设时长平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。间隔预设时长具体为0.1秒-5秒,结合机器人视角以及机器人摄像头拍摄图像的分辨率等设定。
具体地,控制至少一个机器人的摄像头焦点落在所述注视平面在所述用户位置到所述用户指示目标之间的部分平面内,以搜寻用户注视方向上的指示目标。
更具体地,机器人搜寻用户注视方向上的指示目标,并不一定需要移动过去,如果机器人站立的位置刚好位于所述注视平面上,则机器人只需沿着注视平面前进或后退即可。如果机器人站立的位置刚好可以很清晰的捕捉到所述注视平面,以及可平滑扫描所述注视平面,则机器人只需转动头部即可。此时,则控制至少一个机器人视线光轴从所述用户位置往所述用户指示目标偏转并获取偏转后视线光轴;根据所述机器人在参照坐标系的坐标计算所述偏转后视线光轴与所述参照坐标系X轴的夹角γ′;根据所述机器人位置计算所述机器人在参照坐标系的坐标;根据所述机器人在参照坐标系的坐标、所述偏转后视线光轴与参照坐标系X轴的夹角γ′和所述注视平面计算偏转后摄像头的对焦距离;根据所述偏转后视线光轴和所述偏转后摄像头的对焦距离控制所述机器人的摄像头的焦点落在所述注视平面在所述用户位置到所述用户指示目标之间的部分平面内,以搜寻所述用户指示目标。
本申请实施例的有益效果在于,本申请实施例提供的机器人控制方法包括:建立参照坐标系;捕捉用户注视方向,获取机器人视线角度,获取机器人位置,获取机器人与用户之间的直线距离,根据所述机器人视线角度、所述机器人位置以及所述机器人与用户之间的直线距离实时计算所述用户注视方向相对所述参照坐标系的 注视平面;至少一个机器人平滑扫描所述注视平面,以搜寻用户注视方向上的指示目标。相对于现有技术的机器人根据用户较简单的语音指示去寻找目标,本申请实施例提供的方法中机器人平滑扫描所述注视平面来搜索指示目标,搜索范围较小,搜索失败率较低,提升搜索准确率,用户体验较好;而且相对于现有技术的机器人由于无法获知用户的视线,本申请实施例提供的方法中机器人可以捕捉用户注视方向,并实时计算所述注视方向相对所述参照坐标系的注视平面,通过扫描所述注视平面从而可以获知用户所注视的到底是什么目标,搜寻所述指示目标的成功率较高,用户体验更好。
如果机器人所处位置不佳,需要移动以靠近注视平面,找到可以清晰的捕捉到所述注视平面,以及可平滑扫描所述注视平面的位置,需要控制机器人移动。
随着机器人或人的位置变化或视线角度变化,如上的m,n,d,α,β,γ都会变化,因此不管是人在运动还是机器人在运动,该计算应取用户注视方向的那一刻的所有数据作为计算参数,因为这条一次方程的直线是不会变的,只取决于用户指示的那一刻。
得到用户注视方向所在直线的一次方程,机器人只要追踪这条线上的点即可。不管自身位置在哪儿,都可以得到自身视线与此线的距离,也就是摄像头的对焦距离。机器人可以站在某处一边转动头部一边实时计算对焦距离,以使头部转动过程中始终使对焦点保持在这条线上(在俯视图中是一条线,在三维空间中是一个垂直于地面的平面,使头部转动过程中始终使对焦点保持在注视平面上),当然,机器人也可能根据地形变化,障碍物遮挡等原因不得不在移动过程中找寻之前用户视线上的物体,但同样只要实时计算即可。如下图所示,机器人将在自身运动过程中平滑扫描从C点开始的CB直线,以搜寻用户所指示目标。
在一些实施例中,机器人控制方法还包括:
步骤105、在机器人自身运动过程中获取自身实时位置和实时视线角度,确定机器人实时视线角度与所述注视平面的交线以及对焦距离,在移动过程中,根据对焦距离图像扫描所述交线周围区域,直至识别出所述注视平面上用户注视的指示目标。
进一步地,机器人通过图形搜索法平滑扫描所述交线周围区域;当搜寻到至少两个符合用户指示目标图形的目标时,发出请求确认的请求;如果用户对请求确认的请求给予响应,给出确认结果,则机器人根据所述请求确认的请求返回的确认结果选取其中一个符合用户指示目标图形的目标作为所述户指示目标。
具体地,机器人实时视线角度为机器人实时视线光轴。所述在机器人自身运动过程中获取自身实时位置和实时视线角度,确定机器人实时视线角度与所述注视平 面的交线以及对焦距离包括:
图2c是本申请实施例提供的机器人控制方法的中机器人移动过程中的场景示意图。在图2c中,在机器人自身运动过程中获取自身实时位置为A′点,根据所述机器人实时位置计算机器人在参照坐标系的实时坐标(p,q)并计算机器人实时视线光轴与所述参照坐标系X轴的夹角γ″;所述机器人实时视线光轴与所述用户注视方向所在直线的交点为H,则可知A′H的距离为所述对焦距离;
将A′H所在直线设定为一次方程y=k*x+b,此处k=tanγ″,代入k和x、y,可得:
b=y–k*x=q–tan(γ″)*p;
继而得到A′H所在直线的一次方程为:
y=tan(γ″)*x+q–tan(γ″)*p;
联合求解A′H所在直线和用户注视方向所在直线的二元方程:
y=tan(α+β+γ)*x+n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ));
y=tan(γ″)*x+q–tan(γ″)*p;
求解出的(x,y)即为机器人摄像头实时视线光轴与所述用户注视方向所在直线的交点H点坐标;
根据H点坐标和A点坐标计算A′H的距离,为所述对焦距离:
A′H=√((x−p)²+(y−q)²)
因为机器人在移动过程中或者摄像头转动时,p,q,γ″都可能改变。因此,该计算应该是随着机器人的移动或转动实时进行的,如每秒30次(实际次数取决于机器人的移动或转动速度,计算频率越高则越准确,但计算量也将增大,具体计算频率在此不做限定)。
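As a companion to the joint-solution steps above, the following Python sketch (a minimal illustration, not part of the patent text; it assumes the gaze-line slope k and intercept b have already been computed as in the previous sketch, and all angles are in radians) finds the intersection H of the robot's real-time line-of-sight optical axis with the gaze-direction line and the corresponding focusing distance A′H, which would be recomputed at the chosen rate (e.g. ~30 times per second) as the robot moves or the camera rotates:

```python
import math

def focus_distance(p, q, gamma2, k, b):
    """Intersection H of the camera axis through A'(p, q) at angle gamma2 with the
    gaze-direction line y = k*x + b, and the focusing distance A'H.
    Returns (H, distance), or None if the two lines are parallel."""
    k2 = math.tan(gamma2)
    b2 = q - k2 * p               # camera-axis line: y = k2*x + b2 = tan(gamma2)*x + q - tan(gamma2)*p
    if math.isclose(k, k2):
        return None               # axis parallel to the gaze line: no intersection
    x = (b2 - b) / (k - k2)       # solve k*x + b = k2*x + b2
    y = k * x + b
    dist = math.hypot(x - p, y - q)   # A'H = sqrt((x - p)^2 + (y - q)^2)
    return (x, y), dist

# Illustrative values only
k, b = 1.2, -0.4
H, d_focus = focus_distance(1.5, 0.5, math.radians(55), k, b)
print(f"intersection H = {H}, focusing distance = {d_focus:.3f}")
```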
上述确定机器人实时视线角度与所述注视平面的交线以及对焦距离的过程,与识别出所述注视平面上用户注视的指示目标之后移动过去取到指示目标的移动过程是分开控制的。识别出指示目标后,根据对焦距离,也就是机器人位置与指示目标的距离,机器人的姿态等,控制机器人往指示目标行进,即可去取指示目标。再提供获取机器人位置,计算机器人坐标,获取机器人与用户之间的直线距离,结合室内静态环境、动态障碍物等,规划机器人行走路径,将指示目标拿给用户。
确定机器人实时视线角度与所述注视平面的交线以及对焦距离的过程,与识别出所述注视平面上用户注视的指示目标之后移动过去取到指示目标的移动过程也可以一起控制,此时,控制机器人移动,在移动过程中,根据对焦距离图像扫描所述 交线周围区域,直至识别出所述注视平面上用户注视的指示目标,再控制机器人往对焦距离(对焦指示目标的对焦距离)为零的位置上移动,即可到达指示目标。
本申请实施例的有益效果在于,本实施例提供的机器人控制方法,在机器人自身运动过程中获取自身实时位置和实时视线角度,确定机器人实时视线角度与所述注视平面的交线以及对焦距离。以此实现控制机器人的摄像头焦点落在所述注视平面,以此拍摄到清晰的可能包含指示目标的图像,该图像具体为交线周围区域的图像。在移动过程中,可根据对焦距离图像扫描所述交线周围区域,直至识别出所述注视平面上用户注视的指示目标,需要的参数均可实时获取,可较快捷且准确地识别出指示目标,用户体验较好。
上述技术方案是通过对用户的面部角度分析得到一个和机器人视线光轴的角度α,而如果一个机器人听到呼叫时看到的是用户的背面,则通过脸部识别与分析的技术将无法确定用户面向的方向与光轴间的角度,这种时候应先转到用户的前方或侧方,只要能看到用户的脸即可(有可能需要用户重新指示一下刚才的命令,重新确定一下用户的面部朝向与视线光轴的夹角)。另一方面,如果有其他的算法能够通过图像识别的方式从用户后方也能判定出用户面部正对的方向与光轴的夹角。再或者,用户周围有其他机器人能够获取到用户位置和机器人位置的连线所在直线的一次方程,同样也可以同步给被命令的机器人。具体地,机器人控制方法还包括:
步骤104、所述机器人包括用于与同一坐标系下的机器人联网的网络模块,所述网络模块用于共享所述注视平面的数据。
本申请实施例的有益效果在于,通过机器人的网络模块与同一坐标系下的机器人联网,共享所述注视平面的数据,机器人控制服务器可根据多个机器人当前处理任务的状态,结合机器人的位置,面向角度等,通过计算或分析,控制其中一个机器人去搜寻用户注视方向上的指示目标,可整体上让多个机器人协调工作,提高工作效率,快速满足用户需求,提升用户体验。其中,同一坐标系为所述参照坐标系。
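The patent does not prescribe how the network module shares the gaze-plane data; as one purely hypothetical sketch, a robot could broadcast the parameters of the gaze-direction line (and its start point) to other robots registered in the same reference coordinate system, for example as a small JSON payload over UDP. All names, the port, and the transport below are assumptions for illustration only:

```python
import json
import socket

def broadcast_gaze_plane(k, b, start_point, port=50007):
    """Hypothetical helper: share the gaze-plane parameters (top-view line y = k*x + b
    and its start point) with other robots in the same reference coordinate system,
    here via a simple UDP broadcast of a JSON payload."""
    payload = json.dumps({"k": k, "b": b, "start": start_point}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))

broadcast_gaze_plane(1.2, -0.4, (3.5, 3.8))
```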
在又一实施例中,如图4所示,机器人搜寻到用户注视方向上的指示目标后,还可有进一步的处理。所述机器人控制方法还包括:
步骤106、提取所述指示目标的目标特征信息,存储所述目标特征信息至目标目录。
机器人控制服务器上,或者机器人自动的存储器上,可设置目标目录来存储目标特征信息。
目标特征信息可以为目标的各种属性,如名称、所处位置、归属、形状、大小、颜色、价格、用户的喜欢程度、购买途径或购买原因等等。
提取所述指示目标的目标特征信息具体为通过图形搜索方法提取,视频特征提取,也可以通过用户的语音数据提取。具体地,在一些实施例中,如图4所示,所述机器人控制方法还包括:
步骤107、获取用户的语音数据与视频数据,识别所述用户的语音数据与视频数据的关键信息,将目标特征信息与所述关键信息进行匹配得到关联的关键信息,在目标目录中存储所述关联的关键信息至对应的目标特征信息下以更新目标特征信息。
在目标目录中,目标特征信息字段有多个属性,通过目标特征信息与关键信息进行匹配得到关联的关键信息,然后给对应的目标特征信息增加关联的关键信息的属性列,这样,目标特征信息字段的属性增加了,相对于更新了目标特征信息。
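The patent likewise leaves the data structure of the target directory open; the following is a hypothetical Python sketch of how target feature information might be stored as attribute columns and later extended with associated key information, with the function names and attribute keys invented purely for illustration:

```python
# Hypothetical in-memory target directory: one record of attribute columns per indicated target.
target_directory = {}

def store_target(name, **features):
    """Store extracted target feature information under a target entry."""
    target_directory.setdefault(name, {}).update(features)

def add_associated_key_info(name, key_info):
    """When recognized key information matches an existing target entry, add the
    associated key information as new attribute columns (i.e. update the feature info)."""
    entry = target_directory.get(name)
    if entry is not None:
        entry.update(key_info)

store_target("vase", location="living room shelf", color="blue-and-white", shape="porcelain vase")
add_associated_key_info("vase", {"owner_preference": "likes it very much", "purchase_place": "unknown"})
print(target_directory["vase"])
```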
例如,机器人控制方法还可以在两人或多人对话时,更好的帮助机器人收集信息。如机器人在闲置状态或者还有能力通过听觉和视觉关注周围人时,如果听到且看到多人正在对话,且过程中出现某人向另外一个人或多个人指示了某物时,机器人依然可以应用此方法知道大家在谈论什么(通过获取用户的语音数据与视频数据),对该物体的评论如何(识别所述用户的语音数据与视频数据的关键信息),甚至可以加入讨论,也就是将听觉信息和视觉信息得以关联(将目标特征信息与所述关键信息进行匹配找出关联的关键信息),继而也就收集了相关用户信息,之后可以借此与用户更加智能的聊天。比如,两人正在讨论一个家里装饰的瓷花瓶,主人可能边注视花瓶边对客人说:“看这个怎么样?我超级喜欢”,客人可能回答:“这个花纹确实很漂亮,不错不错,在哪儿买的?”……按照传统的方法,机器人能够听到两方的对话,但并不一定能得知正在讨论的是什么物品,也就造成收集的语音信息是完全没有用处的。而结合本实施例的机器人控制方法,机器人有更大的概率通过视觉得知用户讨论的物品,并将语音信息与视觉上的该物体做关联,得知用户非常喜欢这个花瓶,购买地点,购买金额等等信息,使机器人对用户的了解更多,借此为用户提供更理想更智能的服务。
在再一个实施例中,体现机器人与用户聊天,或机器人参与多人之间的讨论。如图4所示,所述机器人控制方法还包括:
步骤108、所述机器人在判断用户聊天后,收集用户聊天的语音数据和视频数据,识别所述用户聊天的语音数据和视频数据的主题信息,将更新后目标特征信息与所述主题信息进行匹配,根据匹配结果完成与用户的语音与视频交流。
机器人与用户进行语音与视频交流时,机器人判断到用户在于其聊天,例如捕捉到用户的视线落在自身的头部,或者用户扫视了机器人并喊出机器人的名字等,则判断为用户在与机器人聊天。机器人先获取用户聊天的语音数据和视频数据,识 别出述用户聊天的语音数据和视频数据的主题信息,调用更新后目标目录的目标特征信息,将更新后目标特征信息与所述主题信息进行匹配,以输出与用户交流的内容,可以输出语音或动作,机器人更加智能化,可为用户提供更理想更智能的服务,提升用户体验。
An embodiment of the present application further provides a robot apparatus 400. As shown in FIG. 5, the robot apparatus 400 includes a coordinate system establishing module 401, a capturing and calculating module 402, and a scanning module 403.
The coordinate system establishing module 401 is configured to establish a reference coordinate system;
the capturing and calculating module 402 is configured to capture the user's gaze direction indicating a target, obtain the robot's line-of-sight angle, obtain the robot's position, obtain the straight-line distance between the robot and the user, and calculate in real time, according to the robot's line-of-sight angle, the robot's position and the straight-line distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system;
the scanning module 403 is configured for the robot to smoothly scan the gaze plane so as to search for the indicated target in the user's gaze direction.
It should be noted that the robot apparatus 400 proposed in this embodiment of the present application and the robot control method proposed in the method embodiments of the present application are based on the same inventive concept; the corresponding technical content in the method embodiments and the apparatus embodiments is mutually applicable and is not described in detail here again.
A beneficial effect of the embodiments of the present application is that the robot control method provided by the embodiments includes: establishing a reference coordinate system; capturing the user's gaze direction, obtaining the robot's line-of-sight angle, the robot's position and the straight-line distance between the robot and the user, and calculating in real time, from the robot's line-of-sight angle, the robot's position and the straight-line distance between the robot and the user, the gaze plane of the user's gaze direction relative to the reference coordinate system; and at least one robot smoothly scanning the gaze plane to search for the indicated target in the user's gaze direction. Compared with prior-art robots that look for a target only according to the user's relatively simple voice instruction, in the method provided by the embodiments of the present application the robot smoothly scans the gaze plane to search for the indicated target, so the search range is smaller, the search failure rate is lower, and the user experience is better. Moreover, whereas prior-art robots cannot obtain the user's line of sight, in the method provided by the embodiments the robot can capture the user's gaze direction, calculate in real time the gaze plane of that gaze direction relative to the reference coordinate system, and, by scanning the gaze plane, learn exactly what target the user is gazing at; the success rate of searching for the indicated target is therefore higher and the user experience better.
If the robot is in a poor position and needs to move closer to the gaze plane to find a position from which it can clearly capture the gaze plane and smoothly scan it, the robot needs to be controlled to move. In some embodiments, as shown in FIG. 6, the robot apparatus 400 further includes:
a determining and recognizing module 405, configured to acquire the robot's real-time position and real-time line-of-sight angle during the robot's own movement, determine the intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as the focus distance, and, during the movement, scan the area around the intersection line with images taken at the focus distance until the indicated target gazed at by the user on the gaze plane is recognized.
A beneficial effect of this embodiment of the present application is that the robot apparatus 400 provided by this embodiment acquires the robot's real-time position and real-time line-of-sight angle during the robot's own movement and determines the intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as the focus distance. The robot camera's focal point is thereby controlled to fall on the gaze plane, so that a clear image that may contain the indicated target can be captured, specifically an image of the area around the intersection line. During the movement, the area around the intersection line can be scanned with images taken at the focus distance until the indicated target gazed at by the user on the gaze plane is recognized; all required parameters can be obtained in real time, the indicated target can be recognized quickly and accurately, and the user experience is good.
Further, the robot's line-of-sight angle is the robot's line-of-sight optical axis, and the capturing and calculating module 402 is further configured to:
calculate the robot's coordinates (m, n) in the reference coordinate system from the robot's position, and calculate the angle γ between the robot's line-of-sight optical axis and the X axis of the reference coordinate system; calculate, from the robot's line-of-sight angle, the robot's position and the user's position, the angle β between the line connecting the user's position and the robot's position and the robot's line-of-sight optical axis; and calculate the angle α between the line connecting the user's position and the robot's position and the line along the user's gaze direction;
the gaze plane is a plane perpendicular to the ground in three-dimensional space; viewing the gaze plane from above gives the line along the user's gaze direction, which is set as the linear equation y=k*x+b, and the gaze plane is specifically calculated by the following formula:
y=tan(α+β+γ)*x+n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ));
where d is the straight-line distance between the robot and the user;
the gaze plane is the plane that starts at (m+d*cos(β+γ), n+d*sin(β+γ)), extends forward along the linear equation y=tan(α+β+γ)*x+n+d*sin(β+γ)-tan(α+β+γ)*(m+d*cos(β+γ)), and is perpendicular to the ground.
Further, the robot's real-time line-of-sight angle is the robot's real-time line-of-sight optical axis, and the determining and recognizing module 405 is further configured to:
calculate the robot's real-time coordinates (p, q) in the reference coordinate system from the robot's real-time position, point A′, and calculate the angle γ″ between the robot's real-time line-of-sight optical axis and the X axis of the reference coordinate system; the intersection of the robot's real-time line-of-sight optical axis and the line along the user's gaze direction is H, so the distance A′H is the focus distance; the line on which A′H lies is set as the linear equation y=k*x+b, and the focus distance is specifically calculated by the following formula:
y=tan(γ″)*x+q–tan(γ″)*p;
solving simultaneously the pair of equations for the line on which A′H lies and the line along the user's gaze direction, the solved (x, y) is the coordinate of point H, the intersection of the robot's real-time line-of-sight optical axis and the line along the user's gaze direction;
the distance A′H, which is the focus distance, is calculated from the coordinates of point H and point A′:
A′H = √((x–p)^2+(y–q)^2).
The above technical solution obtains the angle α between the user's face orientation and the robot's line-of-sight optical axis by analysing the user's facial angle; if another robot near the user can obtain the linear equation of the line connecting the user's position and the robot's position, that equation can likewise be synchronized to the commanded robot.
Specifically, the robot includes a network module for networking with robots in the same coordinate system, and the network module is configured to share the data of the gaze plane.
A beneficial effect of this embodiment of the present application is that, by networking with robots in the same coordinate system through the robot's network module and sharing the data of the gaze plane, a robot control server can, based on the current task status of multiple robots, combined with their positions, facing angles and so on, control one of the robots through calculation or analysis to search for the indicated target in the user's gaze direction. This allows multiple robots to work in coordination overall, improves working efficiency, meets the user's needs quickly, and improves the user experience. Here, the same coordinate system is the reference coordinate system.
In yet another embodiment, after the robot finds the indicated target in the user's gaze direction, further processing may be performed. The robot apparatus 400 further includes:
an extracting and storing module 406, configured to extract target feature information of the indicated target and store the target feature information in a target catalogue.
The target feature information of the indicated target is extracted specifically by a graphic search method or by video feature extraction, and may also be extracted from the user's voice data. Specifically, in some embodiments, as shown in FIG. 6, the robot apparatus 400 further includes:
a recognizing and matching module 407, configured to acquire the user's voice data and video data, recognize key information in the user's voice data and video data, match the target feature information against the key information to obtain associated key information, and store the associated key information under the corresponding target feature information in the target catalogue so as to update the target feature information.
For example, the robot apparatus 400 can also help the robot collect information better when two or more people are talking. The recognizing and matching module 407 is configured to acquire and recognize key information in the users' voice data and video data, match the target feature information against the key information to find the associated key information, and store the update. By matching the target feature information against the key information, the robot has a much better chance of learning, through hearing or vision, which item the users are discussing, and of associating the voice information and visual information with that object to find the associated key information; in other words, the robot learns more about the user and becomes more intelligent.
In a further embodiment, the robot chats with the user, or the robot takes part in a discussion among several people. As shown in FIG. 6, the robot apparatus 400 further includes:
a matching and interacting module 408, configured, after the robot determines that the user is chatting with it, to collect the voice data and video data of the user's chat, recognize topic information in the voice data and video data of the user's chat, match the updated target feature information against the topic information, and complete voice and video interaction with the user according to the matching result.
When the robot interacts with the user by voice and video, it first acquires the voice data and video data of the user's chat, recognizes the topic information in that voice data and video data, retrieves the updated target feature information from the target catalogue, and matches the updated target feature information against the topic information so as to output content for interacting with the user, which may be speech or actions; this makes the robot more intelligent and improves the user experience.
FIG. 7 is a schematic diagram of the hardware structure of a robot device provided by an embodiment of the present application. The robot device may be any robot device 800 suitable for executing the robot control method. The robot device 800 may also have one or more power units for driving the robot to move along a specific trajectory.
As shown in FIG. 7, the device includes one or more processors 810 and a memory 820; one processor 810 is taken as an example in FIG. 7.
The processor 810 and the memory 820 may be connected via a bus or in other ways; connection via a bus is taken as an example in FIG. 7.
As a non-volatile computer-readable storage medium, the memory 820 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the robot control method in the embodiments of the present application (for example, the coordinate system establishing module 401, the capturing and calculating module 402 and the scanning module 403 shown in FIG. 5, and the coordinate system establishing module 401, the capturing and calculating module 402, the scanning module 403, the determining and recognizing module 405, the extracting and storing module 406, the recognizing and matching module 407 and the matching and interacting module 408 shown in FIG. 6). By running the non-volatile software programs, instructions and modules stored in the memory 820, the processor 810 executes the various functional applications and data processing of the server, that is, implements the robot control method of the above method embodiments.
The memory 820 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the robot apparatus, and the like. In addition, the memory 820 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 820 may optionally include memories remotely located with respect to the processor 810, and these remote memories may be connected to the robot apparatus via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The one or more modules are stored in the memory 820 and, when executed by the one or more processors 810, perform the robot control method in any of the above method embodiments.
Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application. The computer software may be stored in a computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Within the idea of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

  1. A robot control method, comprising:
    establishing a reference coordinate system;
    capturing a user gaze direction indicating a target, obtaining a robot line-of-sight angle, obtaining a robot position, obtaining a straight-line distance between the robot and the user, and calculating in real time, according to the robot line-of-sight angle, the robot position and the straight-line distance between the robot and the user, a gaze plane of the user gaze direction relative to the reference coordinate system;
    smoothly scanning, by the robot, the gaze plane to search for the indicated target in the user gaze direction.
  2. The method according to claim 1, wherein the robot comprises a network module for networking with robots in the same coordinate system, and the network module is configured to share data of the gaze plane.
  3. The method according to claim 2, wherein the method further comprises:
    acquiring, during the robot's own movement, a real-time position and a real-time line-of-sight angle of the robot, determining an intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as a focus distance, and, during the movement, scanning an area around the intersection line with images taken at the focus distance until the indicated target gazed at by the user on the gaze plane is recognized.
  4. The method according to claim 3, wherein the method further comprises:
    extracting target feature information of the indicated target and storing the target feature information in a target catalogue.
  5. The method according to claim 4, wherein the method further comprises:
    acquiring voice data and video data of the user, recognizing key information in the voice data and video data of the user, matching the target feature information against the key information to obtain associated key information, and storing the associated key information under the corresponding target feature information in the target catalogue so as to update the target feature information.
  6. The method according to claim 5, wherein the method further comprises:
    after determining that the user is chatting with it, collecting, by the robot, voice data and video data of the user's chat, recognizing topic information in the voice data and video data of the user's chat, matching the updated target feature information against the topic information, and completing voice and video interaction with the user according to a matching result.
  7. A robot apparatus, comprising:
    a coordinate system establishing module, configured to establish a reference coordinate system;
    a capturing and calculating module, configured to capture a user gaze direction indicating a target, obtain a robot line-of-sight angle, obtain a robot position, obtain a straight-line distance between the robot and the user, and calculate in real time, according to the robot line-of-sight angle, the robot position and the straight-line distance between the robot and the user, a gaze plane of the user gaze direction relative to the reference coordinate system;
    a scanning module, configured for the robot to smoothly scan the gaze plane to search for the indicated target in the user gaze direction.
  8. The apparatus according to claim 7, wherein the robot comprises a network module for networking with robots in the same coordinate system, and the network module is configured to share data of the gaze plane.
  9. The apparatus according to claim 8, wherein the apparatus further comprises:
    a determining and recognizing module, configured to acquire, during the robot's own movement, a real-time position and a real-time line-of-sight angle of the robot, determine an intersection line between the robot's real-time line-of-sight angle and the gaze plane as well as a focus distance, and, during the movement, scan an area around the intersection line with images taken at the focus distance until the indicated target gazed at by the user on the gaze plane is recognized.
  10. The apparatus according to claim 9, wherein the apparatus further comprises:
    an extracting and storing module, configured to extract target feature information of the indicated target and store the target feature information in a target catalogue.
  11. The apparatus according to claim 10, wherein the apparatus further comprises:
    a recognizing and matching module, configured to acquire voice data and video data of the user, recognize key information in the voice data and video data of the user, match the target feature information against the key information to obtain associated key information, and store the associated key information under the corresponding target feature information in the target catalogue so as to update the target feature information.
  12. The apparatus according to claim 11, wherein the apparatus further comprises:
    a matching and interacting module, configured, after the robot determines that the user is chatting with it, to collect voice data and video data of the user's chat, recognize topic information in the voice data and video data of the user's chat, match the updated target feature information against the topic information, and complete voice and video interaction with the user according to a matching result.
  13. A robot device, comprising at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores an instruction program executable by the at least one processor, and the instruction program is executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 6.
  14. A computer program product comprising software code portions, wherein the software code portions are configured to perform the method steps according to any one of claims 1 to 6 when run in a memory of a computer.
PCT/CN2017/081484 2017-04-21 2017-04-21 一种机器人控制方法、机器人装置及机器人设备 WO2018191970A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201780000641.9A CN107223082B (zh) 2017-04-21 2017-04-21 一种机器人控制方法、机器人装置及机器人设备
PCT/CN2017/081484 WO2018191970A1 (zh) 2017-04-21 2017-04-21 一种机器人控制方法、机器人装置及机器人设备
JP2019554521A JP6893607B2 (ja) 2017-04-21 2017-04-21 ロボット制御方法、ロボット装置及びロボット機器
US16/668,647 US11325255B2 (en) 2017-04-21 2019-10-30 Method for controlling robot and robot device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/081484 WO2018191970A1 (zh) 2017-04-21 2017-04-21 一种机器人控制方法、机器人装置及机器人设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/668,647 Continuation US11325255B2 (en) 2017-04-21 2019-10-30 Method for controlling robot and robot device

Publications (1)

Publication Number Publication Date
WO2018191970A1 true WO2018191970A1 (zh) 2018-10-25

Family

ID=59953867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081484 WO2018191970A1 (zh) 2017-04-21 2017-04-21 一种机器人控制方法、机器人装置及机器人设备

Country Status (4)

Country Link
US (1) US11325255B2 (zh)
JP (1) JP6893607B2 (zh)
CN (1) CN107223082B (zh)
WO (1) WO2018191970A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990594A (zh) * 2019-11-29 2020-04-10 华中科技大学 一种基于自然语言交互的机器人空间认知方法及系统

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6882147B2 (ja) 2017-11-28 2021-06-02 シュナイダーエレクトリックホールディングス株式会社 操作案内システム
CN109199240B (zh) * 2018-07-24 2023-10-20 深圳市云洁科技有限公司 一种基于手势控制的扫地机器人控制方法及系统
US20220026914A1 (en) * 2019-01-22 2022-01-27 Honda Motor Co., Ltd. Accompanying mobile body
CN109934867B (zh) * 2019-03-11 2021-11-09 达闼机器人有限公司 一种图像讲解的方法、终端和计算机可读存储介质
CN111652103B (zh) * 2020-05-27 2023-09-19 北京百度网讯科技有限公司 室内定位方法、装置、设备以及存储介质
CN111803213B (zh) * 2020-07-07 2022-02-01 武汉联影智融医疗科技有限公司 一种协作式机器人引导定位方法及装置
KR20220021581A (ko) * 2020-08-14 2022-02-22 삼성전자주식회사 로봇 및 이의 제어 방법
CN112507531A (zh) * 2020-11-24 2021-03-16 北京电子工程总体研究所 一种平面空间二对一场景下防守区域扩大方法
CN114566171A (zh) * 2020-11-27 2022-05-31 华为技术有限公司 一种语音唤醒方法及电子设备
CN113359996A (zh) * 2021-08-09 2021-09-07 季华实验室 生活辅助机器人控制系统、方法、装置及电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005028468A (ja) * 2003-07-08 2005-02-03 National Institute Of Advanced Industrial & Technology ロボットの視覚座標系位置姿勢同定方法、座標変換方法および装置
CN101576384A (zh) * 2009-06-18 2009-11-11 北京航空航天大学 一种基于视觉信息校正的室内移动机器人实时导航方法
CN102323817A (zh) * 2011-06-07 2012-01-18 上海大学 一种服务机器人控制平台系统及其多模式智能交互与智能行为的实现方法
CN102915039A (zh) * 2012-11-09 2013-02-06 河海大学常州校区 一种仿动物空间认知的多机器人联合目标搜寻方法
CN103170980A (zh) * 2013-03-11 2013-06-26 常州铭赛机器人科技有限公司 一种家用服务机器人的定位系统及定位方法
CN103264393A (zh) * 2013-05-22 2013-08-28 常州铭赛机器人科技有限公司 家用服务机器人的使用方法
CN104951808A (zh) * 2015-07-10 2015-09-30 电子科技大学 一种用于机器人交互对象检测的3d视线方向估计方法
JP5891553B2 (ja) * 2011-03-01 2016-03-23 株式会社国際電気通信基礎技術研究所 ルートパースペクティブモデル構築方法およびロボット

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69532916D1 (de) * 1994-01-28 2004-05-27 Schneider Medical Technologies Verfahren und vorrichtung zur bilddarstellung
US5481622A (en) * 1994-03-01 1996-01-02 Rensselaer Polytechnic Institute Eye tracking apparatus and method employing grayscale threshold values
US5912721A (en) * 1996-03-13 1999-06-15 Kabushiki Kaisha Toshiba Gaze detection apparatus and its method as well as information display apparatus
US6118888A (en) * 1997-02-28 2000-09-12 Kabushiki Kaisha Toshiba Multi-modal interface apparatus and method
AUPQ896000A0 (en) * 2000-07-24 2000-08-17 Seeing Machines Pty Ltd Facial image processing system
CA2545202C (en) * 2003-11-14 2014-01-14 Queen's University At Kingston Method and apparatus for calibration-free eye tracking
JP2006003263A (ja) * 2004-06-18 2006-01-05 Hitachi Ltd 視覚情報処理装置および適用システム
CN101489467B (zh) * 2006-07-14 2011-05-04 松下电器产业株式会社 视线方向检测装置和视线方向检测方法
JP4976903B2 (ja) * 2007-04-05 2012-07-18 本田技研工業株式会社 ロボット
JP5163202B2 (ja) * 2008-03-18 2013-03-13 株式会社国際電気通信基礎技術研究所 物品推定システム
US8041456B1 (en) * 2008-10-22 2011-10-18 Anybots, Inc. Self-balancing robot including an ultracapacitor power source
JP2010112979A (ja) * 2008-11-04 2010-05-20 Advanced Telecommunication Research Institute International インタラクティブ看板システム
WO2010102288A2 (en) * 2009-03-06 2010-09-10 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for shader-lamps based physical avatars of real and virtual people
JP5816257B2 (ja) * 2010-03-22 2015-11-18 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 観察者の視線を追跡するシステム及び方法
CN102830793B (zh) * 2011-06-16 2017-04-05 北京三星通信技术研究有限公司 视线跟踪方法和设备
US8885882B1 (en) * 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
US8879801B2 (en) * 2011-10-03 2014-11-04 Qualcomm Incorporated Image-based head position tracking method and system
EP2709060B1 (en) * 2012-09-17 2020-02-26 Apple Inc. Method and an apparatus for determining a gaze point on a three-dimensional object
CN103761519B (zh) * 2013-12-20 2017-05-17 哈尔滨工业大学深圳研究生院 一种基于自适应校准的非接触式视线追踪方法
JP6126028B2 (ja) * 2014-02-28 2017-05-10 三井不動産株式会社 ロボット制御システム、ロボット制御サーバ及びロボット制御プログラム
US9552061B2 (en) * 2014-03-26 2017-01-24 Microsoft Technology Licensing, Llc Eye gaze tracking using binocular fixation constraints
JP2015197329A (ja) * 2014-03-31 2015-11-09 三菱重工業株式会社 データ伝送システム、データ伝送装置、データ伝送方法、及びデータ伝送プログラム
US10682038B1 (en) * 2014-09-19 2020-06-16 Colorado School Of Mines Autonomous robotic laparoscope based on eye tracking
US9796093B2 (en) * 2014-10-24 2017-10-24 Fellow, Inc. Customer service robot and related systems and methods
JP6468643B2 (ja) * 2015-03-09 2019-02-13 株式会社国際電気通信基礎技術研究所 コミュニケーションシステム、確認行動決定装置、確認行動決定プログラムおよび確認行動決定方法
CN106294678B (zh) * 2016-08-05 2020-06-26 北京光年无限科技有限公司 一种智能机器人的话题发起装置及方法
CA3105192A1 (en) * 2018-06-27 2020-01-02 SentiAR, Inc. Gaze based interface for augmented reality environment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990594A (zh) * 2019-11-29 2020-04-10 华中科技大学 一种基于自然语言交互的机器人空间认知方法及系统
CN110990594B (zh) * 2019-11-29 2023-07-04 华中科技大学 一种基于自然语言交互的机器人空间认知方法及系统

Also Published As

Publication number Publication date
JP2020520308A (ja) 2020-07-09
JP6893607B2 (ja) 2021-06-23
CN107223082B (zh) 2020-05-12
US20200061822A1 (en) 2020-02-27
CN107223082A (zh) 2017-09-29
US11325255B2 (en) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2018191970A1 (zh) 一种机器人控制方法、机器人装置及机器人设备
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
US9646384B2 (en) 3D feature descriptors with camera pose information
US9384594B2 (en) Anchoring virtual images to real world surfaces in augmented reality systems
US20130342652A1 (en) Tracking and following people with a mobile robotic device
JP2004078316A (ja) 姿勢認識装置及び自律ロボット
WO2020000395A1 (en) Systems and methods for robust self-relocalization in pre-built visual map
US20200145639A1 (en) Portable 3d scanning systems and scanning methods
CN108681399A (zh) 一种设备控制方法、装置、控制设备及存储介质
US10534426B2 (en) Interactive system, remote controller and operating method thereof
JP2008090807A (ja) 小型携帯端末
Fernández et al. A Kinect-based system to enable interaction by pointing in smart spaces
WO2019091118A1 (en) Robotic 3d scanning systems and scanning methods
US20220157032A1 (en) Multi-modality localization of users
Sprute et al. Gesture-based object localization for robot applications in intelligent environments
Atienza et al. Intuitive human-robot interaction through active 3d gaze tracking
Tee et al. Gesture-based attention direction for a telepresence robot: Design and experimental study
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
TW201821977A (zh) 在影像中展現業務對象資料的方法和裝置
Ikai et al. Evaluation of finger direction recognition method for behavior control of Robot
TWI768724B (zh) 立體空間中定位方法與定位系統
Shen et al. A trifocal tensor based camera-projector system for robot-human interaction
CN117268343A (zh) 目标的远程寻找方法、装置、增强现实设备及存储介质
Medvedev et al. Method for discovering spatial arm positions with depth sensor data at low-performance devices
WO2022174371A1 (zh) 控制可移动平台的方法、装置及可移动平台

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906392

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019554521

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.02.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17906392

Country of ref document: EP

Kind code of ref document: A1