WO2021223611A1 - Robot control method and apparatus, and robot and storage medium - Google Patents

Robot control method and apparatus, and robot and storage medium

Info

Publication number: WO2021223611A1 (PCT/CN2021/089709)
Authority: WIPO (PCT)
Prior art keywords: robot, rotation, target, angle, head
Application number: PCT/CN2021/089709
Other languages: French (fr), Chinese (zh)
Inventors: 陈辰 (Chen Chen), 胡文 (Hu Wen), 许春晖 (Xu Chunhui), 陶志东 (Tao Zhidong)
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021223611A1

Classifications

    • B: Performing operations; transporting
    • B25: Hand tools; portable power-driven tools; manipulators
    • B25J: Manipulators; chambers provided with manipulation devices
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J 9/1697: Vision controlled systems

Definitions

  • This application belongs to the field of robotics, and in particular relates to a robot control method and apparatus, a robot, and a storage medium.
  • Parents can use intelligent robots to accompany and educate their children. Existing intelligent robots can communicate with a child and, based on that communication, learn and update the way they communicate with the child.
  • Anthropomorphism is an important indicator of a robot's intelligence; the accuracy of its target following and gaze-tracking behavior directly reflects the robot's degree of anthropomorphism, and improving that accuracy improves the child's experience.
  • However, existing robot control technology cannot achieve smooth gaze-tracking behavior, which reduces both the robot's anthropomorphism and the accuracy of its gaze tracking.
  • In view of this, the embodiments of the present application provide a robot control method, apparatus, robot, and storage medium, which can solve the problem that existing robot control technology cannot achieve smooth gaze-tracking behavior, leaving the robot's degree of anthropomorphism and gaze-tracking accuracy relatively low.
  • In a first aspect, an embodiment of the present application provides a method for controlling a robot, including: during the rotation of the robot, dynamically adjusting the output screen of the display module used for simulating eye features on the robot, so that the line of sight of the eyes simulated by the output screen looks toward the target object throughout the rotation of the robot.
  • Controlling the rotation of the robot specifically refers to controlling the rotation of the rotating parts in the robot's body. If the robot includes a head and a body, it refers to controlling the rotation of the rotating part connecting the head and the body so as to change the orientation of the front of the head; if the robot includes a head, a torso, and a bottom (such as a base or legs), it refers to controlling the rotation of the rotating part connecting the head and the torso and of the rotating part connecting the torso and the bottom, so as to change the orientation of both the head and the torso.
  • The rotating part connecting any two parts of the robot can be one, two, or more; when two or more rotating parts connect two parts, rotation with multiple degrees of freedom can be realized, where multi-degree-of-freedom rotation includes vertical rotation and horizontal rotation.
  • The simulated eye feature is specifically a picture that simulates a real person's eyeballs, generated from video of a real person's eye area looking in various directions. Eye pictures with eye characteristics can also be constructed through animation, cartoons, three-dimensional models, and the like, including dots or arbitrary shapes used to simulate the eyes. The output screen that simulates eye features may include simulated pupils, eyeballs, eyelids, eyebrows, eyelashes, and other display objects, with the simulated eye picture formed from these display objects.
  • Dynamically adjusting the output screen of the display module for simulating eye features during the rotation of the robot is specifically: dynamically adjusting the output screen according to a target deflection angle, where the target deflection angle is the deflection angle corresponding to rotating the robot from its initial angle to facing the target object.
  • The initial angle is specifically the direction angle corresponding to the front of the robot; if the robot includes a head, it is the direction angle corresponding to the front of the robot's head, where the front of the head specifically refers to the surface containing the simulated eyeballs.
  • During the rotation of the robot, dynamically adjusting the output screen of the display module so that the line of sight of the simulated eyes looks toward the target object throughout the rotation includes: determining an eye-feature target position, which is the position at which the line of sight of the eyes simulated by the output screen first looks toward the target object, and dynamically adjusting the position of the simulated eye feature in the output screen to that eye-feature target position.
  • The real-time offset is specifically: eye_yaw = target_yaw - motor_yaw and eye_pitch = target_pitch - motor_pitch, where eye_yaw is the horizontal component of the eye-feature target position, eye_pitch is the vertical component of the eye-feature target position, target_yaw is the horizontal component of the target deflection angle, target_pitch is the vertical component of the target deflection angle, motor_yaw is the horizontal deflection amount, and motor_pitch is the vertical deflection amount.
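  • The following is a minimal Python sketch of this relationship, assuming the subtraction form implied by the variable definitions above (the function name and example values are illustrative, not taken from the patent):

        def eye_feature_offset(target_yaw, target_pitch, motor_yaw, motor_pitch):
            """Residual angle the simulated eyes must cover on the output screen.

            The rotating parts account for (motor_yaw, motor_pitch) of the total
            target deflection, so the on-screen eyes are offset by the remainder.
            All angles are in degrees.
            """
            eye_yaw = target_yaw - motor_yaw        # horizontal component of the eye-feature target position
            eye_pitch = target_pitch - motor_pitch  # vertical component of the eye-feature target position
            return eye_yaw, eye_pitch

        # Example: target is 30 deg right and 10 deg up; motors have covered 12 deg and 4 deg so far.
        print(eye_feature_offset(30.0, 10.0, 12.0, 4.0))  # -> (18.0, 6.0)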
  • Controlling the rotation of the robot according to the position information of the target object includes: controlling the torso of the robot to rotate according to the target deflection angle, where the target deflection angle is the deflection angle corresponding to rotating the robot's torso from its initial angle to facing the target object; and, while the torso rotates, dynamically controlling the head of the robot to rotate to a head target position according to the real-time torso angle fed back in real time and the target deflection angle, where the head target position is the position at which the robot's head first faces the target object.
  • Controlling the torso rotation of the robot based on the target deflection angle includes: generating, for each control cycle, a first-cycle rotation instruction, and controlling the torso of the robot to rotate according to each first-cycle rotation instruction. The rotation angle in the first-cycle rotation instruction is specifically:
  • motor0_yaw = current_motor0_yaw + motor0_Kp * motor0_yaw_diff + motor0_Kd * (motor0_yaw_diff - motor0_yaw_last_diff), with motor0_yaw_diff = target_yaw - current_motor0_yaw, where motor0_yaw is the rotation angle in the first-cycle rotation instruction, current_motor0_yaw is the first reference rotation angle of the current control cycle, target_yaw is the target deflection angle, motor0_yaw_diff is the first tracking error angle of the current control cycle, motor0_yaw_last_diff is the first tracking error angle of the previous control cycle, and motor0_Kp and motor0_Kd are preset adjustment parameters of the trunk rotating part.
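  • The following Python sketch shows one control cycle of this PD form; the names mirror the variables above, and the error definition motor0_yaw_diff = target_yaw - current_motor0_yaw is an assumption consistent with those definitions rather than a verbatim formula from the patent:

        def first_cycle_rotation_angle(current_motor0_yaw, target_yaw,
                                       motor0_yaw_last_diff, motor0_Kp, motor0_Kd):
            """One control cycle of the torso yaw command (PD form).

            Returns the commanded angle for this cycle and the tracking error,
            which the caller feeds back as the last error on the next cycle.
            """
            motor0_yaw_diff = target_yaw - current_motor0_yaw  # first tracking error angle
            motor0_yaw = (current_motor0_yaw
                          + motor0_Kp * motor0_yaw_diff
                          + motor0_Kd * (motor0_yaw_diff - motor0_yaw_last_diff))
            return motor0_yaw, motor0_yaw_diff

  • The second-cycle (head pitch) and third-cycle (head yaw) instructions below follow the same pattern; the third additionally feeds the real-time torso angle into the error term.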
  • In the process of rotating the robot's torso, dynamically controlling the head of the robot to rotate to the head target position according to the real-time torso angle fed back in real time and the target deflection angle, where the head target position is the position at which the robot's head first faces the target object, includes: dynamically controlling the head of the robot to rotate in the horizontal direction to a horizontal target position, where the horizontal target position is the position at which the robot's head first faces the target object in the horizontal direction.
  • Controlling the head of the robot to rotate in the vertical direction according to the vertical deflection angle includes: generating, for each control cycle, a second-cycle rotation instruction, and controlling the head to rotate in the vertical direction according to each second-cycle rotation instruction. The rotation angle in the second-cycle rotation instruction takes the same PD form as the first-cycle instruction, specifically:
  • motor2_pitch = current_motor2_pitch + motor2_Kp * motor2_pitch_diff + motor2_Kd * (motor2_pitch_diff - motor2_pitch_last_diff), with motor2_pitch_diff = target_pitch - current_motor2_pitch, where motor2_pitch is the rotation angle in the second-cycle rotation instruction, current_motor2_pitch is the second reference rotation angle of the current control cycle, target_pitch is the vertical deflection angle, motor2_pitch_diff is the second tracking error angle of the current control cycle, motor2_pitch_last_diff is the second tracking error angle of the previous control cycle, and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating part.
  • In the process of rotating the robot's torso, dynamically controlling the head of the robot to rotate in the horizontal direction to the horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle includes: generating, from the third reference rotation angle of the current control cycle, the third tracking error angle of the previous control cycle, the real-time torso angle, and the horizontal deflection angle, the third-cycle rotation instruction corresponding to the current control cycle; and dynamically controlling the head to rotate in the horizontal direction according to each third-cycle rotation instruction. The rotation angle in the third-cycle rotation instruction is specifically:
  • motor1_yaw = current_motor1_yaw + motor1_Kp * motor1_yaw_diff + motor1_Kd * (motor1_yaw_diff - motor1_yaw_last_diff), with motor1_yaw_diff = target_yaw - current_motor0_yaw - current_motor1_yaw, where motor1_yaw is the rotation angle in the third-cycle rotation instruction, current_motor1_yaw is the third reference rotation angle of the current control cycle, target_yaw is the horizontal deflection angle, motor1_yaw_diff is the third tracking error angle of the current control cycle, motor1_yaw_last_diff is the third tracking error angle of the previous control cycle, motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating part, and current_motor0_yaw is the real-time torso angle.
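  • A sketch of the third-cycle command, under the assumption that the head's tracking error subtracts the real-time torso angle so that the torso and head rotations together cover the horizontal deflection angle (this error definition is inferred from the variable list above):

        def third_cycle_rotation_angle(current_motor1_yaw, target_yaw, current_motor0_yaw,
                                       motor1_yaw_last_diff, motor1_Kp, motor1_Kd):
            """Head yaw command that compensates for the torso's ongoing rotation.

            current_motor0_yaw is the real-time torso angle fed back by the trunk
            rotating part; the head only needs to cover the remaining error.
            """
            motor1_yaw_diff = target_yaw - current_motor0_yaw - current_motor1_yaw
            motor1_yaw = (current_motor1_yaw
                          + motor1_Kp * motor1_yaw_diff
                          + motor1_Kd * (motor1_yaw_diff - motor1_yaw_last_diff))
            return motor1_yaw, motor1_yaw_diff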
  • Acquiring the position information of the target object includes: acquiring a scene image containing the target object through a camera module built into the robot, and determining the position information of the target object from the scene image, where the position information includes the object center coordinates of the target object. The target deflection angle is specifically:
  • target_yaw = arctan(CD / OC) and target_pitch = arctan(AC / OC), where target_yaw is the horizontal component of the target deflection angle, target_pitch is the vertical component of the target deflection angle, OC is the focal length of the camera module, CD is the horizontal deviation between the object center coordinates and the image center coordinates, and AC is the vertical deviation between the object center coordinates and the image center coordinates.
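  • A minimal sketch of this pinhole-style computation (the arctangent relation is reconstructed from the geometry of FIG. 7 and FIG. 10; the pixel-unit focal length and the function name are illustrative assumptions):

        import math

        def target_deflection(obj_cx, obj_cy, img_cx, img_cy, focal_len_px):
            """Deflection angles (degrees) that bring the object center to the image center.

            (obj_cx, obj_cy) are the object center coordinates, (img_cx, img_cy)
            the image center coordinates, and focal_len_px the camera focal
            length OC expressed in pixels, so all lengths share one unit.
            """
            cd = obj_cx - img_cx  # horizontal deviation CD
            ac = obj_cy - img_cy  # vertical deviation AC
            target_yaw = math.degrees(math.atan2(cd, focal_len_px))
            target_pitch = math.degrees(math.atan2(ac, focal_len_px))
            return target_yaw, target_pitch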
  • Acquiring the scene image containing the target object through the camera module built into the robot includes: adjusting the original image collected by the camera module based on an image correction parameter to generate the scene image.
  • An embodiment of the present application further provides a robot control apparatus, including: a position information acquiring unit, used to acquire the position information of the target object; a rotation control unit, used to control the rotation of the robot according to the position information of the target object; and a screen adjustment unit, used to dynamically adjust the output screen of the display module for simulating eye features on the robot during the rotation of the robot, so that the line of sight of the eyes simulated by the output screen looks toward the target object throughout the rotation.
  • An embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the robot control method according to any one of the above first aspects.
  • An embodiment of the present application provides a robot, including a processor, a display, and a transmission component for controlling the rotation of the robot, wherein the processor executes a computer program to implement the robot control method according to any one of the first aspects; the transmission component controls the rotation of the robot according to control instructions output by the computer program; and the display dynamically adjusts the output screen for simulating eye features according to output instructions of the computer program.
  • An embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the robot control method according to any one of the above first aspects is implemented.
  • An embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to execute the robot control method according to any one of the above first aspects.
  • In the embodiments of the present application, the target deflection angle is determined according to the relative position between the target object and the robot, and the output screen of the display module and the rotating parts are adjusted based on the target deflection angle to realize the gaze-tracking behavior. Because the robot's eye movement is simulated by the output screen of the display module, no motor drive is needed to achieve the eyes' deflection during movement; the response time is short, smooth gaze-tracking behavior is realized, and the degree of anthropomorphism and the accuracy of the gaze-tracking behavior are improved.
  • FIG. 1 is a block diagram of a part of the structure of a robot provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of the software structure of the robot according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a robot provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present application.
  • FIG. 5 is an implementation flowchart of the robot control method provided by the first embodiment of the present application.
  • Fig. 6 is a schematic diagram of a key center of a target object provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of obtaining a target deflection angle provided by an embodiment of the present application.
  • FIG. 8 is a specific implementation flowchart of a robot control method S501 and S502 according to another embodiment of the present application.
  • FIG. 9 is a schematic diagram of determining the center coordinates of an object provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of obtaining a target offset angle provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the principle of shooting offset provided by an embodiment of the present application.
  • FIG. 12 is a specific implementation flowchart of a robot control method S5011 provided by another embodiment of the present application.
  • FIG. 13 is a schematic diagram of a motor-driven multi-degree-of-freedom robot provided by an embodiment of the present application.
  • Fig. 14 is a schematic diagram of a robot with 2 degrees of freedom provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a robot's eye sight aligning with a target object for the first time according to an embodiment of the present application
  • FIG. 17 is a specific implementation flowchart of a robot control method S5031 according to another embodiment of the present application.
  • FIG. 18 is a schematic diagram of a robot with one degree of freedom in a rotating part provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a robot with two degrees of freedom in a rotating part provided by an embodiment of the present application.
  • FIG. 21 is a specific implementation flowchart of a robot control method S2001 provided by another embodiment of the present application.
  • FIG. 22 is a specific implementation flowchart of a robot control method S2002 provided by another embodiment of the present application.
  • FIG. 23 is a schematic diagram of a robot with three degrees of freedom in a rotating part provided by an embodiment of the present application.
  • FIG. 24 is a specific implementation flowchart of a robot control method S2202 provided by another embodiment of the present application.
  • FIG. 25 is a specific implementation flowchart of a robot control method S2203 provided by another embodiment of the present application.
  • FIG. 26 is a structural block diagram of a robot control device provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of a robot provided by another embodiment of the present application.
  • The term "if" can be construed as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • Gaze-tracking behavior refers to the robot's active tracking of the target object through the coordinated actions of its torso, head, and eyes. It is the most important function of non-verbal human-computer interaction and an important step in human-computer interaction. For anthropomorphic robots, gaze tracking is particularly important: only when the system design realizes tracking gaze and natural mutual gaze between the robot and the interactive object can deeper interaction be carried out. Gaze-tracking behavior specifically includes two aspects: the robot's tracking of the target, and eye contact between the robot and the target object.
  • The embodiments of the present application provide a robot control method, apparatus, robot, and storage medium. The target deflection angle is determined according to the relative position between the target object and the robot, and the output screen of the display module and the rotating parts are adjusted based on the target deflection angle to realize the gaze-tracking behavior. Because the robot's eye movement is simulated through the output interface of the display module, no motor is required to achieve the eyes' deflection during movement; the response time is short, smooth gaze-tracking behavior is realized, and the degree of anthropomorphism and the accuracy of the gaze-tracking behavior are improved.
  • The robot control method provided in the embodiments of the present application can be applied to an intelligent robot capable of human-computer interaction. The above human-computer interaction includes, but is not limited to, target tracking, gaze-tracking behavior, intelligent question and answer, intelligent navigation, music on demand, and the like. Intelligent robots can also automatically perform task operations associated with pre-configured instructions: for example, when the robot detects that a preset time node has arrived, it can give the user a voice prompt, or, when a preset trigger condition is met, it can execute the response operation associated with that trigger condition, for example turning on the indoor air-conditioning device when the current indoor temperature is detected to be greater than a preset temperature threshold. During human-computer interaction, the eyes of the robot can move with the movement of the user; this process is the gaze-tracking behavior.
  • The above robot may be a robot 100 having the hardware structure shown in FIG. 1. The robot 100 may specifically include: a communication module 110, a memory 120, a camera module 130, a display unit 140, a sensor 150, an audio circuit 160, a rotating component 170, a processor 180, a power supply 190, and other components. Those skilled in the art can understand that the robot may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • The communication module 110 can be used to establish communication connections with other devices, so as to receive control instructions and firmware update packages sent by other devices, and can also send the robot's operation records to other devices. The robot can also use the communication module 110 to establish communication connections with other robots in the scene, so as to cooperate with them. The communication module 110 may send downlink information received from other devices to the processor 180 for processing, and may send uplink data to the devices connected to it. The communication module 110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, a wireless communication module, a Bluetooth communication module, and so on. The communication module 110 may also communicate with networks and other devices through wireless communication, which can use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
  • The memory 120 may be used to store software programs and modules. The processor 180 executes the robot's various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required by at least one function (such as a gaze-tracking application, an intelligent companionship application, an intelligent education application, etc.), and the data storage area may store data created according to the use of the robot (for example, image data collected by the camera module 130, rotation angles fed back by the rotating component 170, etc.). The memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • The camera module 130 can be used to collect images of the environment where the robot is located. The shooting direction of the camera module 130 can be consistent with the direction of the robot's front, so that the robot can simulate "seeing" the environment it is in. In one implementation, the robot has multiple built-in camera modules 130, with different camera modules 130 collecting environment images in different directions; optionally, the robot has one built-in camera module 130 that can move along a preset trajectory or rotate around an axis to obtain environment images from different angles and directions. The camera module 130 can store the collected images in the memory 120, and can also transmit them directly to the processor 180.
  • The display unit 140 may be used to display information input by the user or information provided to the user, as well as the robot's various menus. The display unit 140 may include a display panel 141, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. A touch panel can cover the display panel 141; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • In the embodiments of the present application, the head area of the robot contains a display module for simulating the two eyes, and the output screen of the display module is specifically a picture simulating the eyes. The eyes can be generated based on video of a real person's eye area moving in various directions, and the picture simulating a real person's eyeballs can also be constructed through animation, comics, three-dimensional models, and other methods. The output screen simulating the eyes may include simulated pupils, eyeballs, eyelids, eyebrows, eyelashes, and other display objects, with the simulated eye picture formed from these display objects. The head of the robot can include one display module used to output a screen containing two eyes; it can also include two display modules, namely a left-eye display module used to output an image simulating the left eye and a right-eye display module used to output an image simulating the right eye.
  • The robot 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. The light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display unit 140 according to the brightness of the ambient light, and the proximity sensor is used to determine the distance between the robot and the user; when the distance is less than a preset distance threshold, the robot's transmission components control the robot to move away from the user, so as to avoid colliding with the user. The accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when stationary, which can be used for applications that recognize the robot's posture, vibration-recognition-related functions, and so on. As for other sensors the robot can also be configured with, such as gyroscopes, thermometers, and infrared sensors, details are not repeated here.
  • The audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the robot. The audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after the audio data is processed by the processor 180, it is sent via the communication module 110 to, for example, another robot, or output to the memory 120 for further processing. In the embodiments of the present application, the audio circuit 160 can collect a voice command initiated by the user and execute a response operation based on the voice command; the audio circuit 160 can also output an intelligent-response voice signal during human-computer interaction, realizing a simulated interaction process with the user.
  • The rotating part 170 can be used to control the robot to perform corresponding movements. The rotating part 170 can be driven by a motor, which provides the driving force that controls the rotating part 170 to rotate, so as to realize movement operations such as locomotion, posture adjustment, and orientation adjustment. Multiple rotating parts 170 can be installed on the head, torso, and other parts of the robot, with each rotating part controlling rotation in one direction, i.e., corresponding to one degree of freedom; depending on how the multiple rotating parts 170 are installed in the robot, gaze-tracking behavior with multiple degrees of freedom can be realized. The rotating component 170 can also feed back its real-time rotation angle to the processor 180, and the processor 180 can control the robot's movement according to this feedback, so as to implement smooth gaze-tracking behavior toward the target object.
  • The processor 180 is the control center of the robot. It uses various interfaces and lines to connect the various parts of the entire robot, and performs the robot's various functions and processes data, so as to monitor the robot as a whole. The processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor need not be integrated into the processor 180.
  • The robot 100 also includes a power source 190 (such as a battery) for supplying power to the various components. Preferably, the power source may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
  • the software system of the robot 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the robot 100 by way of example.
  • FIG. 2 is a block diagram of the software structure of the robot 100 according to an embodiment of the present application.
  • the Android system is divided into four layers, which are application layer, application framework layer (framework, FWK), system layer, and hardware abstraction layer.
  • The layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, smart question and answer, smart escort, multimedia on-demand, smart learning and education.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display module, determine the display status of the current display module, and so on.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, the rotation angle fed back by the rotating component, and so on.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, etc.
  • the visual controls include controls for simulating eyeballs.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, graphics, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • The notification manager can also present notifications that appear in the status bar at the top of the system in the form of a graph or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window; for example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing. Specifically, the output screen used to simulate the eyeballs can be rendered and synthesized through a three-dimensional graphics processing library.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the above-mentioned kernel layer further includes a PCIE driver.
  • In the embodiments of the present application, the execution subject of the process is a robot. The robot includes a display module and a rotating part; an output screen for simulating the eyeballs is output through the display module, and the robot is controlled to move through the rotating part.
  • FIG. 3 shows a schematic diagram of a robot provided by an embodiment of the present application. The robot provided by the present application can be a robot with a simulated human form (robot a in FIG. 3), or a non-human-shaped robot, such as a robot with a simulated animal form (robot b in FIG. 3) or a non-biological robot (such as robot c in FIG. 3). The above robots are any equipment with motion functions, where motion includes movement, rotation, and so on.
  • FIG. 4 shows a schematic structural diagram of a robot provided by an embodiment of the present application. The display module of the robot is installed on the robot's head and includes a left-eye display module for simulating the left eye and a right-eye display module for simulating the right eye; the screens output by the two display modules simulate eye movement and realize gaze tracking. The rotating parts of the robot include a trunk rotating part for controlling the left-right rotation of the robot's torso, a first head rotating part for controlling the left-right rotation of the robot's head, and a second head rotating part for controlling the up-down rotation of the robot's head. The robot realizes smooth gaze-tracking behavior by controlling the output screens of the display modules and the rotation of the rotating parts.
  • FIG. 5 shows an implementation flowchart of the robot control method provided by the first embodiment of the present application, and the details are as follows:
  • the robot is controlled to rotate according to the position information of the target object.
  • The robot can determine the target object through automatic recognition or manual setting, and determine the position information of the target object. The position information may specifically be the relative position between the robot and the target object. The relative position may be a relative direction, such as left, right, front, or rear; it may also be a specific angle, such as +60° or -120°, where the angle may specifically be a vector angle whose positive or negative sign represents the corresponding direction: for example, the left side can be defined as the positive direction and the right side as the negative direction. After obtaining the position information, the robot can control its rotation accordingly.
  • S502 may specifically be: determining, according to the position information of the target object, the target deflection angle corresponding to rotating the robot from its initial angle to face the target object, and controlling the robot to rotate according to the target deflection angle. The target deflection angle is determined from the robot's own initial angle and the pose of the target object. The initial angle of the robot is specifically the direction angle corresponding to the front of the robot; if the robot includes a head, it is the direction angle corresponding to the front of the robot's head, where the front of the head specifically refers to the surface containing the simulated eyeballs. The target deflection angle is specifically used to turn the robot's current line of sight toward the direction of the target object; if the target object is a real person, the target deflection angle is specifically used to align the robot's current line of sight with the key center of the target object.
  • FIG. 6 shows a schematic diagram of the key center of the target object provided by an embodiment of the present application. The key center of the target object can specifically be the geometric center of the target object or the geometric center of the target object's head; if the target object is a real person, the key center can also be the eyes of the target object. The above is an example and not a limitation.
  • FIG. 7 shows a schematic diagram of acquiring the target deflection angle provided by an embodiment of the present application. Here the target object is a real person; since a real person occupies a certain volume of space, any point within that space belongs to the target object. For gaze tracking, the most important thing is for the robot's line of sight to meet the target object's line of sight, that is, to lie in the same plane. Therefore, the robot needs to determine its current line-of-sight direction and the line-of-sight direction of the target object, and determine the deflection angle based on the two directions.
  • The robot can automatically recognize the target object in the following three ways:
  • Method 1: Determine the target object based on a distance sensor. The robot can be equipped with a distance sensor containing multiple distance detection units. A depth map of the scene where the robot is located is constructed based on the distance values collected by the multiple distance detection units, the scene objects contained in the current scene are determined, and the scene object with the smallest distance is selected as the target object. If the target object of the gaze-tracking behavior is a real person, the contour line of each scene object can be determined from the constructed depth map and matched with the standard contour line of a real person, and the target object is identified from the scene objects based on the matching result. Optionally, the robot also includes an infrared thermal imaging module, which can collect the temperature of the outer surface of each scene object so as to construct a depth map containing temperature information; according to the contour information and temperature values in the depth map, the object type of each scene object in the scene can be determined, and a scene object whose object type is human is selected as the target object. If there are multiple scene objects whose object type is human, the closest one may be selected as the target object.
  • Method 2: Determine the target object based on a captured image. The implementation is as follows: the robot can obtain a scene image of the current scene through the camera module; the contour information contained in the scene image is extracted by a contour recognition algorithm, and the scene image is divided based on the contour information to obtain multiple subject areas; the subject type is identified for each subject area, and the target object is determined based on the subject type. For example, if the target object of the robot's gaze-tracking behavior is a real person, a real-person-type subject can be selected as the target object; if the shooting scene contains multiple real-person subjects, the subject with the largest subject area can be selected as the target object. Optionally, the robot can locate a face area through a face recognition algorithm and determine the target object based on the face area, as sketched below.
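  • One plausible realization of this face-based variant of Method 2, using OpenCV's stock Haar-cascade detector as a stand-in for whatever recognition algorithm the robot actually ships (choosing the largest detected face follows the largest-subject-area rule above):

        import cv2

        def pick_target_face(scene_bgr):
            """Return [left, top, width, height] of the largest face, or None."""
            detector = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None  # no target object; keep standby state or original posture
            # With several candidates, choose the subject occupying the largest area.
            return max(faces, key=lambda f: f[2] * f[3])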
  • Method 3: Determine the target object based on a voice signal. The implementation is as follows: the robot can be equipped with a microphone module, through which the sound signal in the current scene is obtained. If the signal strength of the sound signal is detected to be greater than a preset decibel threshold, voice analysis is performed on the sound signal, and the target object is determined based on the analysis result. The voice analysis may specifically be converting the sound signal into text information: the robot may be configured with an activation password, and if the text information matches the activation password, it is recognized that a user needs to interact with the robot; at this time, the sounding direction of the corresponding sound signal can be obtained, and the object in the current scene corresponding to that direction is identified as the target object. The voice analysis may also be extracting the voiceprint feature parameters of the sound signal, matching them against the standard biometric parameters of each registered user, determining the registered user corresponding to the sound signal based on the matching result, and identifying that registered user as the target user. In this case, the robot can obtain an environment image of the current scene through the camera module, and determine the position information of the target user based on the pre-stored standard face image of the target object and the environment image.
  • The method for the robot to determine the target object can also be set by the user, and can include the following two:
  • Method 1: Determine the target object according to the user's selection instruction. The implementation is as follows: the robot can display the candidate objects in the scene on an interactive interface (for example, a display module configured on the robot's body outputs the scene image captured in the current scene, face recognition is performed on the scene image, and the recognized faces are used as candidate objects). The user can send a selection instruction to the robot through interactive means such as touch or key press, selecting one of the multiple candidate objects as the target object.
  • Method 2: Determine the target object according to a positioning device. The implementation is as follows: the user whose gaze is to be followed can wear a positioning device, which can be a wearable device equipped with a positioning module, such as a smart watch, smart necklace, smart phone, or smart glasses. The positioning module can send positioning information to the robot at a preset feedback cycle, and the robot can recognize the target object according to the positioning information.
  • The robot can trigger the gaze-tracking behavior in the following three ways:
  • First, the robot can trigger the gaze-tracking behavior when it detects a change in the pose of the target object. The robot can determine the position of the target object in the manner described above, and determine the posture of the target object according to its contour information. If movement of the target object's position and/or a change in its posture is detected, it is determined that the pose of the target object has changed; at this time, the gaze-tracking behavior is triggered, and the operations of S501 and S502 are executed.
  • Second, the robot can trigger the gaze-tracking behavior when it detects a change in its own pose. The robot may have built-in motion sensors, including but not limited to a gyroscope, a vibration sensor, an acceleration sensor, and a gravity sensor. The robot can obtain the sensing values fed back by each motion sensor to determine whether its pose has changed; for example, if the gyroscope reading changes or the acceleration sensor value is not 0, it can be determined that the robot's pose has changed. At this time, the gaze-tracking behavior is triggered, and the operations of S501 and S502 are executed.
  • Third, the robot can trigger the gaze-tracking behavior when it detects a change of the target object. A change of the target object includes a scene object changing from a non-target object to the target object, and also includes the target changing from object A to object B. The change can be determined based on the user's selection instruction, or through automatic recognition by the robot (for example, target object B appears in the robot's captured picture and approaches the robot until the distance between object B and the robot is smaller than the distance between object A and the robot, whereupon the robot switches the target object from object A to object B).
  • FIG. 8 shows a specific implementation flowchart of a robot control method S501 and S502 provided by another embodiment of the present application.
  • S501 in a robot control method provided in this embodiment includes: S5011 to S5012, and S502 includes S5021, and the details are as follows:
  • A scene image containing the target object is acquired through a camera module built into the robot. The robot has a built-in camera module through which the scene image of the current scene is collected. The camera module can obtain the scene video of the current scene in video form, and the robot can use the latest video image frame captured in the scene video as the scene image and recognize the target object from it. Alternatively, the robot may send a shooting instruction to the camera module; after receiving the shooting instruction, the camera module obtains the image corresponding to the moment the instruction was received, and the currently captured image is used as the scene image, as sketched below.
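  • A hedged sketch of grabbing the most recent frame from the camera module's video stream with OpenCV (device index 0 is an assumption; a real robot would address its built-in module through its own driver):

        import cv2

        def latest_scene_image(device_index=0):
            """Open the camera, grab one current frame (BGR), and return it or None."""
            cap = cv2.VideoCapture(device_index)
            try:
                ok, frame = cap.read()  # the newest frame serves as the scene image
                return frame if ok else None
            finally:
                cap.release()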
  • The robot may be equipped with a target object recognition algorithm, through which it analyzes the image data fed back by the camera module to determine whether the image data contains the target object. If so, the image data is a scene image containing the target object, and the operation of S5012 is performed; otherwise, if the image data does not contain the target object, there is no need to perform gaze tracking. The recognition algorithm may be a face recognition algorithm: the robot can determine whether the scene image fed back by the camera module contains a human face, and if so, perform the operation of S5012; conversely, if the scene image contains no human face, it is recognized that there is no target object for gaze tracking in the current scene, and the robot enters the standby state or maintains its original posture.
  • The boundary coordinates of the target object are marked in the scene image, and the position information of the target object is determined according to the boundary coordinates; the position information includes the object center coordinates of the target object. The robot recognizes the image of the area where the target object is located in the scene image, and determines the boundary coordinates from that area. The robot is equipped with a contour recognition algorithm: the scene image can be imported into the contour recognition algorithm, the contour lines contained in the scene image are extracted, and at least one pixel point is selected from the contour lines as the boundary coordinates. For example, the robot can select the boundary point of the target object closest to the origin of the scene image (for example, a scene image that uses the upper-left corner of the image as the origin) as the boundary coordinates, and determine the object center coordinates according to the area and the boundary coordinates of the target object in the scene image. An image coordinate system is constructed with the horizontal and vertical directions of the scene image as the coordinate axes. According to the relative position between the boundary coordinates and the origin of the scene image, the boundary coordinates can be expressed as (left, top), where left is the horizontal distance from the origin and top is the vertical distance from the origin; the occupied area can be expressed as (width, height), where width is the width of the rectangular area and height is its length.
  • FIG. 9 shows a schematic diagram of determining the object center coordinates provided by an embodiment of the present application. The target object T can be expressed as [left, top, width, height], from which the object center coordinate A = [left + width/2, top + height/2] of the target object can be determined. Optionally, the boundary coordinates of the target object may be the eye coordinates of the target object: the midpoint of the line segment formed by the target object's two eyes is taken as the object center of the target object, and the coordinates of that midpoint are used as the object center coordinates. Both definitions are sketched below.
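  • Both center definitions can be written down directly (the names are illustrative):

        def object_center_from_box(left, top, width, height):
            """Center A of a target box [left, top, width, height], per FIG. 9."""
            return (left + width / 2, top + height / 2)

        def object_center_from_eyes(left_eye, right_eye):
            """Midpoint of the segment joining the two detected eye coordinates."""
            return ((left_eye[0] + right_eye[0]) / 2,
                    (left_eye[1] + right_eye[1]) / 2)

        print(object_center_from_box(100, 50, 80, 120))  # -> (140.0, 110.0)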
  • The target deflection angle is then obtained as target_yaw = arctan(CD / OC) and target_pitch = arctan(AC / OC), where target_yaw is the horizontal component of the target deflection angle, target_pitch is the vertical component of the target deflection angle, OC is the focal length of the camera module, CD is the horizontal deviation between the object center coordinates and the image center coordinates, and AC is the vertical deviation between the object center coordinates and the image center coordinates. The shooting angle of the camera module is consistent with the robot's eye line-of-sight direction; therefore, if the eye line of sight is to be aligned with the target object, the object center coordinates of the target object need to be moved to the center coordinates of the scene image. An offset vector can thus be generated from the object center coordinates and the image center coordinates of the scene image, and the target deflection angle determined from the offset vector and the initial angle of the robot's eye line of sight.
  • FIG. 10 shows a schematic diagram of obtaining the target deflection angle provided by an embodiment of the present application. In the figure, the object center of the target object is point A, the center point of the scene image is point C, and the point corresponding to the center of the robot's eyes is point O; the robot's eye line of sight is therefore the straight line OD, the focal length of the camera module is OC, and the distance from the image center coordinates to the object center coordinates is AC. In this embodiment, the scene image containing the target object is acquired through the camera module, and the target deflection angle is calculated according to the deviation between the object center coordinates of the target object in the scene image and the image center coordinates corresponding to the eye line of sight. The deflection in the three-dimensional direction is calculated from a two-dimensional image; the calculation method is simple and the amount of computation small, which reduces the computing pressure on the robot.
  • FIG. 11 shows a schematic diagram of the principle of shooting offset provided by an embodiment of the present application.
• As shown in FIG. 11, the robot includes a camera module, which is located above the display module and therefore has a certain offset from the eyeball shown in the output image of the display module.
• Because no camera module is configured at the position of the simulated eye feature, the position of the target object as observed from the eyeball and its position as captured by the camera will have a certain offset, so that during subsequent gaze tracking the robot's eye line of sight would deviate from the center of the target object. Therefore, the image collected by the aforementioned camera module needs to be calibrated.
  • S5011 in the previous embodiment may include a calibration operation.
  • FIG. 12 shows a specific implementation flowchart of a robot control method S5011 provided by another embodiment of the present application. Referring to FIG. 12, with respect to the embodiment described in FIG. 8, S5011 in a robot control method provided in this embodiment includes: S1201 to S1202, which are detailed as follows:
• The image correction parameter is determined according to the offset between the camera module and the position of the simulated eye feature in the output screen of the display module.
• Specifically, the robot can determine the position of the eyeball relative to the robot body according to the display position of the eyeball in the output screen, and determine the above offset according to the position of the camera module on the robot and the eyeball position determined above.
  • the robot can be configured with a conversion algorithm between the offset and the correction amount, and the offset is imported into the above conversion algorithm to calculate the image correction parameters.
  • the original image collected by the camera module is adjusted based on the image correction parameter to generate the scene image.
• In this embodiment, after the robot receives the original image collected by the camera module, it can calibrate the original image using the image correction parameters.
• According to the correction parameters, correction operations such as rotation, horizontal stretching, vertical stretching, and translation can be performed on the original image; the corrected original image is then used as the above-mentioned scene image for the subsequent determination of the target deflection angle.
• In this embodiment, the original image captured by the camera module is corrected according to the offset between the camera module and the eye line of sight, which improves the accuracy of the subsequently determined target deflection angle, eliminates the line-of-sight deviation caused by the offset between robot modules, further improves the accuracy of the gaze tracking behavior, and improves the degree of personification of the robot.
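The disclosure leaves the conversion algorithm from offset to correction amount open. As one hedged example, if the correction reduces to a fixed pixel translation, it could be applied with OpenCV as below; the offset values are hypothetical, and the text above also allows rotation and stretching corrections:

```python
import cv2
import numpy as np

def calibrate_frame(raw, dx_px, dy_px):
    """Translate the raw camera frame by (dx_px, dy_px) so that it
    approximates the view from the simulated eye position.
    A pure translation is only one possible correction; rotation and
    horizontal/vertical stretching are also mentioned above."""
    h, w = raw.shape[:2]
    m = np.float32([[1, 0, dx_px],
                    [0, 1, dy_px]])  # 2x3 affine matrix: translation only
    return cv2.warpAffine(raw, m, (w, h))

# Hypothetical offset: camera sits 24 px above the on-screen eyeball.
# scene = calibrate_frame(raw_frame, dx_px=0, dy_px=24)
```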
• After the robot has determined the relative position between its frontal pose and the front face of the target object, that is, the position information of the target object, it can control the rotation of its built-in rotating parts while moving the simulated eye features in the output screen of the display module, so that the simulated eye features in the output screen of the robot's display module, such as a simulated eyeball or simulated eye, align with the target object.
• In one embodiment, dynamically adjusting the output screen of the display module used for simulating eye features on the robot during the rotation of the robot is specifically: during the rotation of the robot, dynamically adjusting the output screen according to the target deflection angle, where the target deflection angle is the corresponding deflection angle when the robot rotates from the initial angle to face the target object.
• The way to align the robot's eye line of sight with the target object may specifically be: determining the offset of the simulated eye feature in the output screen and the rotation angle of the robot's rotating part according to the target deflection angle, and controlling the rotation of the robot's built-in rotating parts while adjusting the position of the simulated eye features in the output screen.
• In one embodiment, the rotation angle and the offset of the simulated eyeball can be determined as follows: use the target deflection angle as the rotation angle of the rotating part, and calculate the eyeball offset from the following relations:

eye_angle = target_angle - motor_angle
motor_angle = motor_rate × time
time = eye_angle / eye_rate

where eye_angle is the offset of the eyeball; target_angle is the target deflection angle; motor_angle is the rotation angle of the rotating part during the movement of the eyeball; motor_rate is the rotation speed of the rotating part; time is the time required for the eyeball to move through the above-mentioned offset; and eye_rate is the speed of the eyeball movement.
  • the robot can collect the historical speed of the rotating part under different target deflection angles through big data analysis, so as to construct the correspondence between the target deflection angle and the rotation angle.
  • the robot can determine the rotation speed of the rotating part according to the target deflection angle obtained this time, and then import it into the above calculation equation to calculate the eyeball offset.
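Reading the relations above literally, the eyeball offset has the closed form eye_angle = target_angle × eye_rate / (eye_rate + motor_rate); this closed form is a reconstruction from the variable definitions rather than an equation reproduced from the filing. A minimal sketch, with hypothetical speeds:

```python
def eyeball_offset(target_angle, eye_rate, motor_rate):
    """Solve eye_angle = target_angle - motor_rate * (eye_angle / eye_rate).

    The rotating part keeps turning while the eyeball moves, so the
    eyeball only needs to cover the remainder of the target angle.
    """
    return target_angle * eye_rate / (eye_rate + motor_rate)

# Hypothetical speeds: eyeball redraws at 200 deg/s, motor turns at 50 deg/s.
print(eyeball_offset(target_angle=30.0, eye_rate=200.0, motor_rate=50.0))
# -> 24.0 degrees handled by the on-screen eyeball, 6.0 by the motor.
```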
• In the robot control method described above, the target deflection angle is determined according to the relative position between the target object and the robot, and the output screen of the display module and the rotating parts are adjusted based on the target deflection angle to achieve gaze tracking. Because the robot's eye movement is simulated through the output interface of the display module, no motor drive is needed to achieve deflection during the movement and the response time is short, achieving smooth gaze tracking and improving the degree of personification and the accuracy of gaze tracking.
• FIG. 13 shows a schematic diagram of a motor-driven multi-degree-of-freedom robot provided by an embodiment of the present application. As shown in FIG. 13, the robot head can rotate in three directions during the tracking and gaze process, namely left-right rotation, up-down rotation, and internal-external rotation.
• The robot's head has a built-in motor-driven eye part, and the eye part can rotate with two degrees of freedom, namely up-down rotation and left-right rotation.
• The head of the robot is also equipped with an inertial measurement unit (IMU) for motion control compensation. The gaze tracking control algorithm can use forward and inverse kinematics calculation and IMU compensation, and integrates a PID (proportional-integral-derivative) control algorithm. In such a scheme, the IMU feedback angle is required for motion compensation, and complex forward and inverse kinematics are required to solve for the target position, so gaze tracking cannot be handled at the same time. In addition, the motor start-up time is long, which reduces the sensitivity of gaze tracking and produces a stronger mechanical feel, further degrading the gaze effect.
  • Fig. 14 shows a schematic diagram of a robot with 2 degrees of freedom provided by an embodiment of the present application.
  • the eye component of the robot is fixedly installed on the head, that is, it cannot move, and the head is equipped with two rotating parts, which can control the head of the robot to rotate in the left and right directions and rotate in the up and down directions.
• In the process of gaze tracking, such a robot mainly relies on the deflection of the head to achieve alignment with the target object. The above-mentioned method has a low degree of freedom and cannot move the eyes, so the degree of personification is low and the user experience is poor.
• In contrast, the embodiment of the present application uses the output screen of the display module to simulate eye movement. Compared with realizing eye movement through a motor-driven rotating part, this approach has a faster response speed, reduces the calculation amount of the motor-drive algorithm, and lowers the requirements for forward and inverse kinematics calculation. It can improve the smoothness of the gaze tracking behavior and reduce the computing pressure of the robot, achieving smooth and stable gaze tracking and effectively improving the robot's motion performance and degree of personification. At the same time, because eye movement is simulated through the display module, only a display module needs to be installed on the head of the robot, without configuring motor parts, rotating parts, or physical eye parts, thereby reducing the cost of the robot.
• FIG. 15 shows a specific implementation flowchart of dynamically adjusting the output screen of the display module used for simulating eye features on the robot in a robot control method provided by another embodiment of the present application. Referring to FIG. 15, dynamically adjusting the output screen of the display module used for simulating eye features on the robot according to the target deflection angle includes S5031 to S5033, which are detailed as follows:
• The eye feature target position is the position at which the line of sight of the eye simulated by the output screen looks in the direction of the target object for the first time.
• The process of robot gaze tracking can be divided into two aspects: one is to align the front of the robot with the front of the target object, and the other is to adjust the direction of the robot's eye line of sight to aim at the key center of the target object.
  • the above two different aspects can be accomplished by different parts of the robot.
  • the pose of the robot can be adjusted by controlling the rotation of the robot to realize the alignment of the front face; and the alignment of the eye sight can be completed by the operation of S5031. That is, the position of the simulated eye feature in the output screen of the display module is dynamically adjusted, so that the line of sight of the robot's eye is aligned with the key center of the target object.
• In this embodiment, controlling the rotation of the robot and adjusting the output screen are performed at the same time. While the robot controls its rotating part to rotate, it can dynamically refresh the position of the simulated eye feature in the output screen of the display module. The position of the simulated eye feature in the output screen does not jump directly to the eye feature target position; rather, based on a preset eyeball rotation speed, it rotates from the initial position before adjustment at that speed and requires a certain rotation time to reach the eye feature target position, that is, to move from the initial position to the position of the target simulated eye feature.
  • the eye movement in the output screen of the display module will be coupled with the movement of the main body of the robot.
• Specifically, the rotation angle of the eyeball relative to the target object includes the rotation angle of the eyeball relative to the robot (that is, the angle by which the robot controls the eyeball to rotate) and the rotation of the robot itself (that is, the rotation achieved through the rotating part). The rotation angle of the eyeball relative to the target object therefore equals the target deflection angle, and the angle of the eyeball relative to the robot ultimately needs to be adjusted according to the actual angle of the rotating part.
  • the rotating part will feed back the real-time rotation angle to the robot in real time.
  • the robot can superimpose the current rotation angle of the robot's eyeball and the real-time rotation angle of the rotating part to determine whether the above-mentioned target offset angle has been reached.
  • the real-time rotation angle fed back by the above-mentioned rotating component is the rotation angle during the entire rotation process from the start of rotation to the time of feedback.
  • the rotating component can feed back the real-time rotation angle to the robot at a preset time interval.
• For example, the time interval can be 30 μs or 30 ms.
  • the real-time rotation angle can be fed back approximately in real time.
  • the determination process of the above-mentioned rotation angle can be completed by calling the built-in rotation angle control module of the robot.
• For example, the rotation angle control module can be a PD (proportional-derivative) controller, a PI (proportional-integral) controller, a PID (proportional-integral-derivative) controller, or another control module. The angle calculation process of the rotation angle control module can be based on principles such as proportional-integral-derivative control, predictive control, or sliding mode control, which is not limited here.
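A minimal sketch of the superposition check described above, in which the current eyeball angle plus the rotating part's cumulative feedback angle is compared against the target deflection angle; the tolerance and sample values are assumptions:

```python
def reached_target(eye_angle, motor_feedback_angle, target_angle, tol=1e-3):
    """True once the eyeball angle relative to the robot plus the
    rotating part's cumulative rotation covers the target deflection."""
    return abs((eye_angle + motor_feedback_angle) - target_angle) <= tol

# Feedback arrives roughly every 30 ms; each tick the test is re-run.
print(reached_target(eye_angle=24.0, motor_feedback_angle=6.0,
                     target_angle=30.0))  # -> True
```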
• When the robot controls the rotation of the rotating part, it needs to start the motor drive and use it to drive the rotating part to rotate, thereby realizing the mechanical rotation of the robot; however, operations such as motor start-up and motor traction are time-consuming.
  • the eyeball movement can be realized by only refreshing the output image. Since the output image of the display module can reach a refresh rate of 60Hz, 120Hz or even higher, the eyeball movement is equivalent to an instant response, which is much faster than mechanical rotation.
• For a human, the eyes often rotate faster than the body. In this embodiment, the rotation of the robot's body is controlled by rotating parts while the eye movement is simulated by the display module, which makes the robot's gaze tracking process more personified and further improves the user experience.
  • FIG. 16 shows a schematic diagram of a robot's eye sight aligned with a target object for the first time according to an embodiment of the present application.
• As shown in FIG. 16, the direction of the robot's eye line of sight is initially the PA direction, and the front face of the robot also faces the PA direction at this time. The target deflection angle between the robot's eye direction and the target object is ∠APD. The robot can control the eyeball in the output screen of the display module to rotate while the robot's body moves. Because the eyeball rotates quickly and its rotation is superimposed on the rotation of the robot's body, when the robot's eye line of sight is first aligned with the target object, the front face of the robot is still in the PB direction and is not yet aligned with the target object. At this time, the rotation angle of the eyeball relative to the robot is α, and the rotation angle of the robot's body relative to the target object is β.
  • FIG. 17 shows a specific implementation flowchart of a robot control method S5031 provided by another embodiment of the present application.
  • S5031 in a robot control method provided in this embodiment includes: S1701 to S1702, which are detailed as follows:
  • the horizontal deflection amount and the vertical deflection amount are determined based on the real-time rotation angle.
• Since the simulated eyeball in the output image of the display module has two degrees of freedom, it can move in any direction within the plane of the display screen, and every moving direction can be decomposed into horizontal and vertical components. Therefore, after the robot receives the real-time rotation angle fed back by the rotating component, the rotation angle can be decomposed in the horizontal and vertical directions to obtain the horizontal deflection amount and the vertical deflection amount.
• In one embodiment, the above-mentioned real-time rotation angle may be composed of the rotation components of each rotating part. In this case, the robot can determine the rotation direction corresponding to each rotation component, superimpose all rotation components in the horizontal direction to obtain the above-mentioned horizontal deflection amount, and superimpose all rotation components in the vertical direction to obtain the above-mentioned vertical deflection amount.
  • the rotating part can rotate in multiple directions.
  • the robot can decompose according to the three-dimensional rotation angle fed back by the rotating part to obtain the above-mentioned horizontal deflection amount and vertical deflection amount.
• The eye feature target position can be determined from the target deflection angle and the deflection amounts as follows:

eye_yaw = target_yaw - motor_yaw
eye_pitch = target_pitch - motor_pitch

where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
• The above-mentioned eye feature target position refers to the offset position of the eyeball in the display screen of the robot when the eyeball is aligned with the target object. Through the above calculation, the horizontal component and the vertical component of the eye feature target position can be obtained, thereby uniquely determining that position. The calculation process of the eye feature target position is simple, which can reduce the computing pressure of the robot.
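A minimal sketch of the per-feedback-tick computation, using the eye = target − motor relation given above; all values are illustrative:

```python
def eye_target_position(target_yaw, target_pitch, motor_yaw, motor_pitch):
    """eye = target - motor, per the relations above."""
    return target_yaw - motor_yaw, target_pitch - motor_pitch

# As the rotating parts report progress, the on-screen eyeball target
# shrinks toward (0, 0), i.e. the eyeball gradually returns to center.
print(eye_target_position(30.0, 10.0, motor_yaw=6.0, motor_pitch=2.0))
# -> (24.0, 8.0)
```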
• After the eye line of sight looks in the direction of the target object for the first time, the position of the simulated eye feature is dynamically adjusted according to the real-time rotation angle to keep the eye line of sight looking in the direction of the target object.
• Specifically, the robot can obtain the real-time rotation angle and continue to adjust the position of the simulated eye feature in the output screen of the display module; that is, the simulated eye feature gradually returns to the center. Finally, after the rotating part has rotated to the target deflection angle, the front face of the robot faces the target object, and the robot's eye line of sight is consistent with its front face, that is, the eyeball has rotated back to the center area of the robot's eyes (the eyeball returns to center).
• In one embodiment, the method of adjusting the position of the simulated eye feature according to the real-time rotation angle may specifically be: based on the real-time rotation angle fed back by the rotating component at each feedback moment, determine the rotation speed of the rotating component, and control the simulated eyeball in the display module to move in the direction opposite to the rotation of the rotating component at that rotation speed, so as to offset the displacement caused by the rotation of the rotating component, thereby keeping the eye line of sight aligned with the target object.
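The opposite-direction compensation just described might be sketched as a per-tick velocity command, estimating the rotating part's speed from two consecutive feedback samples; the tick length is an assumed value:

```python
def compensate_eye(prev_motor_angle, curr_motor_angle, dt):
    """Estimate the rotating part's speed from two feedback samples and
    return the eyeball velocity that cancels it (opposite sign)."""
    motor_rate = (curr_motor_angle - prev_motor_angle) / dt
    return -motor_rate  # move the eyeball against the body rotation

# e.g. feedback every 30 ms: body turned 1.5 deg -> eye counter-rotates
print(compensate_eye(10.0, 11.5, dt=0.03))  # -> -50.0 deg/s
```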
• In one embodiment, the rotating part of the robot includes one degree of freedom, which is used to control the head to rotate in the left-right direction.
  • FIG. 18 shows a schematic diagram of a robot with one degree of freedom in a rotating part provided by an embodiment of the present application.
• As shown in FIG. 18, when the robot controls the head to rotate left and right, the rotation takes a relatively long time, so the line of sight of the simulated eyeball on the display module is aimed at the target object first. Afterwards, the head of the robot can continue to rotate, and the eyeball can move in the opposite direction in the horizontal direction according to the head's horizontal rotation amount while remaining still in the vertical direction.
  • the robot rotating part contains two degrees of freedom, that is, it is used to control the head to rotate in the left-right direction and the up-down direction.
  • FIG. 19 shows a schematic diagram of a robot with two degrees of freedom in a rotating part provided by an embodiment of the present application.
  • the head of the robot can be rotated in two directions.
• The rotating part 1 controls the head to rotate left and right, and the rotating part 2 controls the head to rotate up and down. Since the rotation of these two rotating modules takes a relatively long time, the line of sight of the simulated eyeball on the display module is aimed at the target object first. The robot can then continue to control the head to rotate through the rotating part 1 and the rotating part 2.
• At this time, the eyeball can move in the opposite horizontal direction according to the horizontal rotation amount of the rotating part 1, and in the opposite vertical direction according to the vertical rotation amount of the rotating part 2, so that the robot's line of sight keeps tracking the target object.
• In this embodiment, the robot couples the real-time rotation of its rotating part into the process of controlling the eyeball movement, so that the accuracy of gaze tracking can be improved. After the eye line of sight is aligned with the target object for the first time, the position of the simulated eye feature is dynamically adjusted according to the real-time rotation amount of the rotating part, so that the line of sight remains aligned with the target object, thereby improving the degree of personification.
  • FIG. 20 shows a specific implementation flow chart of controlling the rotation of the robot according to the position information of the target object in a method S502 for controlling a robot according to another embodiment of the present application.
  • S502 in a robot control method provided in this embodiment includes: S2001 to S2003, which are detailed as follows:
• The torso rotation of the robot is controlled based on the target deflection angle; the target deflection angle is the corresponding deflection angle when the robot's torso rotates from an initial angle to face the target object.
• The rotation angle required for the robot to rotate from its initial angle into alignment with the target object is the aforementioned target deflection angle, so the torso of the robot can be controlled to rotate based on the target deflection angle. The initial angle is the direction angle toward which the front of the robot's torso faces.
  • FIG. 21 shows a specific implementation flowchart of a robot control method S2001 provided by another embodiment of the present application.
  • S2001 in a robot control method provided in this embodiment includes: S2101 to S2103, which are detailed as follows:
  • the first reference rotation angle of the torso of the robot in each control period is acquired in a preset control period.
• The torso rotation of the robot can be controlled through a torso rotating part deployed on the robot's torso. The robot sends a control command to the torso rotating part at a preset control cycle; each control command sent includes the rotation angle of the torso rotating part within the overall rotation process, and this rotation angle is dynamically adjusted according to the actual rotation amount fed back in each control cycle, so as to ensure the accuracy of the rotation operation. Therefore, when the robot sends the first cycle rotation instruction to the torso rotating part, it needs to determine the current pose of the robot, that is, the rotation angle of the robot's torso at the start time of the control cycle, which is the aforementioned first reference rotation angle.
• According to the first reference rotation angle of the current control cycle, the first tracking error angle of the previous control cycle, and the target deflection angle, the first cycle rotation instruction corresponding to the current control cycle is generated. The rotation angle in the first cycle rotation instruction is specifically:

motor0_yaw = current_motor0_yaw + motor0_Kp × motor0_yaw_diff + motor0_Kd × (motor0_yaw_diff - motor0_yaw_last_diff)
motor0_yaw_diff = target_yaw - current_motor0_yaw

where motor0_yaw is the rotation angle in the first cycle rotation command; current_motor0_yaw is the first reference rotation angle of the current control cycle; target_yaw is the target deflection angle; motor0_yaw_diff is the first tracking error angle of the current control cycle; motor0_yaw_last_diff is the first tracking error angle of the previous control cycle; and motor0_Kp and motor0_Kd are preset adjustment parameters of the torso rotating part.
• The rotation angle can be adjusted in real time through motor0_Kp and motor0_Kd corresponding to the torso rotating part. These two parameters can be configured by default when the robot leaves the factory, or determined through big data learning based on the robot's historical rotation records in historical control operations; the methods for obtaining the aforementioned preset parameters are not limited here. Among them, motor0_Kp is the calibration parameter of the current cycle, and motor0_Kd is the calibration parameter of the cycle iteration.
• The first tracking error angle represents the difference from the target deflection angle, that is, the angle by which the robot's torso still needs to rotate, and current_motor0_yaw is the current deflection angle of the robot's torso, that is, the first reference rotation angle. If the above value is 0, there is no deviation: the front of the robot's torso is aligned with the target object, and there is no need to continue rotating. Conversely, if the value is non-zero, the robot still needs to continue rotating, and the first cycle control command is generated.
  • the torso of the robot is controlled to rotate according to each of the first cycle rotation instructions.
• In this embodiment, the robot generates a first cycle rotation command in each control cycle; the command contains the required rotation angle of the rotating component determined above and is sent to the torso rotating part to control it to rotate, until the rotation angle of the torso rotating part reaches the target deflection angle. At this point, the current frontal pose of the robot is recognized as being aligned with the target object. In this way, per-cycle precision control of the rotating part is realized, and when the first cycle rotation command is generated, motor0_Kp and motor0_Kd are used to eliminate mechanical rotation deviation and improve the accuracy of robot control.
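Taking the first-cycle formula above at face value (a PD-style law on the tracking error, which is a reading of the variable list rather than text reproduced verbatim from the filing), one torso control loop could be simulated as follows; the gains and the idealized motor model are hypothetical:

```python
def first_cycle_command(current_yaw, target_yaw, last_diff, kp, kd):
    """PD-style rotation command for the torso rotating part:
    motor0_yaw = current + Kp*diff + Kd*(diff - last_diff),
    with diff = target - current (the first tracking error angle)."""
    diff = target_yaw - current_yaw
    cmd = current_yaw + kp * diff + kd * (diff - last_diff)
    return cmd, diff

current, last_diff = 0.0, 0.0
for cycle in range(5):
    cmd, last_diff = first_cycle_command(current, 30.0, last_diff,
                                         kp=0.5, kd=0.1)
    current = cmd  # idealized motor: reaches the commanded angle
    print(f"cycle {cycle}: command {cmd:.2f} deg")
# The commanded angle converges toward the 30-degree target deflection.
```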
• During the rotation of the robot's torso, the head of the robot is dynamically controlled to rotate to the head target position according to the real-time torso angle fed back in real time and the target deflection angle; the head target position is the position at which the robot's head faces the target object for the first time.
• In this embodiment, the process by which the robot aligns the front face of its head with the front face of the target object can be divided into two aspects: one is aligning the front face of the robot's head with the front face of the target object, and the other is aligning the front face of the robot's torso with the front face of the target object. When both are aligned with the front face of the target object, the robot is recognized as being aligned with the target object.
  • the above two different aspects can be accomplished by different parts of the robot.
• For example, the head pose of the robot can be adjusted through the head rotating part of the robot, so as to align the front face of the head with the front face of the target object; and the torso pose of the robot can be adjusted through the torso rotating part of the robot, so as to align the front face of the torso with the front face of the target object.
  • S2001 and S2002 are executed at the same time, so while the robot controls the torso rotating part of the robot to perform a rotating operation, it can control the head rotating part of the robot to perform a rotating operation.
  • the rotation speed of the head rotating part relative to the target object will be higher than the rotation speed of the body rotating part relative to the target object, that is, the head part will be aligned with the front face of the target object faster than the body part.
• Specifically, the rotation angle of the head relative to the target object includes the rotation angle of the head relative to the robot (that is, the angle by which the robot controls the head rotating part) and the rotation of the robot's torso (that is, the rotation achieved through the torso rotating part). Therefore, the actual rotation angle of the head needs to be adjusted according to the actual rotation angle of the torso rotating part. Based on this, during the rotation of the torso rotating part, the torso rotating part feeds back the real-time torso angle to the robot in real time. The robot can superimpose the current head rotation angle and the real-time torso angle of the torso rotating part to determine whether the target deflection angle has been reached, and thus whether the front face of the robot's head is aligned with the target object. If the superimposed angle is less than the target deflection angle, the front face of the head is not yet aligned with the target object, and the robot continues to control the head rotating part and the torso rotating part to rotate; conversely, if the superimposed angle equals the target deflection angle, the front face of the robot's head is recognized as being aligned with the target object.
• For a human, the head rotates faster than the torso. Therefore, the torso rotation of the robot is controlled by the torso rotating component, and the head rotating component, coupled with the torso rotation, controls the rotation of the robot's head so that the head is aligned with the target object before the torso. This makes the gaze tracking process of the robot more personified and further improves the user experience.
• After the robot's head faces the direction of the target object, the head of the robot is dynamically controlled to rotate according to the real-time torso angle, so as to keep the head of the robot facing the direction of the target object.
• After the front face of the robot's head is aligned with the target object for the first time, the torso rotating part is still rotating, that is, its rotation angle has not yet reached the preset target deflection angle. In order to keep the front face of the robot's head continuously aligned with the target object, the robot can obtain the real-time torso angle of the torso rotating part and continue to control the head through the head rotating part, gradually returning the head to the center. Finally, after the torso rotating part rotates to the target deflection angle, the front face of the robot's torso also faces the target object, and the front face of the head is consistent with the front face of the torso, that is, the head has returned to the center.
• In one embodiment, the method of adjusting the head rotation according to the real-time torso angle may specifically be: based on the real-time torso angle fed back by the torso rotating component at each feedback moment, determine the torso rotation speed of the torso rotating component, and control the head rotating member to move in the direction opposite to the rotation of the torso rotating member at that torso rotation speed, so as to offset the displacement caused by the torso rotation, thereby keeping the front face of the head facing the target object.
• In this embodiment, the robot couples the real-time rotation of the torso rotating parts into the process of controlling the head rotation, so that the accuracy of gaze tracking can be improved. After the front face of the head is aligned with the target object for the first time, the rotation of the head rotating part can be dynamically adjusted according to the real-time rotation amount of the torso rotating part, so that the front face of the head remains aligned with the target object, thereby improving the degree of personification.
  • FIG. 22 shows a specific implementation flowchart of a robot control method S2002 provided by another embodiment of the present application.
• Referring to FIG. 22, with respect to the foregoing embodiments, S2002 in a robot control method provided in this embodiment includes S2201 to S2203, which are detailed as follows:
  • the horizontal deflection angle and the vertical deflection angle are determined based on the target deflection angle.
• Since the robot head includes two degrees of freedom, the first head rotating part is used to control the head to rotate in the horizontal direction, and the second head rotating part is used to control the head to rotate in the vertical direction. Therefore, the robot can decompose the target deflection angle in the horizontal and vertical directions to obtain the horizontal deflection angle and the vertical deflection angle.
  • FIG. 23 shows a schematic diagram of a robot with three degrees of freedom in a rotating part provided by an embodiment of the present application.
• As shown in FIG. 23, the head of the robot can rotate in two directions: rotating part 1 controls the head to rotate left and right, and rotating part 2 controls the head to rotate up and down; the torso of the robot can rotate in one direction, that is, rotating part 0 controls the body to rotate in the left-right direction. In the vertical direction, the second head rotating part (rotating part 2) performs all of the vertical rotation, while in the horizontal direction there are both the first head rotating part (rotating part 1), which controls the left-right rotation of the head, and the torso rotating part (rotating part 0), which controls the left-right rotation of the torso, so the real-time torso angle must be taken into account during head rotation.
  • the head of the robot is controlled to rotate in the vertical direction according to the vertical deflection angle.
• Since only the second head rotating component controls the vertical direction, the rotation of the torso does not affect the vertical rotation angle; therefore, the second head rotating component can be controlled to rotate according to the vertical deflection angle determined above, moving the head to the target position in the vertical direction.
  • FIG. 24 shows a specific implementation flowchart of a robot control method S2202 provided by another embodiment of the present application.
  • S2202 in a robot control method provided in this embodiment includes: S2401 to S2403, which are detailed as follows:
  • the second reference rotation angle of the head of the robot in each control period in the vertical direction is acquired in a preset control period.
• The robot sends a control command to the second head rotating part at a preset control cycle. Each control command sent includes the vertical rotation angle of the second head rotating part within the overall rotation process, and this angle is dynamically adjusted according to the actual vertical rotation amount fed back in each control cycle. Therefore, when the robot sends the second cycle rotation instruction to the second head rotating component, it needs to determine the current vertical pose of the robot's head, that is, the vertical rotation angle of the head at the start of the control cycle, which is the above-mentioned second reference rotation angle.
• According to the second reference rotation angle of the current control cycle, the second tracking error angle of the previous control cycle, and the vertical deflection angle, the second cycle rotation instruction corresponding to the current control cycle is generated. The rotation angle in the second cycle rotation instruction is specifically:

motor2_pitch = current_motor2_pitch + motor2_Kp × motor2_pitch_diff + motor2_Kd × (motor2_pitch_diff - motor2_pitch_last_diff)
motor2_pitch_diff = target_pitch - current_motor2_pitch

where motor2_pitch is the rotation angle in the second cycle rotation command; current_motor2_pitch is the second reference rotation angle of the current control cycle; target_pitch is the vertical deflection angle; motor2_pitch_diff is the second tracking error angle of the current control cycle; motor2_pitch_last_diff is the second tracking error angle of the previous control cycle; and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating part.
• The rotation angle can be adjusted in real time through motor2_Kp and motor2_Kd corresponding to the second head rotating part. The above two parameters can be configured by default when the robot leaves the factory, or determined through big data learning based on the robot's historical rotation records in historical control operations; of course, they can also be updated in real time by communicating with a cloud server. The methods for obtaining the aforementioned preset parameters are not limited here. Among them, motor2_Kp is the calibration parameter of the current cycle, and motor2_Kd is the calibration parameter of the cycle iteration.
• The second tracking error angle is the vertical deviation from the target deflection angle, that is, the angle by which the robot's head still needs to rotate in the vertical direction, and current_motor2_pitch is the current vertical deflection angle of the robot's head, that is, the second reference rotation angle. If this angle equals the vertical component of the target deflection angle, the above value is 0 and there is no deviation: the head of the robot has reached the target position in the vertical direction, and there is no need to continue rotating. Conversely, if the value is non-zero, the second head rotating part still needs to continue rotating, and a second cycle control command is generated.
  • the head of the robot is controlled to rotate in the vertical direction according to each of the second cycle rotation instructions.
• In this embodiment, the robot generates a second cycle rotation instruction in each control cycle; the instruction contains the required rotation angle of the second head rotating part determined above and is sent to the second head rotating part in each control cycle, to control it to rotate until the vertical deflection angle is reached.
• During the rotation of the robot's torso, the head of the robot is dynamically controlled to rotate in the horizontal direction to the horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle; the horizontal target position is the position at which the robot's head faces the target object in the horizontal direction for the first time.
• Since the robot's torso can also rotate in the horizontal direction, the horizontal rotation of the head is coupled with the horizontal movement of the torso. The horizontal rotation angle of the head relative to the target object includes the rotation angle of the head relative to the robot in the horizontal direction (that is, the angle by which the robot controls the first head rotating part) and the horizontal rotation of the robot's torso (that is, the rotation achieved through the torso rotating part). Therefore, the actual horizontal rotation angle of the head needs to be adjusted according to the actual horizontal rotation angle of the torso rotating part.
  • FIG. 25 shows a specific implementation flowchart of a robot control method S2203 provided by another embodiment of the present application.
  • S2203 in a robot control method provided in this embodiment includes: S2501 to S2503, which are detailed as follows:
  • the third reference rotation angle of the head of the robot in each control period in the horizontal direction is acquired in a preset control period.
• Specifically, the robot obtains the third reference rotation angle from the first head rotating part at a preset control cycle. For details, refer to the related description of S2401, which is not repeated here.
• According to the third reference rotation angle of the current control cycle, the third tracking error angle of the previous control cycle, the real-time torso angle, and the horizontal deflection angle, the third cycle rotation instruction corresponding to the current control cycle is generated. The rotation angle in the third cycle rotation instruction is specifically:

motor1_yaw = current_motor1_yaw + motor1_Kp × motor1_yaw_diff + motor1_Kd × (motor1_yaw_diff - motor1_yaw_last_diff)
motor1_yaw_diff = target_yaw - current_motor1_yaw - current_motor0_yaw

where motor1_yaw is the rotation angle in the third cycle rotation command; current_motor1_yaw is the third reference rotation angle of the current control cycle; target_yaw is the horizontal deflection angle; motor1_yaw_diff is the third tracking error angle of the current control cycle; motor1_yaw_last_diff is the third tracking error angle of the previous control cycle; motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating part; and current_motor0_yaw is the real-time torso angle.
• In this calculation process, motor1_Kp and motor1_Kd can be introduced to realize parameter correction.
• The head of the robot is dynamically controlled to rotate in the horizontal direction according to each of the third cycle rotation instructions. In this embodiment, the robot generates a third cycle rotation instruction in each control cycle; the instruction contains the required rotation angle of the first head rotating part determined above and is sent to the first head rotating part in each control cycle, to control it to rotate until the horizontal deflection angle is reached.
  • the horizontal rotation amount of the robot torso is considered in the process of controlling the horizontal rotation of the head, so that the accuracy of the rotation control can be improved.
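Combining the pieces above, a hedged sketch of the coupled horizontal head control: the tracking error discounts the live torso angle, so the head leads the torso toward the target and then returns toward center as the torso catches up. The gains, torso speed, and idealized motors are assumptions:

```python
def head_yaw_command(curr_head, target_yaw, torso_yaw, last_diff, kp, kd):
    """Third-cycle rotation command for the first head rotating part.
    The tracking error subtracts the real-time torso angle, so head
    rotation plus torso rotation together cover target_yaw."""
    diff = target_yaw - curr_head - torso_yaw
    return curr_head + kp * diff + kd * (diff - last_diff), diff

head, torso, last_diff = 0.0, 0.0, 0.0
for cycle in range(10):
    head, last_diff = head_yaw_command(head, 30.0, torso, last_diff,
                                       kp=0.6, kd=0.1)
    torso = min(30.0, torso + 5.0)  # slow torso: 5 deg per cycle
    print(f"cycle {cycle}: head {head:5.2f}, torso {torso:5.2f}")
# The head angle rises first, then decays back toward 0 (head returns
# to center) as the torso alone ends up covering the full 30 degrees.
```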
  • FIG. 26 shows a structural block diagram of a robot control device provided in an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
  • the robot control device includes:
  • the location information acquiring unit 261 is configured to acquire location information of the target object
  • the rotation control unit 262 is configured to control the rotation of the robot according to the position information of the target object
• the screen adjustment unit 263 is configured to dynamically adjust, during the rotation of the robot, the output screen of the display module used for simulating eye features on the robot, so that the eye line of sight simulated in the output screen of the display module on the robot looks in the direction of the target object during the rotation of the robot.
• In one embodiment, the screen adjustment unit 263 is specifically configured to: during the rotation of the robot, dynamically adjust the output screen of the display module used for simulating eye features on the robot according to the target deflection angle; the target deflection angle is the corresponding deflection angle when the robot rotates from the initial angle to face the target object.
  • the picture adjustment unit 263 includes:
• the eye movement control unit is configured to, in the process of controlling the rotation of the robot, dynamically adjust the position of the simulated eye feature in the output screen to the eye feature target position according to the real-time rotation angle fed back by the robot in real time and the target deflection angle; the eye feature target position is the position at which the eye line of sight simulated by the output screen looks in the direction of the target object for the first time;
• the eye correction control unit is configured to, after the eye line of sight looks in the direction of the target object for the first time, dynamically adjust the position of the simulated eye feature according to the real-time rotation angle, so as to keep the eye line of sight looking in the direction of the target object.
  • the eye movement control unit includes:
  • a deflection component determining unit configured to determine a horizontal deflection amount and a vertical deflection amount based on the real-time rotation angle
• the eye feature target position determining unit is configured to determine the eye feature target position according to the target deflection angle, the horizontal deflection amount, and the vertical deflection amount, specifically:

eye_yaw = target_yaw - motor_yaw
eye_pitch = target_pitch - motor_pitch

where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
  • the rotation control unit 262 includes:
  • the torso rotation control unit is configured to control the torso rotation of the robot based on the target deflection angle; the target deflection angle is the corresponding deflection angle when the torso of the robot rotates from an initial angle to a target object;
  • the head rotation control unit is used to dynamically control the head of the robot to rotate to the target head position according to the real-time torso angle fed back in real time and the target deflection angle during the rotation of the torso of the robot.
  • the head target position is the position of the robot head facing the target object for the first time;
• the head rotation return unit is configured to, after the robot's head faces the direction of the target object, dynamically control the head rotation of the robot according to the real-time torso angle, so as to keep the robot's head facing the direction of the target object.
  • the trunk rotation control unit includes:
  • a first reference rotation angle determining unit configured to obtain a first reference rotation angle of the robot's torso in each control period in a preset control period;
• the first cycle rotation instruction generating unit is configured to generate, according to the first reference rotation angle of the current control cycle, the first tracking error angle of the previous control cycle, and the target deflection angle, the first cycle rotation instruction corresponding to the current control cycle; the rotation angle in the first cycle rotation instruction is specifically:

motor0_yaw = current_motor0_yaw + motor0_Kp × motor0_yaw_diff + motor0_Kd × (motor0_yaw_diff - motor0_yaw_last_diff)
motor0_yaw_diff = target_yaw - current_motor0_yaw

where motor0_yaw is the rotation angle in the first cycle rotation command; current_motor0_yaw is the first reference rotation angle of the current control cycle; target_yaw is the target deflection angle; motor0_yaw_diff is the first tracking error angle of the current control cycle; motor0_yaw_last_diff is the first tracking error angle of the previous control cycle; and motor0_Kp and motor0_Kd are preset adjustment parameters of the torso rotating part;
  • the first cycle rotation instruction execution unit is configured to control the torso of the robot to rotate according to each of the first cycle rotation instructions.
  • the head rotation control unit includes:
  • a rotation angle component determining unit configured to determine a horizontal deflection angle and a vertical deflection angle based on the target deflection angle
  • the first head rotation control unit is configured to control the head of the robot to rotate in the vertical direction according to the vertical deflection angle; and at the same time,
  • the second head rotation control unit is used to dynamically control the head of the robot to rotate to the horizontal in the horizontal direction according to the real-time torso angle and the horizontal deflection angle fed back in real time during the rotation of the torso of the robot Target position; the horizontal target position is the position of the robot head facing the target object in the horizontal direction for the first time.
  • the first head rotation control unit includes:
  • the second reference rotation angle determination unit is configured to obtain the second reference rotation angle of the head of the robot in the vertical direction in each control period in a preset control period;
• the second cycle rotation command generating unit is configured to generate, according to the second reference rotation angle of the current control cycle, the second tracking error angle of the previous control cycle, and the vertical deflection angle, the second cycle rotation command corresponding to the current control cycle; the rotation angle in the second cycle rotation command is specifically:

motor2_pitch = current_motor2_pitch + motor2_Kp × motor2_pitch_diff + motor2_Kd × (motor2_pitch_diff - motor2_pitch_last_diff)
motor2_pitch_diff = target_pitch - current_motor2_pitch

where motor2_pitch is the rotation angle in the second cycle rotation command; current_motor2_pitch is the second reference rotation angle of the current control cycle; target_pitch is the vertical deflection angle; motor2_pitch_diff is the second tracking error angle of the current control cycle; motor2_pitch_last_diff is the second tracking error angle of the previous control cycle; and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating part;
  • the second cycle rotation instruction execution unit is configured to control the head of the robot to rotate in the vertical direction according to each of the second cycle rotation instructions.
  • the second head rotation control unit includes:
  • the third reference rotation angle determination unit is configured to obtain the third reference rotation angle of the head of the robot in each control period in the horizontal direction in a preset control period;
• the third cycle rotation instruction generating unit is configured to generate, according to the third reference rotation angle of the current control cycle, the third tracking error angle of the previous control cycle, the real-time torso angle, and the horizontal deflection angle, the third cycle rotation command corresponding to the current control cycle; the rotation angle in the third cycle rotation command is specifically:

motor1_yaw = current_motor1_yaw + motor1_Kp × motor1_yaw_diff + motor1_Kd × (motor1_yaw_diff - motor1_yaw_last_diff)
motor1_yaw_diff = target_yaw - current_motor1_yaw - current_motor0_yaw

where motor1_yaw is the rotation angle in the third cycle rotation command; current_motor1_yaw is the third reference rotation angle of the current control cycle; target_yaw is the horizontal deflection angle; motor1_yaw_diff is the third tracking error angle of the current control cycle; motor1_yaw_last_diff is the third tracking error angle of the previous control cycle; motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating part; and current_motor0_yaw is the real-time torso angle;
• the third cycle rotation instruction execution unit is configured to dynamically control the head of the robot to rotate in the horizontal direction according to each of the third cycle rotation instructions.
  • the location information acquiring unit 261 includes:
  • a scene image acquisition unit configured to acquire a scene image containing the target object through a camera module built into the robot;
• the object center coordinate determining unit is configured to mark the boundary coordinates of the target object from the scene image, and determine the position information of the target object according to the boundary coordinates; the position information of the target object includes the object center coordinates of the target object;
• a target deflection angle calculation unit, configured to determine the target deflection angle according to the object center coordinates and the image center coordinates of the scene image, specifically:

target_yaw = arctan(CD / OC)
target_pitch = arctan(AC / OC)

where target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; OC is the focal length of the camera module; CD is the horizontal deviation between the object center coordinates and the image center coordinates; and AC is the vertical deviation between the object center coordinates and the image center coordinates.
  • the scene image acquisition unit includes:
• the image correction parameter acquisition unit is configured to determine the image correction parameter according to the offset between the camera module and the position of the simulated eye feature in the output screen of the display module;
  • the image correction execution unit is configured to adjust the original image collected by the camera module based on the image correction parameter to generate the scene image.
  • the robot control device provided by the embodiments of the present application can also determine the target deflection angle according to the relative position between the target object and the robot, and adjust the output screen and rotating parts of the display module based on the target deflection angle to achieve eye tracking. Since the eye movement of the robot is simulated through the output interface of the display module, no motor drive is required to achieve deflection during the movement, the response time is short, smooth eye tracking is realized, and the degree of personification and accuracy of eye tracking are improved.
  • FIG. 27 is a schematic structural diagram of a robot provided by an embodiment of the application.
• The robot 27 of this embodiment includes: at least one processor 270 (only one is shown in FIG. 27), a memory 271, and a computer program 272 that is stored in the memory 271 and executable on the at least one processor; the processor 270 implements the steps in any of the foregoing robot control method embodiments when executing the computer program 272.
  • the robot 27 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the robot may include, but is not limited to, a processor 270 and a memory 271.
• FIG. 27 is only an example of the robot 27 and does not constitute a limitation on the robot 27; the robot may include more or fewer parts than shown, a combination of some parts, or different parts, and may, for example, also include input and output devices, network access devices, and so on.
  • the so-called processor 270 may be a central processing unit (Central Processing Unit, CPU), and the processor 270 may also be other general-purpose processors, digital signal processors (Digital Signal Processors, DSPs), and application specific integrated circuits (Application Specific Integrated Circuits). , ASIC), ready-made programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
• In some embodiments, the memory 271 may be an internal storage unit of the robot 27, such as a hard disk or memory of the robot 27. In other embodiments, the memory 271 may also be an external storage device of the robot 27, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the robot 27. Further, the memory 271 may include both an internal storage unit of the robot 27 and an external storage device. The memory 271 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 271 can also be used to temporarily store data that has been output or will be output.
  • An embodiment of the present application also provides a network device, which includes: at least one processor, a memory, and a computer program stored in the memory and running on the at least one processor, and the processor executes The computer program implements the steps in any of the foregoing method embodiments.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • The embodiments of the present application also provide a computer program product; when the computer program product is executed on the robot, the steps in the foregoing method embodiments can be realized.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The computer program may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the camera/robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a floppy disk, or a CD-ROM. In some jurisdictions, according to legislation and patent practice, the computer-readable medium cannot be an electrical carrier signal or a telecommunications signal.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

Abstract

A robot control method and apparatus, a robot, and a storage medium, applicable to the technical field of robots. The method comprises: acquiring position information of a target object; controlling a robot to rotate according to the position information of the target object; and, during the rotation of the robot, dynamically adjusting an output picture of a display module on the robot that is used to simulate an eye feature. In the provided technical solution, the eye movement of the robot is simulated by means of the output interface of the display module, and deflection is realized without a motor drive during movement, so the response time is relatively short; smooth gaze tracking behavior is realized, improving the degree of personification and the accuracy of the gaze tracking behavior; the robot can thus be better used for accompanying and educating children, enhancing the human-machine interaction experience.

Description

Robot control method and apparatus, robot, and storage medium
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on May 8, 2020 under application number 202010382291.0 and entitled "Robot control method and apparatus, robot, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of robotics, and in particular relates to a robot control method and apparatus, a robot, and a storage medium.
Background
In modern families, more and more parents cannot accompany their children at all times. When they cannot be with their children, parents can use an intelligent robot to accompany and educate them. Existing intelligent robots can communicate with a child and, based on that communication, learn and update the way they communicate with the child.
Anthropomorphism is an important indicator of a robot's intelligence. In human-computer interaction, the accuracy of target following and gaze tracking directly reflects the robot's degree of anthropomorphism, and improving that anthropomorphism improves the child's experience. Existing robot control technology cannot achieve smooth gaze tracking, which lowers the robot's degree of personification and the accuracy of its gaze tracking behavior.
Summary of the Invention
The embodiments of the present application provide a robot control method and apparatus, a robot, and a storage medium, which can solve the problem that existing robot control technology cannot achieve smooth gaze tracking, so that the robot's degree of personification and the accuracy of its gaze tracking behavior are relatively low.
In a first aspect, an embodiment of the present application provides a robot control method, including:
acquiring position information of a target object;
controlling the robot to rotate according to the position information of the target object; and
during the rotation of the robot, dynamically adjusting an output picture of a display module on the robot that is used to simulate an eye feature, so that the line of sight of the simulated eye feature in the output picture of the display module looks in the direction of the target object throughout the rotation of the robot.
Controlling the robot to rotate specifically means controlling rotating components in the robot's body. If the robot includes a head, controlling the robot to rotate specifically means controlling the rotating component connecting the robot's head and body to rotate, so as to change the orientation of the front of the robot's head. If the robot includes a head, a torso, and a bottom (such as a base or legs), controlling the robot to rotate specifically means controlling the rotating component connecting the head and the torso to rotate, and controlling the rotating component connecting the torso and the bottom to rotate, so as to change the orientation of the head and the torso. There may be one rotating component connecting any two parts of the robot, or two or more; when two or more rotating components connect two parts, rotation with multiple degrees of freedom can be realized, where the multiple degrees of freedom include rotation in the vertical direction and rotation in the horizontal direction.
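To make this joint layout concrete, the following minimal Python sketch models the rotating components described above as named single-axis joints. The three-joint arrangement (torso yaw plus a two-axis head) is only one possible configuration, chosen to match the example robot described later in this document, and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Axis(Enum):
    YAW = "horizontal"      # rotation in the horizontal plane
    PITCH = "vertical"      # rotation in the vertical plane

@dataclass
class RotatingComponent:
    connects: str           # which two parts of the robot this component joins
    axis: Axis              # the single degree of freedom it provides
    angle_deg: float = 0.0  # current angle, fed back in real time

# One possible multi-degree-of-freedom layout: a torso yaw joint plus a
# two-degree-of-freedom head; a single neck joint would also fit the scheme.
joints = [
    RotatingComponent("torso-base", Axis.YAW),
    RotatingComponent("head-torso", Axis.YAW),
    RotatingComponent("head-torso", Axis.PITCH),
]
```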
The simulated eye feature is specifically a picture simulating a real person's eyeball, generated from video of a real person's eye region moving in various directions; an eye picture with eye features can also be constructed by animation, cartoon, three-dimensional model, or other means, including dots or arbitrary shapes used to simulate eyes. The output picture simulating the eye feature may contain display objects such as simulated pupils, eyeballs, eyelids, eyebrows, and eyelashes, which together form the simulated eye picture.
In a possible implementation of the first aspect, dynamically adjusting the output picture of the display module on the robot used to simulate the eye feature during the rotation of the robot is specifically:
during the rotation of the robot, dynamically adjusting the output picture of the display module used to simulate the eye feature according to a target deflection angle, where the target deflection angle is the deflection angle corresponding to the robot rotating from an initial angle to facing the target object.
The initial angle is specifically the direction angle corresponding to the front of the robot; if the robot includes a head, the direction angle corresponding to the front of the body is specifically the direction corresponding to the front of the robot's head, where the front of the robot's head refers to the face containing the simulated eyeballs.
In a possible implementation of the first aspect, dynamically adjusting the output picture of the display module used to simulate the eye feature during the rotation of the robot, so that the line of sight of the simulated eye feature looks in the direction of the target object throughout the rotation, includes:
in the process of controlling the rotation of the robot, dynamically adjusting the position of the simulated eye feature in the output picture to an eye feature target position according to the real-time rotation angle fed back by the robot and the target deflection angle, where the eye feature target position is the position at which the simulated line of sight first looks in the direction of the target object; and
after the line of sight first looks in the direction of the target object, dynamically adjusting the position of the simulated eye feature according to the real-time rotation angle so as to keep the line of sight looking in the direction of the target object.
In a possible implementation of the first aspect, dynamically adjusting the position of the simulated eye feature in the output picture to the eye feature target position according to the real-time rotation angle fed back by the robot and the target deflection angle includes:
determining a horizontal deflection amount and a vertical deflection amount based on the real-time rotation angle; and
determining the eye feature target position according to the target deflection angle, the horizontal deflection amount, and the vertical deflection amount. The eye feature target position is specifically:
eye_yaw = target_yaw - motor_yaw
eye_pitch = target_pitch - motor_pitch
where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
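A minimal sketch of this compensation, assuming all angles share the same units and sign convention and that the display controller accepts an (eye_yaw, eye_pitch) pair directly; the function name is illustrative rather than part of the described apparatus:

```python
def eye_feature_target(target_yaw: float, target_pitch: float,
                       motor_yaw: float, motor_pitch: float) -> tuple[float, float]:
    """Deflect the simulated eye by whatever rotation the body has not yet performed.

    target_yaw/target_pitch: deflection from the initial pose to the target object.
    motor_yaw/motor_pitch: rotation already performed, fed back in real time.
    """
    return target_yaw - motor_yaw, target_pitch - motor_pitch

# Example: the target sits 30 degrees to the right and 5 degrees up; the body has
# so far turned 10 degrees right and 2 degrees up, so the eyes must still be
# deflected by (20, 3) degrees to keep looking at the target.
print(eye_feature_target(30.0, 5.0, 10.0, 2.0))  # -> (20.0, 3.0)
```

As the fed-back rotation approaches the target deflection, the eye offset shrinks toward zero, so the pupils recentre naturally once the body faces the target.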
In a possible implementation of the first aspect, controlling the robot to rotate according to the position information of the target object includes:
controlling the torso of the robot to rotate based on the target deflection angle, where the target deflection angle is the deflection angle corresponding to the torso of the robot rotating from the initial angle to facing the target object;
during the rotation of the torso of the robot, dynamically controlling the head of the robot to rotate to a head target position according to the real-time torso angle fed back in real time and the target deflection angle, where the head target position is the position at which the robot's head first faces the direction of the target object; and
after the front of the robot's head first faces the direction of the target object, dynamically controlling the rotation of the head according to the real-time torso angle so as to keep the head facing the direction of the target object.
In a possible implementation, controlling the torso of the robot to rotate based on the target deflection angle includes:
acquiring, at a preset control period, a first reference rotation angle of the torso of the robot in each control period;
generating a first-period rotation instruction corresponding to the current control period according to the first reference rotation angle of the current control period, the first tracking error angle of the previous control period, and the torso rotation angle; and
controlling the torso of the robot to rotate according to each first-period rotation instruction.
The rotation angle in the first-period rotation instruction is specifically:
motor0_yaw_diff = target_yaw - current_motor0_yaw
motor0_yaw = current_motor0_yaw + motor0_Kp × motor0_yaw_diff + motor0_Kd × (motor0_yaw_diff - motor0_yaw_last_diff)
where motor0_yaw is the rotation angle in the first-period rotation instruction; current_motor0_yaw is the first reference rotation angle of the current control period; target_yaw is the target deflection angle; motor0_yaw_diff is the first tracking error angle of the current control period; motor0_yaw_last_diff is the first tracking error angle of the previous control period; and motor0_Kp and motor0_Kd are preset adjustment parameters of the torso rotating component.
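Read as a discrete proportional-derivative tracking law, a per-period update consistent with the variable definitions above might look like the following sketch; treating the instruction as the reference angle plus Kp/Kd correction terms is an interpretation of those definitions, and the helper name and gains are placeholders:

```python
def period_rotation(current_angle: float, target_angle: float,
                    last_diff: float, kp: float, kd: float) -> tuple[float, float]:
    """One control period of the tracking law for a single rotating component.

    Returns the commanded angle for this period and this period's tracking
    error, which the caller stores and passes back as last_diff next period.
    """
    diff = target_angle - current_angle               # tracking error this period
    command = current_angle + kp * diff + kd * (diff - last_diff)
    return command, diff
```

The same update, with target_pitch as the target angle, also matches the second-period instruction below that drives the head in the vertical direction.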
In a possible implementation of the first aspect, dynamically controlling the head of the robot to rotate to the head target position during the rotation of the torso, according to the real-time torso angle fed back in real time and the target deflection angle, includes:
determining a horizontal deflection angle and a vertical deflection angle based on the target deflection angle;
controlling the head of the robot to rotate in the vertical direction according to the vertical deflection angle; and
at the same time, during the rotation of the torso of the robot, dynamically controlling the head of the robot to rotate in the horizontal direction to a horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle, where the horizontal target position is the position at which the robot's head first faces the direction of the target object in the horizontal direction.
In a possible implementation of the first aspect, controlling the head of the robot to rotate in the vertical direction according to the vertical deflection angle includes:
acquiring, at the preset control period, a second reference rotation angle of the head of the robot in the vertical direction in each control period;
generating a second-period rotation instruction corresponding to the current control period according to the second reference rotation angle of the current control period, the second tracking error angle of the previous control period, and the vertical deflection angle; and
controlling the head of the robot to rotate in the vertical direction according to each second-period rotation instruction.
The rotation angle in the second-period rotation instruction is specifically:
motor2_pitch_diff = target_pitch - current_motor2_pitch
motor2_pitch = current_motor2_pitch + motor2_Kp × motor2_pitch_diff + motor2_Kd × (motor2_pitch_diff - motor2_pitch_last_diff)
where motor2_pitch is the rotation angle in the second-period rotation instruction; current_motor2_pitch is the second reference rotation angle of the current control period; target_pitch is the vertical deflection angle; motor2_pitch_diff is the second tracking error angle of the current control period; motor2_pitch_last_diff is the second tracking error angle of the previous control period; and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating component.
In a possible implementation of the first aspect, dynamically controlling the head of the robot to rotate in the horizontal direction to the horizontal target position during the rotation of the torso, according to the real-time torso angle fed back in real time and the horizontal deflection angle, includes:
acquiring, at the preset control period, a third reference rotation angle of the head of the robot in the horizontal direction in each control period;
generating a third-period rotation instruction corresponding to the current control period according to the third reference rotation angle of the current control period, the third tracking error angle of the previous control period, the real-time torso angle, and the horizontal deflection angle; and
dynamically controlling the head of the robot to rotate in the horizontal direction according to each third-period rotation instruction.
The rotation angle in the third-period rotation instruction is specifically:
motor1_yaw_diff = target_yaw - current_motor0_yaw - current_motor1_yaw
motor1_yaw = current_motor1_yaw + motor1_Kp × motor1_yaw_diff + motor1_Kd × (motor1_yaw_diff - motor1_yaw_last_diff)
where motor1_yaw is the rotation angle in the third-period rotation instruction; current_motor1_yaw is the third reference rotation angle of the current control period; target_yaw is the horizontal deflection angle; motor1_yaw_diff is the third tracking error angle of the current control period; motor1_yaw_last_diff is the third tracking error angle of the previous control period; motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating component; and current_motor0_yaw is the real-time torso angle.
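Under the same assumptions as the earlier sketch, the third-period instruction differs only in that the torso's real-time angle is subtracted first, so the head turns through whatever part of the horizontal deflection the torso has not yet covered:

```python
def head_yaw_period(current_head_yaw: float, torso_yaw: float, target_yaw: float,
                    last_diff: float, kp: float, kd: float) -> tuple[float, float]:
    """Head-yaw period update: the head tracks the residual error left by the torso."""
    diff = target_yaw - torso_yaw - current_head_yaw  # residual horizontal error
    command = current_head_yaw + kp * diff + kd * (diff - last_diff)
    return command, diff
```

Because torso_yaw is re-read every period, the head automatically turns back as the torso catches up, which is what keeps the head facing the target after the first alignment.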
In a possible implementation of the first aspect, acquiring the position information of the target object includes:
acquiring a scene image containing the target object through a camera module built into the robot;
marking boundary coordinates of the target object in the scene image, and determining the position information of the target object according to the boundary coordinates, where the position information of the target object includes object center coordinates of the target object; and
determining the target deflection angle according to the object center coordinates and image center coordinates of the scene image.
The target deflection angle is specifically:
target_yaw = arctan(CD / OC)
target_pitch = arctan(AC / OC)
where target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; OC is the focal length of the camera module; CD is the horizontal deviation between the object center coordinates and the image center coordinates; and AC is the vertical deviation between the object center coordinates and the image center coordinates.
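Assuming a pinhole camera model in which the focal length OC is expressed in pixels and the boundary coordinates come from a detector as an axis-aligned box, the deflection angles can be computed as in the sketch below; the box format and function name are illustrative assumptions:

```python
import math

def target_deflection(box: tuple[int, int, int, int],
                      image_size: tuple[int, int],
                      focal_px: float) -> tuple[float, float]:
    """Target deflection angles in radians from a bounding box (x1, y1, x2, y2).

    image_size: (width, height) of the scene image; its centre is taken as
    the optical centre. focal_px: focal length OC of the camera, in pixels.
    """
    x1, y1, x2, y2 = box
    obj_cx, obj_cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # object center coordinates
    cd = obj_cx - image_size[0] / 2.0                  # horizontal deviation CD
    ac = obj_cy - image_size[1] / 2.0                  # vertical deviation AC
    return math.atan2(cd, focal_px), math.atan2(ac, focal_px)

# Example: a target centred 100 px right of centre in a 640x480 image with a
# 500 px focal length yields a yaw of about 0.197 rad (11.3 degrees).
print(target_deflection((370, 190, 470, 290), (640, 480), 500.0))
```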
In a possible implementation of the first aspect, acquiring the scene image containing the target object through the camera module built into the robot includes:
determining an image correction parameter according to the offset between the camera module and the position of the simulated eye feature in the output picture of the display module; and
adjusting the raw image collected by the camera module based on the image correction parameter to generate the scene image.
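The correction itself is not spelled out here; one simple reading, assuming the camera sits at a fixed offset from the simulated eye position and that the correction reduces to a two-dimensional pixel translation, is sketched below with NumPy (the sign convention and the derivation of the parameters are assumptions):

```python
import numpy as np

def correct_scene_image(raw: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Translate the raw camera image by (dx, dy) pixels so its centre better
    matches the simulated eye position; exposed borders are zero-filled.

    dx, dy: image correction parameters derived from the camera/eye offset.
    """
    h, w = raw.shape[:2]
    corrected = np.zeros_like(raw)
    corrected[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        raw[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return corrected
```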
In a second aspect, an embodiment of the present application provides a robot control apparatus, including:
a position information acquiring unit, configured to acquire position information of a target object;
a rotation control unit, configured to control the robot to rotate according to the position information of the target object; and
a picture adjustment unit, configured to dynamically adjust, during the rotation of the robot, the output picture of a display module on the robot used to simulate an eye feature, so that the line of sight of the simulated eye feature in the output picture of the display module looks in the direction of the target object throughout the rotation of the robot.
In a third aspect, an embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the robot control method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a robot, including a processor, a display, and a transmission component for controlling the rotation of the robot, where the processor, when executing a computer program, implements the robot control method according to any one of the first aspect; the transmission component controls the robot to rotate according to control instructions output by the computer program; and the display dynamically adjusts the output picture used to simulate the eye feature according to output instructions of the computer program.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the robot control method according to any one of the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product, which, when run on a robot, causes the robot to execute the robot control method according to any one of the first aspect.
It can be understood that, for the beneficial effects of the second to fifth aspects above, reference may be made to the related description in the first aspect, and details are not repeated here.
According to the embodiments of the present application, the target deflection angle is determined according to the relative position between the target object and the robot, and the output picture of the display module and the rotating components are adjusted based on the target deflection angle to realize gaze tracking. Since the robot's eye movement is simulated through the output interface of the display module, no motor drive is needed to achieve deflection during movement, the response time is short, smooth gaze tracking is realized, and the degree of personification and the accuracy of gaze tracking are improved.
Description of the Drawings
FIG. 1 is a block diagram of part of the structure of a robot provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the software structure of the robot according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a robot provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a robot provided by an embodiment of the present application;
FIG. 5 is an implementation flowchart of the robot control method provided by the first embodiment of the present application;
FIG. 6 is a schematic diagram of the key center of a target object provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of obtaining a target deflection angle provided by an embodiment of the present application;
FIG. 8 is a specific implementation flowchart of steps S501 and S502 of a robot control method provided by another embodiment of the present application;
FIG. 9 is a schematic diagram of determining the center coordinates of an object provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of obtaining a target offset angle provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the principle of shooting offset provided by an embodiment of the present application;
FIG. 12 is a specific implementation flowchart of step S5011 of a robot control method provided by another embodiment of the present application;
FIG. 13 is a schematic diagram of a motor-driven multi-degree-of-freedom robot provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a robot with two degrees of freedom provided by an embodiment of the present application;
FIG. 15 is a specific implementation flowchart of step S503 of a robot control method provided by another embodiment of the present application;
FIG. 16 is a schematic diagram of the robot's eye line of sight being aligned with the target object for the first time, provided by an embodiment of the present application;
FIG. 17 is a specific implementation flowchart of step S5031 of a robot control method provided by another embodiment of the present application;
FIG. 18 is a schematic diagram of a robot whose rotating components include one degree of freedom, provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of a robot whose rotating components include two degrees of freedom, provided by an embodiment of the present application;
FIG. 20 is a specific implementation flowchart of step S502 of a robot control method provided by another embodiment of the present application;
FIG. 21 is a specific implementation flowchart of step S2001 of a robot control method provided by another embodiment of the present application;
FIG. 22 is a specific implementation flowchart of step S2002 of a robot control method provided by another embodiment of the present application;
FIG. 23 is a schematic diagram of a robot whose rotating components include three degrees of freedom, provided by an embodiment of the present application;
FIG. 24 is a specific implementation flowchart of step S2202 of a robot control method provided by another embodiment of the present application;
FIG. 25 is a specific implementation flowchart of step S2203 of a robot control method provided by another embodiment of the present application;
FIG. 26 is a structural block diagram of a robot control device provided by an embodiment of the present application;
FIG. 27 is a schematic diagram of a robot provided by another embodiment of the present application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
It should be understood that, when used in the specification and the appended claims of this application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of this application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims of this application, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and the appended claims of this application, the terms "first", "second", "third", and so on are only used to distinguish descriptions and cannot be understood as indicating or implying relative importance.
Reference to "one embodiment" or "some embodiments" in the specification of this application means that one or more embodiments of this application include a specific feature, structure, or characteristic described in connection with that embodiment. Therefore, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like, appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "including", "comprising", "having", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
Gaze tracking behavior means that the robot actively tracks a target object through coordinated actions of its torso, head, and eyes; it is the most important function in non-verbal human-robot interaction and an important step in human-computer interaction. For anthropomorphic robots, gaze tracking is particularly important: only when the system design enables the robot to track, gaze at, and make natural eye contact with the interaction object can deeper interaction be carried out. Gaze tracking behavior specifically includes two aspects: the robot's tracking of the target, and eye contact between the robot and the target object. In view of the problem that current robot control technology cannot achieve smooth gaze tracking, which reduces the robot's degree of personification and the accuracy of its gaze tracking behavior, the embodiments of the present application provide a robot control method and apparatus, a robot, and a storage medium, which determine a target deflection angle according to the relative position between the target object and the robot, and adjust the output picture of the display module and the rotating components based on the target deflection angle to realize gaze tracking. Since the robot's eye movement is simulated through the output interface of the display module, no motor drive is needed to achieve deflection during movement, the response time is short, smooth gaze tracking is realized, and the degree of personification and the accuracy of gaze tracking are improved.
The technical solutions of the present application are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
The robot control method provided in the embodiments of the present application can be applied to an intelligent robot capable of human-computer interaction, where the interaction includes but is not limited to target tracking, gaze tracking, intelligent question answering, intelligent navigation, music on demand, intelligent companionship, and other interactive operations. The intelligent robot can also automatically execute task operations associated with pre-configured instructions. For example, when the robot detects that a preset time node has been reached, it can give the user a voice prompt; or, when a preset trigger condition is met, it executes the response operation associated with that condition, for example turning on the indoor air-conditioning device when it detects that the current indoor temperature is greater than a preset temperature threshold. During the interaction between the robot and the user, in order to improve the robot's anthropomorphism, the robot's gaze can follow the user's movement; this process is the gaze tracking behavior.
In this embodiment, the robot may be a robot 100 having the hardware structure shown in FIG. 1. As shown in FIG. 1, the robot 100 may specifically include components such as a communication module 110, a memory 120, a camera module 130, a display unit 140, a sensor 150, an audio circuit 160, a rotating component 170, a processor 180, and a power supply 190. Those skilled in the art will understand that the structure of the robot 100 shown in FIG. 1 does not constitute a limitation on the robot; the robot may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The components of the robot are described in detail below with reference to FIG. 1:
The communication module 110 can be used to establish communication connections with other devices, so as to receive control instructions, firmware update packages, and the like sent by other devices, and to send the robot's operation records to other devices. Optionally, the robot can also establish communication connections through the communication module 110 with other robots in its scene, so as to operate cooperatively with them. In particular, the communication module 110 may pass downlink information received from other devices to the processor 180 for processing, and send uplink data to the devices connected to it. Generally, the communication module 110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, a wireless communication module, a Bluetooth communication module, and the like. In addition, the communication module 110 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 120 can be used to store software programs and modules. The processor 180 executes the robot's various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a gaze tracking application, an intelligent companionship application, or an intelligent education application), and the data storage area may store data created according to the use of the robot (such as image data collected by the camera module 130 and rotation angles fed back by the rotating component 170). In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The camera module 130 can be used to collect images of the environment in which the robot is located. The shooting direction of the camera module 130 can be consistent with the direction the robot's front faces, so that the robot can simulate "seeing with human eyes" the environment it is in. Optionally, the robot has multiple built-in camera modules 130, with different camera modules 130 used to collect environment images in different directions; alternatively, the robot has one built-in camera module 130, which can move along a preset trajectory or rotate around an axis to obtain environment images at different angles and in different directions. The camera module 130 can store the collected images in the memory 120, or transmit the collected images directly to the processor 180.
The display unit 140 can be used to display information input by the user or provided to the user, as well as the robot's various menus. The display unit 140 may include a display panel 141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, a touch panel may cover the display panel 141; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 180 to determine the type of touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of touch event. In particular, the head area of the robot contains a display module for simulating the two eyes. The output picture of this display module is specifically a picture simulating eyes, which can be a picture simulating a real person's eyeball generated from video of a real person's eye region moving in various directions, or an eye picture constructed by animation, cartoon, three-dimensional model, or other means. The output picture of the simulated eye may contain display objects such as simulated pupils, eyeballs, eyelids, eyebrows, and eyelashes, which together form the simulated eye picture. It should be noted that the robot's head may contain one display module used to output a picture containing both eyes; the robot's head may also contain two display modules, a left-eye display module for outputting a picture simulating the left eye and a right-eye display module for outputting a picture simulating the right eye.
The robot 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display unit 140 according to the brightness of the ambient light, and the proximity sensor is used to determine the distance between the robot and the user and, when the distance is less than a preset distance threshold, to control the robot through its transmission components to move away from the user so as to avoid a collision. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in various directions (generally three axes) and, when stationary, the magnitude and direction of gravity, and can be used for applications that recognize the robot's posture, vibration-recognition functions, and the like. Other sensors with which the robot may also be equipped, such as a gyroscope, a thermometer, or an infrared sensor, are not described in detail here.
The audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the robot. The audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. The audio data is then processed by the processor 180 and either sent via the communication module 110 to, for example, another robot, or output to the memory 120 for further processing. Optionally, while the user interacts with the robot, voice instructions initiated by the user can be collected through the audio circuit 160 and responded to accordingly; an intelligent response voice signal can also be output through the audio circuit 160 during human-computer interaction, realizing a simulated interaction process with the user.
While the robot performs movement, target tracking, and gaze tracking, the rotating component 170 can control the robot to execute the corresponding motion. The rotating component 170 can be driven by a motor, which provides the driving force to rotate the rotating component 170, thereby realizing motion operations such as robot movement, posture adjustment, and orientation adjustment. In one possible implementation, in order to improve the anthropomorphism of the gaze tracking behavior, multiple rotating components 170 can be installed in the robot's head, torso, and other parts; each rotating component can control the robot to rotate in one direction, i.e. corresponds to one degree of freedom, so installing multiple rotating components 170 in the robot enables gaze tracking with multiple degrees of freedom. The rotating component 170 can also feed back its real-time rotation angle to the processor 180, and the processor 180 can control the robot's motion according to the fed-back real-time rotation angle, thereby realizing smooth gaze tracking of the target object.
The processor 180 is the control center of the robot. It uses various interfaces and lines to connect the parts of the entire robot, and performs the robot's various functions and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the robot as a whole. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor, which mainly handles the operating system, the user interface, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The robot 100 also includes a power supply 190 (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
The software system of the robot 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present application take an Android system with a layered architecture as an example to illustrate the software structure of the robot 100.
FIG. 2 is a block diagram of the software structure of the robot 100 according to an embodiment of the present application. The Android system is divided into four layers: the application layer, the application framework layer (FWK), the system layer, and the hardware abstraction layer, with the layers communicating through software interfaces.
The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer can include a series of application packages.
As shown in FIG. 2, the application packages may include applications such as camera, intelligent question answering, intelligent companionship, multimedia on demand, and intelligent learning and education.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, and includes some predefined functions.
As shown in FIG. 2, the application framework layer can include a window manager, content providers, a view system, a resource manager, a notification manager, and so on.
The window manager is used to manage window programs. It can obtain the size of the display module, determine the current display state of the display module, and so on.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, rotation angles fed back by the rotating component, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures; in particular, the visual controls include controls for simulating eyeballs.
The resource manager provides various resources for applications, such as localized strings, icons, pictures, graphics, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction; for example, it is used to notify of download completion or to give message reminders. The notification manager can also present notifications in the system's top status bar in the form of a chart or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen in the form of a dialog window; for example, text prompts in the status bar, prompt sounds, vibration of the electronic device, or flashing indicator lights.
The Android Runtime includes the core libraries and the virtual machine, and is responsible for the scheduling and management of the Android system.
The core libraries consist of two parts: one part is the function library that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries can include multiple functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem, and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support recording and playback of a variety of commonly used audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. Specifically, the output picture used to simulate the eyeballs can be rendered and composited through the three-dimensional graphics processing library.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. In some embodiments, the kernel layer further contains a PCIE driver.
In the embodiments of the present application, the execution subject of the process is a robot. As an example and not a limitation, the robot contains a display module and rotating components: the display module outputs an output picture used to simulate eyeballs, and the rotating components control the movement of the robot. As an example and not a limitation, Fig. 3 shows a schematic diagram of a robot provided by an embodiment of the present application. As shown in Fig. 3, the robot provided by the present application may be a robot with a simulated human form (robot a in Fig. 3), or a non-humanoid robot, such as a robot with a simulated animal form (robot b in Fig. 3) or a robot with a non-biological form (robot c in Fig. 3). The robot is any device with motion capabilities, where motion includes movement, rotation, and so on.
As an example and not a limitation, Fig. 4 shows a schematic structural diagram of a robot provided by an embodiment of the present application. As shown in Fig. 4, the display module of the robot is installed on the robot's head and includes a left-eye display module for simulating the left eye and a right-eye display module for simulating the right eye; the output pictures of the two display modules simulate eyeball movement to realize gaze tracking. The rotating components of the robot include a torso rotating component for controlling the left-right rotation of the robot's torso, a first head rotating component for controlling the left-right rotation of the robot's head, and a second head rotating component for controlling the up-down rotation of the robot's head. The robot realizes smooth gaze tracking behavior by controlling the output picture of the display module and the rotation of the rotating components.
Fig. 5 shows an implementation flowchart of the robot control method provided by the first embodiment of the present application, detailed as follows:
In S501, the position information of the target object is acquired.
In S502, the robot is controlled to rotate according to the position information of the target object.
In this embodiment, the robot can determine the target object through automatic recognition or manual setting, and determine the position information of the target object. The position information may specifically be the relative position between the robot and the target object. For example, the relative position may be a relative direction, such as left, right, front, or rear; it may also be a specific angle, such as +60° or -120°, where the angle may specifically be a signed (vector) angle whose sign represents the corresponding direction, for example, the left side may be defined as the positive direction and the right side as the negative direction. The robot can control its rotation according to the above position information.
In a possible implementation, S502 may specifically be: determining, according to the position information of the target object, the target deflection angle by which the robot must rotate from its initial angle to face the target object, and controlling the robot to rotate according to the target deflection angle.
In this embodiment, the target deflection angle is determined according to the initial angle of the robot itself and the pose of the target object. The initial angle of the robot is specifically the direction angle corresponding to the front of the robot; if the robot includes a head, it is specifically the direction corresponding to the front of the robot's head, where the front of the head refers to the face containing the simulated eyeballs. To achieve gaze tracking, the target deflection angle is specifically used to turn the robot's current line of sight toward the direction of the target object; if the target object is a physical person, the target deflection angle is specifically used to align the robot's current line of sight with the key center of the target object.
As an example and not a limitation, Fig. 6 shows a schematic diagram of the key center of a target object provided by an embodiment of the present application. As shown in Fig. 6, the key center of the target object may specifically be the geometric center of the target object or the geometric center of the target object's head; if the target object is a physical person, the key center may also be the area between the target object's two eyes.
Illustratively, Fig. 7 shows a schematic diagram of acquiring the target deflection angle provided by an embodiment of the present application. As shown in Fig. 7, the target object is a physical person. Since a physical person occupies a certain spatial volume, any point within that space belongs to the target object; for gaze tracking, what matters most is that the robot's line of sight and the target object's line of sight meet, that is, lie in the same plane. Therefore, the robot needs to determine its own current line-of-sight direction and the line-of-sight direction of the target object, and determine the deflection angle based on the two directions.
In a possible implementation, the robot can automatically recognize the target object in the following three ways:
Method 1: Determine the target object based on a distance sensor, implemented as follows. The robot may be equipped with a distance sensor containing multiple distance detection units; a depth map of the scene in which the robot is located is constructed from the distance values collected by the multiple distance detection units, the scene objects contained in the current scene are determined based on the depth map, and the scene object with the smallest distance is selected as the target object. Optionally, if the target object of the gaze tracking behavior is a physical person, the contour line of each scene object can be determined from the constructed depth map, the contour lines can be matched against the standard contour line of a physical person, and the target object can be determined from the scene objects based on the matching result. Preferably, the robot also contains an infrared thermal imaging module that can collect the temperature values of the outer surfaces of the scene objects, so that a depth map containing temperature information can be constructed; from the contour information and temperature values in the depth map, the object types of the scene objects in the robot's scene can be determined, and a scene object whose object type is human is selected as the target object. If there are multiple scene objects whose object type is human, the closest one may be selected as the target object.
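As an example and not a limitation, the following Python sketch illustrates the nearest-object selection of method 1. It is a minimal sketch rather than the embodiment's implementation: the segmentation of the depth map into per-object masks is reduced to a hypothetical `segment_objects` helper, and the human-contour and infrared-temperature checks described above are omitted.

```python
import numpy as np

def segment_objects(depth_map: np.ndarray) -> list:
    """Hypothetical helper: return one boolean mask per scene object.
    A real system would segment the depth map by contours or clustering."""
    raise NotImplementedError

def pick_nearest_object(depth_map: np.ndarray):
    """Select the scene object whose closest point lies nearest to the robot."""
    masks = segment_objects(depth_map)
    if not masks:
        return None
    # The distance of a candidate object is the minimum depth inside its mask.
    return min(masks, key=lambda mask: float(depth_map[mask].min()))
```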
Method 2: Determine the target object based on a captured image, implemented as follows. The robot can obtain a scene image of the current scene through its camera module. The contour information contained in the scene image is extracted by a contour recognition algorithm, and the scene image is divided based on the contour information to obtain multiple subject areas; subject type recognition is performed on each subject area, and the target object is determined based on the subject type. For example, if the target object of the robot's gaze tracking behavior is a physical person, a subject of the physical-person type can be selected from the subject types as the target object; if the scene contains multiple subjects of the physical-person type, the subject occupying the largest subject area can be selected as the target object. Optionally, if the target object is a physical person, after acquiring the scene image the robot can locate the face area through a face recognition algorithm and determine the target object based on the face area.
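As an example and not a limitation, when the target object is a physical person, the largest-subject rule of method 2 can be sketched with an off-the-shelf face detector. The snippet below uses OpenCV's Haar cascade purely for illustration; the embodiment does not specify which face recognition algorithm is used.

```python
import cv2

def pick_largest_face(scene_bgr):
    """Return (left, top, width, height) of the largest detected face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # The subject occupying the largest area is taken as the target object.
    return max(faces, key=lambda box: box[2] * box[3])
```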
Method 3: Determine the target object based on a voice signal, implemented as follows. The robot can be equipped with a microphone module through which the sound signal in the current scene is obtained. If it is detected that the signal strength of the sound signal in the current scene is greater than a preset decibel threshold, voice analysis is performed on the sound signal, and the target object is determined based on the analysis result. Specifically, the voice analysis may convert the sound signal into text information; the robot may be configured with an activation password, and if the text information matches the activation password, it is recognized that the user wants to interact with the robot. In this case, the direction from which the sound signal originates can be obtained, and the object in that direction in the current scene is identified as the target object. The voice analysis may also extract the voiceprint feature parameters of the sound signal, match them against the pre-stored standard biometric parameters of each registered user, determine the registered user corresponding to the sound signal based on the matching result, and identify that registered user as the target user. In this case, the robot can obtain an image of the environment in the current scene through the camera module, and determine the position of the target user based on the pre-stored standard face image of the target object and the environment image.
In a possible implementation, the robot may determine the target object in a user-specified manner, which may include the following two ways:
Method 1: Determine the target object according to the user's selection instruction, implemented as follows. The robot can display the candidate objects in its scene on an interactive interface (for example, a display module configured on the robot's body outputs a scene image captured in the current scene, face recognition is performed on the scene image, and the recognized faces are used as candidate objects). The user can send a selection instruction to the robot through interactive means such as touch or key press, selecting one of the multiple candidate objects as the target object.
Method 2: Determine the target object according to a positioning device, implemented as follows. The user who is to be followed by the robot's gaze can wear a positioning device, which may be a wearable device equipped with a positioning module, such as a smart watch, smart necklace, smartphone, or smart glasses. The positioning module can send positioning information to the robot at a preset feedback period, and the robot can identify the target object according to the positioning information.
In a possible implementation, the robot can trigger the gaze-following behavior in the following three ways:
Method 1: The robot can trigger the gaze-following behavior when it detects a change in the pose of the target object. The robot can determine the position of the target object in the manner described above and determine the posture of the target object according to its contour information; if it detects that the position of the target object has moved and/or its posture has changed, it determines that the pose of the target object has changed, triggers the gaze tracking behavior, and performs the operations of S501 and S502.
Method 2: The robot can trigger the gaze-following behavior when it detects a change in its own pose. The robot may have built-in motion sensors, including but not limited to a gyroscope, a vibration sensor, an acceleration sensor, and a gravity sensor. The robot can obtain the sensing values fed back by the motion sensors and determine whether its pose has changed; for example, if the gyroscope reading changes or the value of the acceleration sensor is not 0, the pose of the robot may have changed, in which case the gaze tracking behavior is triggered and the operations of S501 and S502 are performed.
Method 3: The robot can trigger the gaze tracking behavior when it detects that the target object has changed. A change of the target object includes a change from no target object to having a target object, and also includes a change from object A to object B. The object to be changed to may be determined according to the user's selection instruction, or the change may be made through automatic recognition by the robot (for example, the target object appears in the robot's captured picture; or object B approaches the robot so that the recognized distance between object B and the robot becomes smaller than the distance between object A and the robot, whereupon the robot switches the target object from object A to object B).
Fig. 8 shows a specific implementation flowchart of S501 and S502 of a robot control method provided by another embodiment of the present application. Referring to Fig. 8, compared with the embodiment described in Fig. 5, in the robot control method provided by this embodiment, S501 includes S5011 to S5012 and S502 includes S5021, detailed as follows:
In S5011, a scene image containing the target object is acquired through the robot's built-in camera module.
In this embodiment, the robot has a built-in camera module, through which the scene image of the current scene is collected.
In a possible implementation, the camera module can capture video of the current scene, and the robot can use the most recently captured video frame as the scene image and recognize the target object from it.
In a possible implementation, the robot can send a shooting instruction to the camera module; after receiving the shooting instruction, the camera module acquires the image corresponding to the moment the instruction is received, and the currently captured image is taken as the scene image.
In this embodiment, the robot can be configured with a target object recognition algorithm, through which it analyzes the image data fed back by the camera module and determines whether the image data contains the target object. If it does, the image data is identified as a scene image containing the target object, and the operation of S5012 is performed; otherwise, if the image data does not contain the target object, gaze tracking is not required.
In a possible implementation, if the target object is a person, the target object recognition algorithm may be a face recognition algorithm. In this case, the robot can determine, through the face recognition algorithm, whether the scene image fed back by the camera module contains a face; if so, the operation of S5012 is performed. Otherwise, if the scene image does not contain a face, it is recognized that the current scene contains no target object for gaze tracking, and the robot enters a standby state or maintains its original pose.
In S5012, the boundary coordinates of the target object are marked in the scene image, and the position information of the target object is determined according to the boundary coordinates; the position information of the target object includes the object center coordinates of the target object.
In this embodiment, the robot recognizes the image of the area where the target object is located among the scene objects, and determines the boundary coordinates from that area image.
In a possible implementation, the robot is configured with a contour recognition algorithm; the scene image can be imported into the contour recognition algorithm to extract the contour lines contained in the scene image, and at least one pixel point is selected from the contour lines as the boundary coordinates.
In a possible implementation, the robot can select, as the boundary coordinates, the boundary point of the target object closest to the origin of the scene image (for example, the scene image takes the upper-left corner of the image as the origin), and determine the object center coordinates according to the area occupied by the target object in the scene image and the boundary coordinates. An image coordinate system is constructed with the horizontal and vertical directions of the scene image as the coordinate axes; according to the relative position between the boundary point and the origin of the scene image, the boundary coordinates can be expressed as (left, top), where left is the horizontal distance from the origin and top is the vertical distance from the origin. If the target object occupies a rectangle, the corresponding area can be expressed as (width, height), where width is the width of the rectangular area and height is its height. Based on these parameters, the position information of the target object can be determined, including the object center coordinates of the target object within the scene image.
Illustratively, Fig. 9 shows a schematic diagram of determining the object center coordinates provided by an embodiment of the present application. As shown in Fig. 9, the target object T can be expressed as [left, top, width, height]. From these four parameters, the object center coordinates of the target object can be determined as A = [left + width/2, top + height/2].
In a possible implementation, if the target object is a face, the boundary coordinates of the target object may be the coordinates of the target object's eyes; the midpoint of the line segment formed by the target object's two eyes is taken as the object center, and the coordinates of that midpoint are taken as the object center coordinates.
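As an example and not a limitation, the object-center computation of Fig. 9 and its face variant translate directly into code; the helper names below are illustrative.

```python
def object_center(left, top, width, height):
    """Center A of a target expressed as [left, top, width, height] (Fig. 9)."""
    return (left + width / 2, top + height / 2)

def eye_midpoint(left_eye, right_eye):
    """Face variant: midpoint of the segment joining the two eye coordinates."""
    return ((left_eye[0] + right_eye[0]) / 2,
            (left_eye[1] + right_eye[1]) / 2)

# Example: a 100 x 60 box with top-left corner (40, 20) has center (90, 50).
assert object_center(40, 20, 100, 60) == (90, 50)
```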
In S5021, the target deflection angle is determined according to the object center coordinates and the image center coordinates of the scene image:
target_yaw = arctan(CD / OC)
target_pitch = arctan(AC / OC)
Here, target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; OC is the focal length of the camera module; CD is the horizontal deviation between the object center coordinates and the image center coordinates; and AC is the vertical deviation between the object center coordinates and the image center coordinates.
In this embodiment, the shooting angle of the camera module is consistent with the direction of the robot's eye line of sight. Therefore, to align the eye line of sight with the target object, the object center coordinates of the target object need to be moved to the center coordinates of the scene image. An offset vector can thus be generated according to the object center coordinates and the image center coordinates of the scene image, and the target deflection angle can be determined according to the offset vector and the initial angle of the robot's eye line of sight.
Illustratively, Fig. 10 shows a schematic diagram of acquiring the target deflection angle provided by an embodiment of the present application. As shown in Fig. 10, the object center of the target object is point A, the focal point of the scene image is point C, and the center point between the robot's two eyes is point O. The robot's eye line of sight is thus the straight line OD, the focal length of the camera module is OC, and the distance by which the image center coordinates are offset from the object center coordinates is AC. From these coordinate points, the robot's deflection angle in the horizontal direction, target_yaw, and its deflection angle in the vertical direction, target_pitch, can be determined.
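As an example and not a limitation, under the geometry of Fig. 10 the deflection angles follow from the arctangent of the pixel deviations over the focal length. The sketch below assumes that the focal length OC and the image coordinates are expressed in the same pixel units; it is a reconstruction from the definitions above, not verbatim code from the embodiment.

```python
import math

def target_deflection(obj_center, img_center, focal_px):
    """Angles (in radians) that bring the object center onto the image center.
    obj_center, img_center: (x, y) pixel coordinates; focal_px: focal length OC."""
    cd = obj_center[0] - img_center[0]   # horizontal deviation CD
    ac = obj_center[1] - img_center[1]   # vertical deviation AC
    target_yaw = math.atan2(cd, focal_px)
    target_pitch = math.atan2(ac, focal_px)
    return target_yaw, target_pitch
```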
In the embodiments of the present application, a scene image containing the target object is acquired through the camera module, and the target deflection angle is calculated according to the deviation between the center coordinates of the target object in the scene image and the coordinates of the eye line of sight in the scene image. The three-dimensional deflection is thus calculated from a two-dimensional image; the calculation method is simple and the amount of computation is small, which can reduce the computational load on the robot.
Further, if there is a certain offset between the camera module and the eye line of sight, the robot can calibrate the scene image based on this offset when acquiring it, in order to improve the accuracy of the target deflection angle. Illustratively, Fig. 11 shows a schematic diagram of the principle of the shooting offset provided by an embodiment of the present application. As shown in Fig. 11, the robot contains a camera module located above the display module, with a certain offset from the eyeball in the display module's output picture. Because the camera module is not located at the position of the simulated eye feature, the position of the target object in the image that the eyeball would observe differs by a certain offset from the position of the target object captured by the camera; during subsequent gaze following, this would cause the robot's eye line of sight to deviate from the center of the target object, so the images collected by the camera module need to be calibrated.
In this case, S5011 in the previous embodiment may include a calibration operation. Fig. 12 shows a specific implementation flowchart of S5011 of a robot control method provided by another embodiment of the present application. Referring to Fig. 12, compared with the embodiment described in Fig. 8, S5011 in the robot control method provided by this embodiment includes S1201 to S1202, detailed as follows:
In S1201, an image correction parameter is determined according to the offset between the camera module and the position of the simulated eye feature in the output picture of the display module.
In this embodiment, the robot can determine the position of the eyeball relative to the robot body according to the display position of the eyeball in the output picture, and determine the above offset according to the position of the camera module on the robot body and the determined position of the eyeball relative to the robot body. The robot can be configured with a conversion algorithm between the offset and the correction amount; the offset is fed into the conversion algorithm to calculate the image correction parameter.
In S1202, the original image collected by the camera module is adjusted based on the image correction parameter to generate the scene image.
In this embodiment, after the robot receives the original image collected by the camera module, it can calibrate the original image using the image correction parameter; for example, correction operations such as rotation, horizontal stretching, vertical stretching, and translation of the target object can be performed on the original image according to the correction parameter. The corrected original image is taken as the scene image, and the subsequent operation of determining the target deflection angle is performed.
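As an example and not a limitation, one minimal way to apply such a correction is an affine warp of the raw frame. The sketch below assumes the offset has already been converted into a 2 x 3 affine matrix; the conversion algorithm itself is device-specific and is not specified by the embodiment.

```python
import cv2
import numpy as np

def calibrate_frame(raw_bgr, m_affine):
    """Warp the raw camera frame toward the simulated eye's viewpoint.
    m_affine: 2 x 3 matrix encoding the rotation/stretch/translation correction."""
    h, w = raw_bgr.shape[:2]
    return cv2.warpAffine(raw_bgr, m_affine, (w, h))

# Illustrative correction for a camera mounted 12 pixels above the simulated eyes.
M = np.float32([[1, 0, 0],
                [0, 1, 12]])
```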
In the embodiments of the present application, the original image captured by the camera module is corrected according to the offset between the camera module and the eye line of sight, which can improve the accuracy of the subsequent target deflection angle and eliminate the line-of-sight deviation caused by the offset between the robot's modules, thereby improving the accuracy of the gaze tracking behavior and the robot's degree of anthropomorphism.
In S503, during the rotation of the robot, the output picture of the display module used to simulate the eye features on the robot is dynamically adjusted, and the built-in rotating components of the robot are controlled to rotate, so that the eye line of sight of the simulated eye features in the output picture of the display module looks in the direction of the target object throughout the robot's rotation.
In this embodiment, after the robot has determined the relative position between the pose its front faces and the position the target object's front faces, that is, the position information of the target object, it can control its built-in rotating components to rotate while simulating the movement of the eye features through the output picture of the display module, so that the simulated eye features in the output picture of the robot's display module, such as simulated eyeballs or simulated eyes, are aimed at the target object.
Compared with mechanical movement, outputting a picture that simulates the movement of the eye features through the display module makes the movement of the eye features smoother. Since the output picture of the display module is not constrained by machine movement, the position of the simulated eye features can be changed simply by refreshing the picture content, which eliminates the startup time required by mechanical driving methods such as motor drives and improves the response rate, making the response speed of the eyeball movement simulated by the display module closer to that of human eye movement and improving the robot's degree of anthropomorphism.
In a possible implementation, dynamically adjusting the output picture of the display module used to simulate the eye features on the robot during its rotation is specifically: during the rotation of the robot, dynamically adjusting the output picture of the display module according to the target deflection angle, where the target deflection angle is the deflection angle by which the robot rotates from its initial angle to face the target object.
In a possible implementation, the way of adjusting the robot's eye line of sight to aim at the target object may specifically be: determining, according to the target deflection angle, the offset of the simulated eye features in the output picture and the rotation angle of the robot's rotating components, and controlling the built-in rotating components to rotate while adjusting the position of the simulated eye features in the output picture.
Since the robot's display module is attached to the robot's surface, the movement of the position of the simulated eye features must take the movement of the robot body into account. In this case, the rotation angle and the offset of the simulated eyeball can be determined as follows: the robot can take the target deflection angle as the rotation angle of the rotating components, and calculate the offset of the eyeball according to the following relations:
eye_angle = target_angle - motor_angle
motor_angle = motor_rate × time
time = eye_angle / eye_rate
Here, eye_angle is the offset of the eyeball; target_angle is the target deflection angle; motor_angle is the rotation angle of the rotating component during the eyeball movement; motor_rate is the rotation speed of the rotating component; time is the time required for the eyeball to move through the above offset; and eye_rate is the eyeball movement speed.
In a possible implementation, because the starting acceleration of the rotating component differs for different target deflection angles, the speed of the rotating component is not a constant value. In this case, the robot can collect, through big-data analysis, the historical speeds of the rotating component under different target deflection angles, thereby constructing the correspondence between the target deflection angle and the rotation speed. The robot can then determine the rotation speed of the rotating component according to the target deflection angle obtained this time, and import it into the above relations to calculate the eyeball offset.
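As an example and not a limitation, substituting the three relations above into one another gives a closed form, eye_angle = target_angle × eye_rate / (eye_rate + motor_rate). The sketch below implements that derivation; the speed lookup built from historical data is reduced to a plain dictionary for illustration.

```python
def eyeball_offset(target_angle, motor_rate, eye_rate):
    """Solve eye_angle = target_angle - motor_rate * (eye_angle / eye_rate)."""
    return target_angle * eye_rate / (eye_rate + motor_rate)

# Hypothetical lookup: historical motor speed (deg/s) per target deflection (deg).
MOTOR_RATE_BY_ANGLE = {30.0: 40.0, 60.0: 55.0, 90.0: 60.0}

target = 60.0
eye = eyeball_offset(target, MOTOR_RATE_BY_ANGLE[target], eye_rate=120.0)
# eye is roughly 41.1 deg; the remaining roughly 18.9 deg is covered by the motor.
```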
As can be seen from the above, the robot control method provided by the embodiments of the present application determines the target deflection angle according to the relative position between the target object and the robot, and adjusts the output picture of the display module and the rotating components based on the target deflection angle to realize gaze following. Since the robot's eyeball movement is simulated through the output interface of the display module, no motor drive is needed to realize the deflection during the movement and the response time is short, achieving smooth gaze following and improving the degree of anthropomorphism and the accuracy of the gaze following.
In existing robot control technology, gaze tracking is mainly realized in the following two ways:
First, gaze tracking is realized through visual fusion, a motion dual-modality, and control algorithms. Since the speed of the visual shift differs from the speed of body movements such as the head and torso, the fitting calculation for the motion dual-modality must account for the speed deviations between different components, which greatly increases the computation required for gaze tracking; moreover, as the number of degrees of freedom the robot uses during gaze tracking increases, the computation grows geometrically, further increasing the robot's computational load. Fig. 13 shows a schematic diagram of a motor-driven multi-degree-of-freedom robot provided by an embodiment of the present application. As shown in Fig. 13, during gaze tracking the robot's head can rotate in three directions, namely left-right, up-down, and in-out, and the head has built-in motor-driven eye components that can rotate with two degrees of freedom, namely up-down and left-right. The robot's head is also equipped with an inertial measurement unit (IMU) for motion control compensation; the gaze tracking control algorithm can use forward and inverse kinematics solving together with IMU compensation, fused with a P (proportional) I (integral) D (derivative) control algorithm. Because the motors in this robot have too many degrees of freedom, IMU-fed-back angles are required for motion compensation to ensure system stability; for eye movement in particular, complex forward and inverse kinematics are required to solve for the target position, and gaze tracking cannot simultaneously achieve accuracy, real-time performance, and smoothness, which reduces the degree of anthropomorphism. In addition, because of the multi-motor drive, the startup time is long, which reduces the sensitivity of gaze tracking and gives a strong mechanical feel, further degrading the gaze effect.
Second, a low-degree-of-freedom gaze-following method is adopted, for example with mainly 2 degrees of freedom (up-down and left-right), realizing gaze tracking of the target mainly through head rotation. Fig. 14 shows a schematic diagram of a robot with 2 degrees of freedom provided by an embodiment of the present application. As shown in Fig. 14, the eye components of this robot are fixedly installed on the head and cannot move, while the head is equipped with two rotating components that control the rotation of the robot's head in the left-right and up-down directions respectively. During gaze tracking, alignment with the target object relies mainly on the deflection of the head. Because of its low degrees of freedom and immobile eyes, this method achieves a low degree of anthropomorphism and a poor user experience.
Compared with existing robot control technology, the embodiments of the present application use the output picture of the display module to simulate eye movement. Compared with realizing eye movement through motor-driven rotating components, in addition to a faster response speed, the computation required by the motor drive algorithm is reduced and the requirements on forward and inverse kinematics solving are lowered, which improves the fluency of the gaze tracking behavior and reduces the robot's computational load, achieving smooth and stable gaze tracking and effectively improving the robot's motion capabilities and degree of anthropomorphism. At the same time, simulating eye movement through the display module only requires installing a display module on the robot's head, without configuring motor components, rotating components, or physical eye components for the eyes, thereby reducing the manufacturing cost of the robot.
Fig. 15 shows a specific implementation flowchart of dynamically adjusting, during the rotation of the robot, the output picture of the display module used to simulate the eye features in S503 of a robot control method provided by another embodiment of the present application. Referring to Fig. 15, compared with the embodiment described in Fig. 5, in the robot control method provided by this embodiment, dynamically adjusting the output picture of the display module according to the target deflection angle during the robot's rotation includes S5031 to S5033, detailed as follows:
In S5031, in the process of controlling the rotation of the robot, the position of the simulated eye features in the output picture is dynamically adjusted to the eye-feature target position according to the real-time rotation angle fed back by the robot in real time and the target deflection angle; the eye-feature target position is the position at which the eye line of sight simulated by the output picture first looks in the direction of the target object.
In this embodiment, the robot's gaze tracking process can be divided into two aspects: one is aligning the robot's front with the front of the target object, and the other is aligning the direction of the robot's eye line of sight with the key center of the target object. These two aspects can be accomplished by different components of the robot. Aligning the robot's front with the front of the target object can be achieved by controlling the robot's rotation to adjust its pose; aligning the eye line of sight is accomplished through the operation of S5031, that is, by dynamically adjusting the position of the simulated eye features in the output picture of the display module so that the robot's eye line of sight is aimed at the key center of the target object.
It should be noted that controlling the rotation of the robot and adjusting the output picture are performed simultaneously: while the robot controls its rotating components to rotate, it can dynamically refresh the position of the simulated eye features in the output picture of the display module. The position of the simulated eye features in the output picture does not jump directly to the target position; rather, based on a preset eyeball rotation speed, it rotates from its initial position before the adjustment to the eye-feature target position at that speed, so moving from the initial position to the target position requires a certain rotation time. For example, if the angular difference between the eye-feature target position and the initial position of the eyeball is 30° and the robot's preset eyeball rotation speed is 120°/s, the time required for the eyeball to rotate from its initial position to the eye-feature target position is 30/120 s = 0.25 s; that is, it takes 0.25 s for the eye line of sight to aim at the target object for the first time.
In this embodiment, since the robot's display module is on the surface of the robot's body, the eyeball movement in the display module's output picture is coupled with the movement of the robot's body: the rotation angle of the eyeball relative to the target object comprises the rotation angle of the eyeball relative to the robot (that is, the angle by which the robot rotates the eyeball) and the rotation of the robot body itself (that is, the rotation performed by the robot through the rotating components). Therefore, the rotation angle of the eyeball relative to the target object equals the target deflection angle, but the angle by which the eyeball actually rotates relative to the robot must be adjusted according to the actual angle of the rotating components. Based on this, during the rotation of the robot, the rotating components feed back their real-time rotation angle to the robot in real time; the robot can superimpose the current rotation angle of its eyeball and the real-time rotation angle of the rotating components, and determine whether the target deflection angle has been reached, thereby determining whether the robot's eye line of sight is aimed at the target object. If the sum of the current rotation angle of the eyeball and the real-time rotation angle of the rotating components is smaller than the target deflection angle, the robot's eye line of sight is not yet aimed at the target object, and the robot continues to control the eyeball and the rotating components to rotate; conversely, if the sum equals the target deflection angle, it is recognized that the robot's eye line of sight is aimed at the target object, and the operation of S5032 is performed.
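As an example and not a limitation, the superposition check can be sketched as a simple one-dimensional control loop. `read_motor_angle` and `set_eye_angle` are hypothetical interfaces standing in for the joint feedback and the display refresh, and all angles are assumed to share one sign convention.

```python
import time

def read_motor_angle():
    """Hypothetical: accumulated rotation fed back by the rotating component."""
    raise NotImplementedError

def set_eye_angle(angle):
    """Hypothetical: repaint the simulated eyeball at the given offset."""
    raise NotImplementedError

def track_until_aligned(target_angle, eye_rate, dt=0.03):
    """Rotate the displayed eyeball until eye_angle + motor_angle reaches target_angle."""
    eye_angle = 0.0
    while True:
        motor_angle = read_motor_angle()
        if eye_angle + motor_angle >= target_angle:
            return eye_angle          # line of sight first aims at the target
        # Advance the eyeball at its preset speed without overshooting the target.
        eye_angle = min(eye_angle + eye_rate * dt, target_angle - motor_angle)
        set_eye_angle(eye_angle)
        time.sleep(dt)
```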
It should be noted that the real-time rotation angle fed back by the rotating components is the accumulated rotation angle over the entire rotation process from the start of the rotation to the moment of feedback. The rotating components can feed back the real-time rotation angle to the robot at a preset time interval; for example, the interval can be 30 μs or 30 ms, and when the interval is short, the feedback approximates real time. The determination of the rotation angle can be accomplished by invoking the robot's built-in rotation angle control module, which can be a control module such as a P (proportional) D (derivative) controller, a P (proportional) I (integral) controller, or a P (proportional) I (integral) D (derivative) controller; the angle calculation process of the rotation angle control module can be based on the proportional-integral-derivative principle, or on principles such as predictive control or sliding-mode control, which is not limited here.
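As an example and not a limitation, a textbook positional PID step is sketched below as one possible form of the rotation angle control module; the gains and the 30 ms step are illustrative values, not values from the embodiment.

```python
class PID:
    """Minimal positional PID controller for a joint angle command."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.05)
command = pid.step(setpoint=45.0, measured=30.0, dt=0.03)  # drive toward 45 deg
```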
在本实施例中,由于机器人在控制旋转部件旋转时,需要启动马达驱动,并通过马达驱动牵引旋转部件进行旋转,从而实现机器人进行机械旋转,而马达驱动启动、马达驱动牵引等操作耗时较长,而对于显示模块只需刷新输出画面即可实现眼球移动,由于显示模块的输出画面可以达到60Hz、120Hz甚至更高的刷新频率,因此眼球移动相当于即时响应,远远快于机械旋转。另一方面,在拟人化的角度而言,在跟随物体的过程中,往往是眼睛转动速度快于身体转动,因此通过旋转部件来控制机器人的身体转动,通过显示模块模拟眼球运动,则使得机器人的凝视跟踪过程的拟人化程度更高,进一步提高了用户的使用体验。基于上述原因,机器人的眼神视线对准目标对象的关键中心的耗时,会短于机器人的正面面向对准目标对象的正面面向。In this embodiment, when the robot controls the rotation of the rotating part, it needs to start the motor drive, and drive the rotating part to rotate through the motor drive, so as to realize the mechanical rotation of the robot, but the motor-driven start, motor-driven traction and other operations are time-consuming. Long, and for the display module, the eyeball movement can be realized by only refreshing the output image. Since the output image of the display module can reach a refresh rate of 60Hz, 120Hz or even higher, the eyeball movement is equivalent to an instant response, which is much faster than mechanical rotation. On the other hand, from the perspective of anthropomorphism, in the process of following objects, the eyes often rotate faster than the body. Therefore, the rotation of the robot's body is controlled by rotating parts, and the eye movement is simulated by the display module, which makes the robot The gaze tracking process is more personified, which further improves the user experience. Based on the above reasons, the time it takes for the robot's eyes to align with the key center of the target object is shorter than the front face of the robot aligning with the front face of the target object.
作为示例而非限定,图16示出了本申请一实施例提供的机器人的眼部视线首次对准目标对象的示意图。参见图16所示,在凝视跟踪行为之前,机器人的眼部视线方向为PA方向,此时机器人的正面面向也是朝着PA方向。此时,机器人眼部视线方向与目标对象之间的目标偏移角度为∠APD,即θ,此时机器人可以控制显示模块的输出画面中的眼球进行转动以及机器人的身体进行移动。由于眼球转动较快且叠加了机器人的身体的转动角度,因此,在机器人的眼部视线对准目标对象时,机器人的正面面向仍处于PB方向,并未对准目标对象,此时,眼球相当于机器人的转动角度为α,而机器人的身体相当于目标对象的转动角度为β,因此眼球相对于目标对象的实际转动角度为α+β=θ。因此,眼部视线会优先机器人的正面面向对准目标对象。As an example and not a limitation, FIG. 16 shows a schematic diagram of a robot's eye sight aligned with a target object for the first time according to an embodiment of the present application. As shown in FIG. 16, before the gaze tracking behavior, the direction of the robot's eye line of sight is the PA direction, and the front face of the robot is also facing the PA direction at this time. At this time, the target offset angle between the robot's eye direction and the target object is ∠APD, which is θ. At this time, the robot can control the eyeballs in the output screen of the display module to rotate and the robot's body to move. Since the eyeball rotates fast and the rotation angle of the robot's body is superimposed, when the robot's eye line of sight is aligned with the target object, the front face of the robot is still in the PB direction and is not aligned with the target object. At this time, the eyeballs are equivalent. The rotation angle of the robot is α, and the rotation angle of the robot's body equivalent to the target object is β, so the actual rotation angle of the eyeball relative to the target object is α+β=θ. Therefore, the eye line of sight will prioritize the front of the robot to align with the target object.
Further, FIG. 17 shows a flowchart of a specific implementation of S5031 of a robot control method provided by another embodiment of the present application. Referring to FIG. 17, compared with the embodiment described in FIG. 15, S5031 of the robot control method provided in this embodiment includes S1701 to S1702, detailed as follows:
In S1701, a horizontal deflection amount and a vertical deflection amount are determined based on the real-time rotation angle.
In this embodiment, the simulated eyeball in the output image of the display module has two degrees of freedom and can move in any direction within the plane of the display screen, and every direction of movement can be decomposed into a horizontal component and a vertical component. Therefore, after receiving the real-time rotation angle fed back by the rotating component, the robot can decompose that rotation angle in the horizontal and vertical directions to obtain the horizontal deflection amount and the vertical deflection amount.
In a possible implementation, if the robot's body rotation is controlled by several different rotating components, each corresponding to one rotation direction, the above real-time rotation angle may be composed of the rotation components of the individual rotating components. The robot can determine the rotation direction corresponding to each rotating component, superimpose all rotation components in the horizontal direction to obtain the horizontal deflection amount, and superimpose all rotation components in the vertical direction to obtain the vertical deflection amount.
In another possible implementation, if the robot's body rotation is controlled by a single multi-degree-of-freedom rotating component that can rotate in multiple directions, the robot can decompose the three-dimensional rotation angle fed back by that rotating component to obtain the horizontal deflection amount and the vertical deflection amount.
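For illustration only, the following Python sketch shows the kind of decomposition described above; the patent discloses no code, and the per-component feedback format used here is an assumption:

```python
# A minimal sketch (not from the patent): sum the per-component rotation
# feedback into a horizontal (yaw) and a vertical (pitch) deflection amount.

def decompose_rotation(feedback):
    """feedback: list of (axis, angle_deg) pairs reported by the rotating
    components, where axis is 'yaw' or 'pitch' (assumed format)."""
    motor_yaw = sum(a for axis, a in feedback if axis == "yaw")
    motor_pitch = sum(a for axis, a in feedback if axis == "pitch")
    return motor_yaw, motor_pitch

# Example: torso yaw 10 deg, head yaw 5 deg, head pitch -3 deg
motor_yaw, motor_pitch = decompose_rotation(
    [("yaw", 10.0), ("yaw", 5.0), ("pitch", -3.0)])
```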
In S1702, the eye feature target position is determined according to the target deflection angle, the horizontal deflection amount, and the vertical deflection amount; specifically, the eye feature target position is:
eye_yaw = target_yaw - motor_yaw
eye_pitch = target_pitch - motor_pitch
where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
In this embodiment, the robot can decompose the target deflection angle to obtain its horizontal and vertical components. Because the relative motion between the robot's eyeball and the target object comprises both the motion of the eyeball relative to the robot and the motion of the robot relative to the target object, the actual motion of the eyeball relative to the robot must subtract the robot's motion contribution relative to the target object; that is, the horizontal component is computed as eye_yaw = target_yaw - motor_yaw, and the vertical component as eye_pitch = target_pitch - motor_pitch. The eye feature target position refers to the offset position of the eyeball relative to the robot when, in the robot's display image, the eyeball is aligned with the target object.
In this embodiment of the present application, by decomposing the target deflection angle into horizontal and vertical components, the horizontal and vertical components of the position of the target simulated eye feature can be obtained, uniquely determining the eye feature target position. The above calculation is simple and therefore reduces the computational load on the robot.
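A minimal sketch of these formulas, with the function name chosen for illustration:

```python
# The eye feature target position is the target deflection minus the
# rotation the body has already covered (eye_yaw/eye_pitch formulas above).

def eye_feature_target(target_yaw, target_pitch, motor_yaw, motor_pitch):
    eye_yaw = target_yaw - motor_yaw        # horizontal component
    eye_pitch = target_pitch - motor_pitch  # vertical component
    return eye_yaw, eye_pitch

# Example: target 30 deg yaw / 10 deg pitch, body already turned 12 deg yaw
print(eye_feature_target(30.0, 10.0, 12.0, 0.0))  # -> (18.0, 10.0)
```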
In S5032, after the eye line of sight looks in the direction of the target object for the first time, the position of the simulated eye feature is dynamically adjusted according to the real-time rotation angle, so as to keep the eye line of sight looking in the direction of the target object.
In this embodiment, after the robot first aligns with the target object, the robot is still rotating; that is, its rotation angle has not yet reached the preset target deflection angle. In order to keep the eye line of sight continuously aligned with the target object, the robot can obtain the real-time rotation angle and continue adjusting the position of the simulated eye feature of the simulated eyeball in the output image of the display module, gradually returning the simulated eye feature toward the center. Finally, after the rotating component has rotated to the target deflection angle, the robot's front face squarely faces the target object and the robot's eye line of sight coincides with its front face; that is, the eyeball has rotated back to the central region of the robot's eye (the eyeball has returned to center).
In this embodiment, the above manner of adjusting the position of the simulated eye feature according to the real-time rotation angle may specifically be: based on the real-time rotation angle fed back by the rotating component at each feedback moment, determine the rotation speed of the rotating component, and control the simulated eyeball in the display module to move at that rotation speed in the direction opposite to the rotation direction of the rotating component. This offsets the displacement caused by the rotation of the rotating component, thereby keeping the eye line of sight aligned with the target object.
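The compensation loop could look roughly like the following sketch; read_rotation() and set_eye_position() are hypothetical interfaces standing in for the rotating component's feedback and the display module's eyeball position, and the period and tolerance values are assumptions:

```python
import time

# Gaze-hold sketch: after first lock-on, counter-move the simulated eyeball
# against the body's rotation so the gaze stays on the target object.

def hold_gaze(read_rotation, set_eye_position, target_yaw, target_pitch,
              period_s=0.02, tol=0.1):
    while True:
        motor_yaw, motor_pitch = read_rotation()  # real-time feedback
        # Counter-rotate the eye by the amount the body has already turned.
        set_eye_position(target_yaw - motor_yaw, target_pitch - motor_pitch)
        if (abs(target_yaw - motor_yaw) <= tol
                and abs(target_pitch - motor_pitch) <= tol):
            break  # body reached the target deflection; eye is re-centered
        time.sleep(period_s)
```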
In a possible implementation, the robot's rotating component has one degree of freedom, used to control the head to rotate left and right. FIG. 18 is a schematic diagram of a robot whose rotating component has one degree of freedom, according to an embodiment of the present application. As shown in FIG. 18, when the robot controls the head to rotate left and right, the line of sight of the simulated eyeball on the display module is first aligned with the target object. The robot's head can then continue rotating, and the robot's eyeball can move in the opposite direction in the horizontal plane according to the head's horizontal rotation amount, while remaining stationary in the vertical direction.
In a possible implementation, the robot's rotating components have two degrees of freedom, used to control the head to rotate left-right and up-down. FIG. 19 is a schematic diagram of a robot whose rotating components have two degrees of freedom, according to an embodiment of the present application. As shown in FIG. 19, the robot's head can rotate in two directions: rotating component 1 controls the head to rotate left and right, and rotating component 2 controls the head to rotate up and down. Because the rotation of these two rotating components takes a relatively long time, the line of sight of the simulated eyeball on the display module is first aligned with the target object. The robot can then continue controlling the head to rotate through rotating component 1 and rotating component 2, and the robot's eyeball can move in the opposite horizontal direction according to the horizontal rotation amount of rotating component 1, and in the opposite vertical direction according to the vertical rotation amount of rotating component 2, so that the robot's line of sight continuously tracks the target object.
In this embodiment of the present application, the robot couples the real-time rotation amount of its rotating components into the control of the eyeball's movement, which improves the accuracy of gaze tracking. Moreover, after the eyeball is first aligned with the target, the position of the simulated eye feature can be dynamically adjusted according to the real-time rotation amount of the rotating components, keeping the line of sight aligned with the target object at all times and thereby improving the degree of anthropomorphism.
Further, when the robot's torso and head are separate and their movements can be controlled by different rotating components (namely a head rotating component and a torso rotating component), controlling the robot to rotate according to the position information of the target object in S502 may include operations S2001 to S2003. FIG. 20 shows a flowchart of a specific implementation of controlling the robot to rotate according to the position information of the target object in S502 of a robot control method provided by another embodiment of the present application. Referring to FIG. 20, compared with the embodiment described in FIG. 5, S502 of the robot control method provided in this embodiment includes S2001 to S2003, detailed as follows:
In S2001, the torso of the robot is controlled to rotate based on the target deflection angle; the target deflection angle is the deflection angle corresponding to the robot's torso rotating from an initial angle to facing the target object.
In this embodiment, since the motion of the robot's torso is not affected by the motion of the head or the eyeballs, the rotation angle required for the robot to turn from its initial angle to alignment with the target object is the above target deflection angle, and the robot's torso can be controlled to rotate based on the target deflection angle. The initial angle is the direction angle toward which the front of the robot's torso faces.
Further, FIG. 21 shows a flowchart of a specific implementation of S2001 of a robot control method provided by another embodiment of the present application. Referring to FIG. 21, compared with the embodiment described in FIG. 20, S2001 of the robot control method provided in this embodiment includes S2101 to S2103, detailed as follows:
In S2101, a first reference rotation angle of the robot's torso in each control period is acquired at a preset control period.
In this embodiment, controlling the rotation of the robot's torso can be implemented through a torso rotating component deployed on the robot's torso. The robot sends a control instruction to the torso rotating component at a preset control period. Each control instruction can contain the rotation angle of the torso rotating component for the whole rotation process, and this rotation angle is dynamically adjusted according to the actual rotation amount fed back in each control period, so as to ensure the accuracy of the rotation operation. Therefore, when the robot sends a first-period rotation instruction to the torso rotating component, it needs to determine the robot's current posture, i.e. the rotation angle of the robot's torso at the start of the control period, which is the above first reference rotation angle.
In S2102, a first-period rotation instruction corresponding to the current control period is generated according to the first reference rotation angle of the current control period, the first tracking error angle of the previous control period, and the torso rotation angle (the target deflection angle); specifically, the rotation angle in the first-period rotation instruction is:
motor0_yaw_diff = target_yaw - current_motor0_yaw
motor0_yaw = current_motor0_yaw + motor0_Kp * motor0_yaw_diff + motor0_Kd * (motor0_yaw_diff - motor0_yaw_last_diff)
where motor0_yaw is the rotation angle in the first-period rotation instruction; current_motor0_yaw is the first reference rotation angle of the current control period; target_yaw is the target deflection angle; motor0_yaw_diff is the first tracking error angle of the current control period; motor0_yaw_last_diff is the first tracking error angle of the previous control period; and motor0_Kp and motor0_Kd are preset adjustment parameters of the torso rotating component.
In this embodiment, because the operation of the torso rotating component is neither uniform in speed nor perfectly stable, a certain rotation error exists during actual rotation. To eliminate the error introduced by the robot's mechanical rotation, the rotation angle can be adjusted in real time through motor0_Kp and motor0_Kd corresponding to the torso rotating component. These two parameters can be configured by default when the robot leaves the factory, or determined by big-data learning from the robot's historical rotation records in historical control operations; of course, they can also be updated in real time by communicating with a cloud server. The manner of obtaining these preset parameters is not limited here. motor0_Kp is the calibration parameter for the current period; motor0_Kd is the calibration parameter for the period-to-period iteration.
In this embodiment, the first tracking error angle represents the difference from the target deflection angle, i.e. the angle through which the robot's torso still needs to rotate. Since current_motor0_yaw is the angle through which the robot's torso has already deflected, i.e. the first reference rotation angle, when that angle equals the target deflection angle the error value is 0 and there is no deviation: the front of the robot's torso is aligned with the target object and no further rotation is needed. Conversely, if the value is non-zero, the robot still needs to continue rotating, and a first-period control instruction is generated.
In S2103, the torso of the robot is controlled to rotate according to each first-period rotation instruction.
In this embodiment, the robot generates one first-period rotation instruction in each control period. The first-period rotation instruction contains the determined angle through which the rotating component needs to rotate, and the first-period control instruction is sent to the torso rotating component in each control period to control the torso rotating component to rotate until its rotation angle reaches the target deflection angle, at which point the robot's current frontal pose is recognized as being aligned with the target object.
In this embodiment of the present application, by generating in each period one first-period rotation instruction for controlling the torso rotating component, precise period-by-period control of the rotating component is achieved, and when generating the first-period rotation instruction, motor0_Kp and motor0_Kd are used to eliminate the mechanical rotation deviation, improving the accuracy of robot control.
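Assuming the PD-style form reconstructed above, one period of the command generation might be sketched as follows; the gain values in the example are arbitrary assumptions:

```python
# Per-period PD-style command sketch; kp and kd play the roles of
# motor0_Kp and motor0_Kd in the formula above.

def pd_cycle_command(current_angle, target_angle, last_diff, kp, kd):
    diff = target_angle - current_angle  # tracking error of this period
    command = current_angle + kp * diff + kd * (diff - last_diff)
    return command, diff                 # diff is carried into the next period

# Torso example: currently at 0 deg, target deflection 30 deg
cmd, last_diff = pd_cycle_command(0.0, 30.0, 0.0, kp=0.8, kd=0.1)
```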
In S2002, during rotation of the robot's torso, the robot's head is dynamically controlled, according to the real-time torso angle fed back in real time and the target deflection angle, to rotate to a head target position, the head target position being the position at which the robot's head first faces the direction of the target object.
In this embodiment, the process by which the robot aligns the front face of its head with the front face of the target object can be divided into two aspects: one is aligning the front face of the robot's head with the front face of the target object, and the other is aligning the front face of the robot's torso with the front face of the target object. When both of these parts are aligned with the front face of the target object, the robot is recognized as being aligned with the target object. These two aspects can be accomplished by different components of the robot. For aligning the front face of the robot's head, the robot's head pose can be adjusted through the robot's head rotating component, thereby aligning the head's front face with the target object's front face; for aligning the front face of the robot's torso, the robot's torso pose can be adjusted through the robot's torso rotating component, thereby aligning the torso's front face with the target object's front face.
It should be noted that S2001 and S2002 are executed simultaneously: while the robot controls its torso rotating component to rotate, it can also control its head rotating component to rotate. The rotation speed of the head rotating component relative to the target object is higher than that of the torso rotating component relative to the target object; that is, the head becomes aligned with the front face of the target object before the body does. The reasons are as follows:
Since the robot's head is mounted on the robot's torso, the rotation of the head component is coupled with the motion of the robot's torso. The rotation angle of the head component relative to the target object comprises the rotation angle of the head component relative to the robot (i.e. the angle at which the robot controls the head component) and the rotation of the robot's torso (i.e. the rotation performed by the robot through the torso rotating component). Therefore, the actual rotation angle of the head component needs to be adjusted according to the actual rotation angle of the torso rotating component. On this basis, while the torso rotating component rotates, it feeds back the real-time torso angle to the robot in real time. The robot can superimpose the current rotation angle of its head onto the real-time torso angle of the torso rotating component to judge whether the above target offset angle has been reached, and thereby determine whether the front face of the robot's head is aligned with the target object. If the sum of the current head rotation angle and the real-time torso angle is less than the target deflection angle, the front face of the robot's head is not yet aligned with the target object, and the head rotating component and the torso rotating component continue to be controlled to rotate; conversely, if the sum equals the target deflection angle, the front face of the robot's head is recognized as being aligned with the target object.
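A minimal sketch of this coupled alignment check, with an assumed angular tolerance:

```python
# The head's rotation relative to the target object is the sum of the head
# feedback angle and the real-time torso angle, as described above.

def head_aligned(head_angle, torso_angle, target_deflection, tol=0.5):
    total = head_angle + torso_angle  # coupled rotation toward the target
    return abs(total - target_deflection) <= tol
```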
From the perspective of anthropomorphism, when following an object, the head usually turns faster than the torso. Therefore, the torso rotating component controls the robot's torso rotation, and the head rotating component controls the robot's head rotation on top of the coupled torso rotation, so that the head aligns with the target object before the torso does. The robot's gaze tracking process is thus more anthropomorphic, further improving the user experience.
In S2003, after the robot's head first faces the direction of the target object, the rotation of the robot's head is dynamically controlled according to the real-time torso angle, so as to keep the robot's head facing the direction of the target object.
In this embodiment, after the front face of the robot's head is first aligned with the target object, the torso rotating component is still rotating; that is, its rotation angle has not yet reached the preset target deflection angle. In order to keep the front face of the robot's head continuously aligned with the target object, the robot can obtain the real-time torso angle of the torso rotating component and continue controlling the robot's head to rotate through the head rotating component, gradually returning the head toward the center. Finally, after the torso rotating component rotates to the target deflection angle, the front face of the robot's torso squarely faces the target object, and the front face of the robot's head coincides with the front face of the robot's torso; that is, the head has returned to center.
In this embodiment, the above manner of dynamically controlling the head's rotation according to the real-time torso angle may specifically be: based on the real-time torso angle fed back by the torso rotating component at each feedback moment, determine the torso rotation speed of the torso rotating component, and control the head rotating component to move at that torso rotation speed in the direction opposite to the rotation direction of the torso rotating component. This offsets the displacement caused by the rotation of the torso rotating component, thereby keeping the front face of the head aligned with the target object.
In this embodiment of the present application, the robot couples the real-time rotation amount of the robot's torso rotating component into the control of the head's rotation, which improves the accuracy of gaze tracking. Moreover, after the front face of the head is first aligned with the target object, the rotation of the head rotating component can be dynamically adjusted according to the real-time rotation amount of the torso rotating component, keeping the front face of the head aligned with the target object at all times and thereby improving the degree of anthropomorphism.
Further, when the rotation of the robot's head has two degrees of freedom, for example where two different head rotating components (namely a first head rotating component for left-right rotation and a second head rotating component for up-down rotation) respectively control the head's left-right and up-down rotation, S2002 may include operations S2201 to S2203. FIG. 22 shows a flowchart of a specific implementation of S2002 of a robot control method provided by another embodiment of the present application. Referring to FIG. 22, compared with the embodiment described in FIG. 20, S2002 of the robot control method provided in this embodiment includes S2201 to S2203, detailed as follows:
In S2201, a horizontal deflection angle and a vertical deflection angle are determined based on the target deflection angle.
In this embodiment, because the robot's head has two degrees of freedom, with the first head rotating component controlling the head's rotation in the horizontal direction and the second head rotating component controlling its rotation in the vertical direction, the robot can decompose the target deflection angle in the horizontal and vertical directions to obtain the horizontal deflection angle and the vertical deflection angle.
FIG. 23 is a schematic diagram of a robot whose rotating components have three degrees of freedom, according to an embodiment of the present application. As shown in FIG. 23, the robot's head can rotate in two directions: rotating component 1 controls the head to rotate left and right, and rotating component 2 controls the head to rotate up and down; the robot's torso can rotate in one direction, i.e. rotating component 0 controls the torso to rotate left and right. Because only the second head rotating component provides control in the vertical direction, the second head rotating component carries the entire vertical rotation amount; in the horizontal direction, however, there are both the first head rotating component (rotating component 1) for controlling the head's left-right rotation and the torso rotating component (rotating component 0) for controlling the torso's left-right rotation, so the torso's real-time torso angle must be taken into account while the head rotates.
In S2202, the robot's head is controlled to rotate in the vertical direction according to the vertical deflection angle.
In this embodiment, since only the second head rotating component provides control in the vertical direction and the torso's rotation does not affect the vertical rotation angle, the second head rotating component can be controlled to rotate according to the vertical deflection angle determined above, so that the head moves to the target position in the vertical direction.
Further, FIG. 24 shows a flowchart of a specific implementation of S2202 of a robot control method provided by another embodiment of the present application. Referring to FIG. 24, compared with the embodiment described in FIG. 22, S2202 of the robot control method provided in this embodiment includes S2401 to S2403, detailed as follows:
In S2401, a second reference rotation angle of the robot's head in the vertical direction in each control period is acquired at a preset control period.
In this embodiment, the robot sends a control instruction to the second head rotating component at a preset control period. Each control instruction can contain the vertical rotation angle of the second head rotating component for the whole rotation process, and this vertical rotation angle is dynamically adjusted according to the actual vertical rotation amount fed back in each control period. Therefore, when the robot sends a second-period rotation instruction to the second head rotating component, it needs to determine the current vertical posture of the robot's head, i.e. the vertical rotation angle of the robot's head at the start of the control period, which is the above second reference rotation angle.
In S2402, a second-period rotation instruction corresponding to the current control period is generated according to the second reference rotation angle of the current control period, the second tracking error angle of the previous control period, and the vertical deflection angle; specifically, the rotation angle in the second-period rotation instruction is:
motor2_pitch_diff = target_pitch - current_motor2_pitch
motor2_pitch = current_motor2_pitch + motor2_Kp * motor2_pitch_diff + motor2_Kd * (motor2_pitch_diff - motor2_pitch_last_diff)
where motor2_pitch is the rotation angle in the second-period rotation instruction; current_motor2_pitch is the second reference rotation angle of the current control period; target_pitch is the vertical deflection angle; motor2_pitch_diff is the second tracking error angle of the current control period; motor2_pitch_last_diff is the second tracking error angle of the previous control period; and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating component.
In this embodiment, because the operation of the second head rotating component is neither uniform in speed nor perfectly stable, a certain rotation error exists during actual rotation. To eliminate the error introduced by the robot's mechanical rotation, the rotation angle can be adjusted in real time through motor2_Kp and motor2_Kd corresponding to the second head rotating component. These two parameters can be configured by default when the robot leaves the factory, or determined by big-data learning from the robot's historical rotation records in historical control operations; of course, they can also be updated in real time by communicating with a cloud server. The manner of obtaining these preset parameters is not limited here. motor2_Kp is the calibration parameter for the current period; motor2_Kd is the calibration parameter for the period-to-period iteration.
In this embodiment, the second tracking error angle represents the difference from the target deflection angle in the vertical direction, i.e. the angle through which the robot's head still needs to rotate in the vertical direction. Since current_motor2_pitch is the angle through which the robot's head has already deflected in the vertical direction, i.e. the second reference rotation angle, when that angle equals the vertical component of the target deflection angle the error value is 0 and there is no deviation: the robot's head has reached the target position in the vertical direction and no further rotation is needed. Conversely, if the value is non-zero, the second head rotating component still needs to continue rotating, and a second-period control instruction is generated.
In S2403, the robot's head is controlled to rotate in the vertical direction according to each second-period rotation instruction.
In this embodiment, the robot generates one second-period rotation instruction in each control period. The second-period rotation instruction contains the determined angle through which the second head rotating component needs to rotate, and the second-period control instruction is sent to the second head rotating component in each control period to control the second head rotating component to rotate until the above vertical deflection angle is reached.
In this embodiment of the present application, by generating in each period one second-period rotation instruction for controlling the second head rotating component, precise period-by-period control of the second head rotating component is achieved, and when generating the second-period rotation instruction, motor2_Kp and motor2_Kd are used to eliminate the mechanical rotation deviation, improving the accuracy of robot control.
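Under the same PD-style assumption as the torso sketch after S2103, one second-period pitch command might be computed as follows; all values are illustrative:

```python
# Second-period command sketch for the pitch axis, which is not coupled
# to the torso; gains are arbitrary assumptions.
current_motor2_pitch, target_pitch = 0.0, 12.0
motor2_Kp, motor2_Kd, last_diff = 0.8, 0.1, 0.0

diff = target_pitch - current_motor2_pitch           # second tracking error
motor2_pitch = (current_motor2_pitch + motor2_Kp * diff
                + motor2_Kd * (diff - last_diff))    # next command angle
```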
In S2203, during rotation of the robot's torso, the robot's head is dynamically controlled, according to the real-time torso angle fed back in real time and the horizontal deflection angle, to rotate in the horizontal direction to a horizontal target position; the horizontal target position is the position at which the robot's head first faces the direction of the target object in the horizontal direction.
In this embodiment, since the robot's head is mounted on the robot's torso, the horizontal rotation of the head component is coupled with the horizontal motion of the robot's torso. The horizontal rotation angle of the head component relative to the target object comprises the horizontal rotation angle of the head component relative to the robot (i.e. the angle at which the robot controls the first head component) and the horizontal rotation of the robot's torso (i.e. the rotation performed by the robot through the torso rotating component). Therefore, the actual horizontal rotation angle of the head component needs to be adjusted according to the actual horizontal rotation angle of the torso rotating component.
Further, FIG. 25 shows a flowchart of a specific implementation of S2203 of a robot control method provided by another embodiment of the present application. Referring to FIG. 25, compared with the embodiment described in FIG. 22, S2203 of the robot control method provided in this embodiment includes S2501 to S2503, detailed as follows:
In S2501, a third reference rotation angle of the robot's head in the horizontal direction in each control period is acquired at a preset control period.
In this embodiment, similarly to S2401, the robot obtains the third reference rotation angle from the first head rotating component at a preset control period. For the specific parameters, reference may be made to the related description of S2401, which is not repeated here.
In S2502, a third-period rotation instruction corresponding to the current control period is generated according to the third reference rotation angle of the current control period, the third tracking error angle of the previous control period, the real-time torso angle, and the horizontal deflection angle; specifically, the rotation angle in the third-period rotation instruction is:
motor1_yaw_diff = target_yaw - (current_motor1_yaw + current_motor0_yaw)
motor1_yaw = current_motor1_yaw + motor1_Kp * motor1_yaw_diff + motor1_Kd * (motor1_yaw_diff - motor1_yaw_last_diff)
where motor1_yaw is the rotation angle in the third-period rotation instruction; current_motor1_yaw is the third reference rotation angle of the current control period; target_yaw is the horizontal deflection angle; motor1_yaw_diff is the third tracking error angle of the current control period; motor1_yaw_last_diff is the third tracking error angle of the previous control period; motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating component; and current_motor0_yaw is the real-time torso angle.
In this embodiment, likewise, to eliminate mechanical motion error, motor1_Kp and motor1_Kd can be introduced for parameter correction when determining the rotation angle in the third-period rotation instruction.
In this embodiment, because the torso's left-right rotation current_motor0_yaw is coupled into current_motor1_yaw while the first head rotating component performs its left-right rotation, this coupling must be taken into account when calculating the third tracking error angle.
In S2503, the robot's head is dynamically controlled to rotate in the horizontal direction according to each third-period rotation instruction.
In this embodiment, the robot generates one third-period rotation instruction in each control period. The third-period rotation instruction contains the determined angle through which the first head rotating component needs to rotate, and the third-period control instruction is sent to the first head rotating component in each control period to control the first head rotating component to rotate until the above horizontal deflection angle is reached.
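A sketch of the coupled third-period command under the same PD-style assumption; note how the torso's real-time yaw is subtracted when forming the error, since the head is carried by the rotating torso:

```python
# Coupled third-period command sketch; gains are arbitrary assumptions.

def head_yaw_command(current_motor1_yaw, current_motor0_yaw, target_yaw,
                     last_diff, kp, kd):
    # Coupled error: target minus the yaw already covered by head + torso
    diff = target_yaw - (current_motor1_yaw + current_motor0_yaw)
    command = current_motor1_yaw + kp * diff + kd * (diff - last_diff)
    return command, diff

# Example: head at 0 deg, torso already turned 8 deg, target yaw 30 deg
cmd, last_diff = head_yaw_command(0.0, 8.0, 30.0, 0.0, kp=0.8, kd=0.1)
```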
In this embodiment of the present application, the horizontal rotation amount of the robot's torso is taken into account in the process of controlling the head's horizontal rotation, which improves the accuracy of the rotation control.
It should be understood that the size of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the robot control method described in the above embodiments, FIG. 26 shows a structural block diagram of a robot control apparatus provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 26, the robot control apparatus includes:
a position information acquisition unit 261, configured to acquire position information of a target object;
a rotation control unit 262, configured to control the robot to rotate according to the position information of the target object; and
an image adjustment unit 263, configured to dynamically adjust, during rotation of the robot, an output image of a display module on the robot for simulating eye features, so that the eye line of sight of the simulated eye features in the output image of the display module on the robot looks in the direction of the target object during the robot's rotation.
The image adjustment unit 263 is specifically configured to: during the robot's rotation, dynamically adjust the output image of the display module on the robot for simulating eye features according to a target deflection angle, the target deflection angle being the deflection angle corresponding to the robot rotating from an initial angle to facing the target object.
Optionally, the image adjustment unit 263 includes:
an eye movement control unit, configured to, in the process of controlling the robot's rotation, dynamically adjust the position of the simulated eye features in the output image to an eye feature target position according to the real-time rotation angle fed back by the robot in real time and the target deflection angle, the eye feature target position being the position at which the eye line of sight simulated by the output image looks in the direction of the target object for the first time; and
an eye re-centering control unit, configured to, after the eye line of sight looks in the direction of the target object for the first time, dynamically adjust the position of the simulated eye features according to the real-time rotation angle, so as to keep the eye line of sight looking in the direction of the target object.
Optionally, the eye movement control unit includes:
a deflection component determination unit, configured to determine a horizontal deflection amount and a vertical deflection amount based on the real-time rotation angle; and
an eye feature target position determination unit, configured to determine the eye feature target position according to the target deflection angle, the horizontal deflection amount, and the vertical deflection amount; specifically, the eye feature target position is:
eye_yaw = target_yaw - motor_yaw
eye_pitch = target_pitch - motor_pitch
where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
Optionally, the rotation control unit 262 includes:
a torso rotation control unit, configured to control the robot's torso to rotate based on the target deflection angle, the target deflection angle being the deflection angle corresponding to the robot's torso rotating from an initial angle to facing the target object;
a head rotation control unit, configured to, during rotation of the robot's torso, dynamically control the robot's head to rotate to a head target position according to the real-time torso angle fed back in real time and the target deflection angle, the head target position being the position at which the robot's head first faces the direction of the target object; and
a head rotation re-centering unit, configured to, after the robot's head first faces the direction of the target object, dynamically control the rotation of the robot's head according to the real-time torso angle, so as to keep the robot's head facing the direction of the target object.
Optionally, the torso rotation control unit includes:
a first reference rotation angle determination unit, configured to acquire, at a preset control period, a first reference rotation angle of the robot's torso in each control period;
a first-period rotation instruction generation unit, configured to generate a first-period rotation instruction corresponding to the current control period according to the first reference rotation angle of the current control period, the first tracking error angle of the previous control period, and the torso rotation angle; specifically, the rotation angle in the first-period rotation instruction is:
motor0_yaw_diff = target_yaw - current_motor0_yaw
motor0_yaw = current_motor0_yaw + motor0_Kp * motor0_yaw_diff + motor0_Kd * (motor0_yaw_diff - motor0_yaw_last_diff)
where motor0_yaw is the rotation angle in the first-period rotation instruction; current_motor0_yaw is the first reference rotation angle of the current control period; target_yaw is the target deflection angle; motor0_yaw_diff is the first tracking error angle of the current control period; motor0_yaw_last_diff is the first tracking error angle of the previous control period; and motor0_Kp and motor0_Kd are preset adjustment parameters of the torso rotating component; and
a first-period rotation instruction execution unit, configured to control the robot's torso to rotate according to each first-period rotation instruction.
Optionally, the head rotation control unit includes:
a rotation angle component determination unit, configured to determine a horizontal deflection angle and a vertical deflection angle based on the target deflection angle;
a first head rotation control unit, configured to control the robot's head to rotate in the vertical direction according to the vertical deflection angle; and, simultaneously,
a second head rotation control unit, configured to, during rotation of the robot's torso, dynamically control the robot's head to rotate in the horizontal direction to a horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle, the horizontal target position being the position at which the robot's head first faces the direction of the target object in the horizontal direction.
Optionally, the first head rotation control unit includes:
a second reference rotation angle determination unit, configured to acquire, at a preset control period, a second reference rotation angle of the robot's head in the vertical direction in each control period;
a second-period rotation instruction generation unit, configured to generate a second-period rotation instruction corresponding to the current control period according to the second reference rotation angle of the current control period, the second tracking error angle of the previous control period, and the vertical deflection angle; specifically, the rotation angle in the second-period rotation instruction is:
motor2_pitch_diff = target_pitch - current_motor2_pitch
motor2_pitch = current_motor2_pitch + motor2_Kp * motor2_pitch_diff + motor2_Kd * (motor2_pitch_diff - motor2_pitch_last_diff)
where motor2_pitch is the rotation angle in the second-period rotation instruction; current_motor2_pitch is the second reference rotation angle of the current control period; target_pitch is the vertical deflection angle; motor2_pitch_diff is the second tracking error angle of the current control period; motor2_pitch_last_diff is the second tracking error angle of the previous control period; and motor2_Kp and motor2_Kd are preset adjustment parameters of the second head rotating component; and
a second-period rotation instruction execution unit, configured to control the robot's head to rotate in the vertical direction according to each second-period rotation instruction.
Optionally, the second head rotation control unit includes:
a third reference rotation angle determination unit, configured to acquire, at a preset control period, a third reference rotation angle of the robot's head in the horizontal direction in each control period;
a third-period rotation instruction generation unit, configured to generate a third-period rotation instruction corresponding to the current control period according to the third reference rotation angle of the current control period, the third tracking error angle of the previous control period, the real-time torso angle, and the horizontal deflection angle; specifically, the rotation angle in the third-period rotation instruction is:
motor1_yaw_diff = target_yaw - (current_motor1_yaw + current_motor0_yaw)
motor1_yaw = current_motor1_yaw + motor1_Kp * motor1_yaw_diff + motor1_Kd * (motor1_yaw_diff - motor1_yaw_last_diff)
where motor1_yaw is the rotation angle in the third-period rotation instruction; current_motor1_yaw is the third reference rotation angle of the current control period; target_yaw is the horizontal deflection angle; motor1_yaw_diff is the third tracking error angle of the current control period; motor1_yaw_last_diff is the third tracking error angle of the previous control period; motor1_Kp and motor1_Kd are preset adjustment parameters of the first head rotating component; and current_motor0_yaw is the real-time torso angle; and
a third-period rotation instruction execution unit, configured to dynamically control the robot's head to rotate in the horizontal direction according to each third-period rotation instruction.
Optionally, the position information acquisition unit 261 includes:
a scene image acquisition unit, configured to acquire a scene image containing the target object through a camera module built into the robot;
an object center coordinate determination unit, configured to mark boundary coordinates of the target object in the scene image and determine the position information of the target object according to the boundary coordinates, the position information of the target object including object center coordinates of the target object; and
a target deflection angle calculation unit, configured to determine the target deflection angle according to the object center coordinates and image center coordinates of the scene image:
target_yaw = arctan(CD / OC)
target_pitch = arctan(AC / OC)
where target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; OC is the focal length of the camera module; CD is the horizontal deviation between the object center coordinates and the image center coordinates; and AC is the vertical deviation between the object center coordinates and the image center coordinates.
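A minimal sketch of this geometry, assuming pixel coordinates and a focal length expressed in pixels:

```python
import math

# The deviation of the object center from the image center, over the focal
# length, gives the target deflection angles via the arctangent above.

def target_deflection(obj_cx, obj_cy, img_cx, img_cy, focal_px):
    cd = obj_cx - img_cx  # horizontal deviation CD
    ac = obj_cy - img_cy  # vertical deviation AC
    target_yaw = math.degrees(math.atan2(cd, focal_px))
    target_pitch = math.degrees(math.atan2(ac, focal_px))
    return target_yaw, target_pitch

# Example: object center (740, 310) in a 1280x720 image, focal length 800 px
print(target_deflection(740, 310, 640, 360, 800))
```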
可选地,所述场景图像采集单元包括:Optionally, the scene image acquisition unit includes:
图像校正参量获取单元,用于根据摄像模块与所述显示模块的输出画面的模拟眼部特征的位置之间的偏移量,确定图像校正参量;The image correction parameter acquisition unit is configured to determine the image correction parameter according to the offset between the position of the simulated eye feature of the output screen of the camera module and the display module;
图像校正执行单元,用于基于所述图像校正参量调整所述摄像模块采集到的原始 图像,生成所述场景图像。The image correction execution unit is configured to adjust the original image collected by the camera module based on the image correction parameter to generate the scene image.
因此,本申请实施例提供的机器人的控制装置同样可以根据目标对象与机器人之间的相对位置,确定目标偏转角度,并基于上述目标偏转角度调整显示模块的输出画面以及旋转部件来实现眼神追随,由于该机器人的眼球运动是通过显示模块的输出界面进行模拟,移动过程中无需马达驱动实现偏转,响应时间较短,实现了平滑眼神跟随,提高了拟人化程度以及眼神跟随的准确性。Therefore, the robot control device provided by the embodiments of the present application can also determine the target deflection angle according to the relative position between the target object and the robot, and adjust the output screen and rotating parts of the display module based on the target deflection angle to achieve eye tracking. Since the eye movement of the robot is simulated through the output interface of the display module, no motor drive is required to achieve deflection during the movement, the response time is short, smooth eye tracking is realized, and the degree of personification and accuracy of eye tracking are improved.
FIG. 27 is a schematic structural diagram of a robot provided by an embodiment of this application. As shown in FIG. 27, the robot 27 of this embodiment includes: at least one processor 270 (only one is shown in FIG. 27), a memory 271, and a computer program 272 stored in the memory 271 and executable on the at least one processor 270. When the processor 270 executes the computer program 272, the steps in any of the foregoing embodiments of the robot control method are implemented.
The robot 27 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The robot may include, but is not limited to, the processor 270 and the memory 271. Those skilled in the art will understand that FIG. 27 is merely an example of the robot 27 and does not constitute a limitation on the robot 27; it may include more or fewer components than shown, a combination of certain components, or different components, and may, for example, also include input/output devices, network access devices, and so on.
The processor 270 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 271 may be an internal storage unit of the robot 27, such as a hard disk or memory of the robot 27. In other embodiments, the memory 271 may also be an external storage device of the robot 27, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the robot 27. Further, the memory 271 may include both an internal storage unit of the robot 27 and an external storage device. The memory 271 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 271 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the information exchange and execution processes between the foregoing apparatuses/units are based on the same concept as the method embodiments of this application, their specific functions and the technical effects they bring can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional units and modules is used as an example. In practical applications, the foregoing functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus is divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the foregoing system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of this application also provides a network device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor. When the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
An embodiment of this application also provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the steps in each of the foregoing method embodiments are implemented.
An embodiment of this application provides a computer program product. When the computer program product runs on a mobile terminal, the mobile terminal is caused to implement the steps in each of the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or some of the processes in the methods of the foregoing embodiments of this application may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments may be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own focus. For parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative. For example, the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The foregoing embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the protection scope of this application.

Claims (19)

  1. A method for controlling a robot, characterized in that it comprises:
    obtaining position information of a target object;
    controlling the robot to rotate according to the position information of the target object; and
    during the rotation of the robot, dynamically adjusting an output screen of a display module on the robot for simulating eye features, so that the eye line of sight of the simulated eye features in the output screen of the display module on the robot looks in the direction of the target object during the rotation of the robot.
  2. The control method according to claim 1, characterized in that the dynamically adjusting, during the rotation of the robot, the output screen of the display module on the robot for simulating eye features is specifically:
    during the rotation of the robot, dynamically adjusting the output screen of the display module on the robot for simulating eye features according to a target deflection angle, where the target deflection angle is the deflection angle corresponding to the robot rotating from an initial angle to facing the target object.
  3. The control method according to claim 2, characterized in that the dynamically adjusting, during the rotation of the robot, the output screen of the display module on the robot for simulating eye features according to the target deflection angle comprises:
    in the process of controlling the rotation of the robot, dynamically adjusting the position of the simulated eye features in the output screen to an eye feature target position according to a real-time rotation angle fed back by the robot in real time and the target deflection angle, where the eye feature target position is the position corresponding to the moment when the eye line of sight simulated by the output screen first looks in the direction of the target object; and
    after the eye line of sight first looks in the direction of the target object, dynamically adjusting the position of the simulated eye features according to the real-time rotation angle, so as to keep the eye line of sight looking in the direction of the target object.
  4. The control method according to claim 3, characterized in that the dynamically adjusting, in the process of controlling the rotation of the robot, the position of the simulated eye features in the output screen to the eye feature target position according to the real-time rotation angle fed back by the robot in real time and the target deflection angle comprises:
    determining a horizontal deflection amount and a vertical deflection amount based on the real-time rotation angle; and
    determining the eye feature target position according to the target deflection angle, the horizontal deflection amount, and the vertical deflection amount, where the eye feature target position is specifically:
    eye_yaw = target_yaw - motor_yaw
    eye_pitch = target_pitch - motor_pitch
    where eye_yaw is the horizontal component of the eye feature target position; eye_pitch is the vertical component of the eye feature target position; target_yaw is the horizontal component of the target deflection angle; target_pitch is the vertical component of the target deflection angle; motor_yaw is the horizontal deflection amount; and motor_pitch is the vertical deflection amount.
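As a reading aid only (not part of the claims), the formula of claim 4 admits a direct sketch: the eyes cover whatever part of the target deflection the rotation has not yet turned through, so the gaze stays on the target mid-rotation:

```python
def eye_feature_target(target_yaw, target_pitch, motor_yaw, motor_pitch):
    """eye_yaw/eye_pitch per the formula of claim 4: the target deflection
    minus the deflection already contributed by the robot's rotation."""
    eye_yaw = target_yaw - motor_yaw
    eye_pitch = target_pitch - motor_pitch
    return eye_yaw, eye_pitch
```

Note that as motor_yaw and motor_pitch approach the target components, eye_yaw and eye_pitch shrink to zero, which matches the recentering behavior of claim 3: the simulated eyes drift back to center as the body completes its turn.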
  5. The control method according to claim 1, characterized in that the controlling the robot to rotate according to the position information of the target object comprises:
    controlling the torso of the robot to rotate based on a target deflection angle, where the target deflection angle is the deflection angle corresponding to the torso of the robot rotating from an initial angle to facing the target object;
    during the rotation of the torso of the robot, dynamically controlling the head of the robot to rotate to a head target position according to a real-time torso angle fed back in real time and the target deflection angle, where the head target position is the position corresponding to the moment when the head of the robot first faces the direction of the target object; and
    after the head of the robot first faces the direction of the target object, dynamically controlling the head of the robot to rotate according to the real-time torso angle, so as to keep the head of the robot facing the direction of the target object.
  6. The control method according to claim 5, characterized in that the controlling the torso of the robot to rotate based on the target deflection angle comprises:
    acquiring, in a preset control period, a first reference rotation angle of the torso of the robot in each control period;
    generating a first cycle rotation instruction corresponding to the current control period according to the first reference rotation angle of the current control period, a first tracking error angle of the previous control period, and the torso rotation angle; and
    controlling the torso of the robot to rotate according to each first cycle rotation instruction.
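Purely for illustration, one plausible reading of the per-period command generation in claim 6, assuming the tracking error is the previous period's commanded-versus-achieved gap (a definition the claim itself does not fix), is:

```python
def first_cycle_command(reference_angle, prev_tracking_error, torso_target):
    """Sketch of one control period of claim 6: the command covers the
    rotation still remaining toward the torso target, corrected by the
    tracking error carried over from the previous period."""
    remaining = torso_target - reference_angle
    return remaining + prev_tracking_error
```

Called once per preset control period with that period's first reference rotation angle, this yields the sequence of first cycle rotation instructions that drives the torso.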
  7. The control method according to claim 5, wherein the dynamically controlling, during the rotation of the torso of the robot, the head of the robot to rotate to the head target position according to the real-time torso angle fed back in real time and the target deflection angle, the head target position being the position where the head of the robot first faces the direction of the target object, comprises:
    determining a horizontal deflection angle and a vertical deflection angle based on the target deflection angle;
    controlling the head of the robot to rotate in the vertical direction according to the vertical deflection angle; and
    at the same time, during the rotation of the torso of the robot, dynamically controlling the head of the robot to rotate in the horizontal direction to a horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle, where the horizontal target position is the position where the head of the robot first faces the direction of the target object in the horizontal direction.
  8. The control method according to claim 7, characterized in that the controlling the head of the robot to rotate in the vertical direction according to the vertical deflection angle comprises:
    acquiring, in a preset control period, a second reference rotation angle of the head of the robot in the vertical direction in each control period;
    generating a second cycle rotation instruction corresponding to the current control period according to the second reference rotation angle of the current control period, a second tracking error angle of the previous control period, and the vertical deflection angle; and
    controlling the head of the robot to rotate in the vertical direction according to each second cycle rotation instruction.
  9. The control method according to claim 7, characterized in that the dynamically controlling, during the rotation of the torso of the robot, the head of the robot to rotate in the horizontal direction to the horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle comprises:
    acquiring, in a preset control period, a third reference rotation angle of the head of the robot in the horizontal direction in each control period;
    generating a third cycle rotation instruction corresponding to the current control period according to the third reference rotation angle of the current control period, a third tracking error angle of the previous control period, the real-time torso angle, and the horizontal deflection angle; and
    dynamically controlling the head of the robot to rotate in the horizontal direction according to each third cycle rotation instruction.
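Read alongside claim 6, the third cycle command of claim 9 plausibly subtracts the torso's real-time contribution so the head only turns through the remainder; a hedged sketch of that reading, with the same assumed tracking-error definition as above:

```python
def third_cycle_command(reference_angle, prev_tracking_error,
                        realtime_torso_angle, horizontal_deflection):
    """Sketch of one horizontal head command per control period (claim 9):
    the head aims at the horizontal deflection angle minus what the torso
    has already covered, corrected by the previous period's error."""
    head_target = horizontal_deflection - realtime_torso_angle
    remaining = head_target - reference_angle
    return remaining + prev_tracking_error
```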
  10. The control method according to any one of claims 1 to 9, characterized in that the obtaining position information of the target object comprises:
    acquiring a scene image containing the target object through a camera module built into the robot; and
    marking boundary coordinates of the target object in the scene image, and determining the position information of the target object according to the boundary coordinates, where the position information of the target object includes the object center coordinates of the target object.
  11. The control method according to claim 10, characterized in that the acquiring a scene image containing the target object through a camera module built into the robot comprises:
    determining an image correction parameter according to the offset between the camera module and the position of the simulated eye features in the output screen of the display module; and
    adjusting the original image collected by the camera module based on the image correction parameter to generate the scene image.
  12. A robot control apparatus, characterized in that it comprises:
    a position information acquisition unit, configured to obtain position information of a target object;
    a rotation control unit, configured to control the robot to rotate according to the position information of the target object; and
    a screen adjustment unit, configured to dynamically adjust, during the rotation of the robot, an output screen of a display module on the robot for simulating eye features, so that the eye line of sight of the simulated eye features in the output screen of the display module on the robot looks in the direction of the target object during the rotation of the robot.
  13. The control apparatus according to claim 12, characterized in that the screen adjustment unit is specifically configured to: during the rotation of the robot, dynamically adjust the output screen of the display module on the robot for simulating eye features according to a target deflection angle, where the target deflection angle is the deflection angle corresponding to the robot rotating from an initial angle to facing the target object.
  14. The control apparatus according to claim 13, characterized in that the screen adjustment unit comprises:
    an eye movement control unit, configured to dynamically adjust, in the process of controlling the rotation of the robot, the position of the simulated eye features in the output screen to an eye feature target position according to the real-time rotation angle fed back by the robot in real time and the target deflection angle, where the eye feature target position is the position where the eye line of sight simulated by the output screen first looks in the direction of the target object; and
    an eye recentering control unit, configured to dynamically adjust, after the eye line of sight first looks in the direction of the target object, the position of the simulated eye features according to the real-time rotation angle, so as to keep the eye line of sight looking in the direction of the target object.
  15. The control apparatus according to claim 12, characterized in that the rotation control unit comprises:
    a torso rotation control unit, configured to control the torso of the robot to rotate based on the target deflection angle, where the target deflection angle is the deflection angle corresponding to the torso of the robot rotating from an initial angle to facing the target object;
    a head rotation control unit, configured to dynamically control, during the rotation of the torso of the robot, the head of the robot to rotate to a head target position according to the real-time torso angle fed back in real time and the target deflection angle, where the head target position is the position where the head of the robot first faces the direction of the target object; and
    a head rotation recentering unit, configured to dynamically control, after the head of the robot first faces the direction of the target object, the head of the robot to rotate according to the real-time torso angle, so as to keep the head of the robot facing the direction of the target object.
  16. The control apparatus according to claim 15, characterized in that the head rotation control unit comprises:
    a rotation angle component determining unit, configured to determine a horizontal deflection angle and a vertical deflection angle based on the target deflection angle;
    a first head rotation control unit, configured to control the head of the robot to rotate in the vertical direction according to the vertical deflection angle; and, at the same time,
    a second head rotation control unit, configured to dynamically control, during the rotation of the torso of the robot, the head of the robot to rotate in the horizontal direction to a horizontal target position according to the real-time torso angle fed back in real time and the horizontal deflection angle, where the horizontal target position is the position where the head of the robot first faces the direction of the target object in the horizontal direction.
  17. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 11 when executing the computer program.
  18. A robot, comprising a processor, a display, and a transmission component for controlling the rotation of the robot, characterized in that the processor implements the method according to any one of claims 1 to 11 when executing a computer program, the transmission component controls the robot to rotate according to control instructions output by the computer program, and the display dynamically adjusts the output screen for simulating eye features according to output instructions of the computer program.
  19. A computer-readable storage medium storing a computer program, characterized in that the computer program implements the method according to any one of claims 1 to 11 when executed by a processor.
PCT/CN2021/089709 2020-05-08 2021-04-25 Robot control method and apparatus, and robot and storage medium WO2021223611A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010382291.0 2020-05-08
CN202010382291.0A CN111546338A (en) 2020-05-08 2020-05-08 Robot control method and device, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2021223611A1 (en) 2021-11-11

Family

ID=71996160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089709 WO2021223611A1 (en) 2020-05-08 2021-04-25 Robot control method and apparatus, and robot and storage medium

Country Status (2)

Country Link
CN (1) CN111546338A (en)
WO (1) WO2021223611A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111546338A (en) * 2020-05-08 2020-08-18 Huawei Technologies Co., Ltd. Robot control method and device, robot and storage medium
CN112965599B (en) * 2021-03-08 2023-11-21 Shenzhen UBTECH Technology Co., Ltd. Interaction method and device of robot and terminal and robot
CN113021338B (en) * 2021-03-16 2022-08-05 Suzhou Art & Design Technology Institute Intelligent accompanying robot
CN114872036B (en) * 2022-03-28 2023-07-04 Qingdao Zhongdian Lvwang New Energy Co., Ltd. Robot eye-head cooperative gazing behavior control method based on bionic principle
CN117146828B (en) * 2023-10-30 2024-03-19 Wangsi Technology Co., Ltd. Method and device for guiding picking path, storage medium and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647633A (en) * 2018-05-08 2018-10-12 Tencent Technology (Shenzhen) Co., Ltd. Recognition and tracking method, recognition and tracking device and robot
CN109459722A (en) * 2018-10-23 2019-03-12 Tongji University Voice interactive method based on face tracking device
CN110091345A (en) * 2018-01-31 2019-08-06 Toyota Motor Corporation Interactive robot and control program therefor
CN209364629U (en) * 2017-10-30 2019-09-10 Sony Corporation Information processing unit
CN111546338A (en) * 2020-05-08 2020-08-18 Huawei Technologies Co., Ltd. Robot control method and device, robot and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4878462B2 * 2005-09-07 2012-02-15 Advanced Telecommunications Research Institute International Communication robot
ES2358139B1 * 2009-10-21 2012-02-09 Thecorpora, S.L. Social robot
JP5526942B2 * 2010-03-31 2014-06-18 Sony Corporation Robot apparatus, control method and program for robot apparatus
CN102915044B * 2012-10-25 2016-03-30 Shanghai University A kind of robot head eye coordinate motion control method based on bionic principle
CN106041940A * 2016-06-14 2016-10-26 Jiangsu Ruobo Robot Technology Co., Ltd. Heavy-load wireless transmission five-core high-speed joint robot control system
DE112017005954T8 * 2016-11-24 2019-10-24 Groove X, Inc. Autonomous acting robot that changes the pupil
JP2019005842A * 2017-06-23 2019-01-17 Casio Computer Co., Ltd. Robot, robot controlling method, and program

Also Published As

Publication number Publication date
CN111546338A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
WO2021223611A1 (en) Robot control method and apparatus, and robot and storage medium
KR102565755B1 (en) Electronic device for displaying an avatar performed a motion according to a movement of a feature point of a face and method of operating the same
US10078377B2 (en) Six DOF mixed reality input by fusing inertial handheld controller with hand tracking
CN108475120B (en) Method for tracking object motion by using remote equipment of mixed reality system and mixed reality system
CN107209386B (en) Augmented reality view object follower
CN107577045B (en) The method, apparatus and storage medium of predicting tracing for head-mounted display
US9367136B2 (en) Holographic object feedback
US9591295B2 (en) Approaches for simulating three-dimensional views
US11127380B2 (en) Content stabilization for head-mounted displays
US20150379770A1 (en) Digital action in response to object interaction
JP7008730B2 (en) Shadow generation for image content inserted into an image
US9547412B1 (en) User interface configuration to avoid undesired movement effects
JP2019012526A (en) Image processing method, computer program, and recording medium
CN112154405B (en) Three-dimensional push notification
US20220269338A1 (en) Augmented devices
EP3669252A1 (en) Techniques for predictive prioritization of image portions in processing graphics
US11579747B1 (en) 3D user interface depth forgiveness
US20210374615A1 (en) Training a Model with Human-Intuitive Inputs
US20190339767A1 (en) Method and system for initiating application and system modal control based on hand locations
US11961195B2 (en) Method and device for sketch-based placement of virtual objects
US20210201108A1 (en) Model with multiple concurrent timescales
US11823339B2 (en) Localization accuracy response
US20240137436A1 (en) Phone case for tracking and localization
KR20240035281A (en) An augmented reality device for proving augmented reality service which controls an object in the real world space and a method for operating the same
WO2024086645A1 (en) Phone case for tracking and localization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21800193; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21800193; Country of ref document: EP; Kind code of ref document: A1)