WO2022037413A1 - Robot - Google Patents

Robot

Info

Publication number
WO2022037413A1
WO2022037413A1 (PCT/CN2021/110711)
Authority
WO
WIPO (PCT)
Prior art keywords
torso
head
robot
interface
chassis
Application number
PCT/CN2021/110711
Other languages
English (en)
French (fr)
Inventor
姚秀军
许哲涛
Original Assignee
京东科技信息技术有限公司
Application filed by 京东科技信息技术有限公司
Publication of WO2022037413A1 publication Critical patent/WO2022037413A1/zh

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a robot.
  • An embodiment of the present disclosure proposes a robot, which includes a detachable chassis, a torso, and a head.
  • the chassis and the torso are connected by plugging through a first docking terminal, and the torso and the head are connected by plugging through a second docking terminal,
  • wherein the chassis includes: a power supply module, an obstacle avoidance module, a network module, and a network port and a USB interface for pluggable connection with the first docking terminal;
  • the torso includes: a switch, a torso display screen, and a USB interface for pluggable connection with the first docking terminal, wherein the switch is connected to the network port of the first docking terminal and to the network port of the second docking terminal through different network ports;
  • the head includes: a head display screen and a network port for pluggable connection with the network port of the second docking terminal or with an external network.
  • the first docking terminal includes: a network port, a USB interface, a mechanical guide post, an electrical interface, a grounding interface, an indicator light interface, a power-on key interface, and an emergency-stop key interface, which form pluggable connections with the chassis and the torso respectively.
  • the second docking terminal includes: a network port, a USB interface, a mechanical guide post, an electrical interface, and a grounding interface, which form pluggable connections with the torso and the head respectively.
  • the power supply module includes an on-off controller, which is connected to the power-on key and to a relay, wherein the relay and the emergency-stop key installed on the torso are connected through the emergency-stop key interface of the first docking terminal.
  • the power-on key installed on the torso and the on-off controller are connected through the power-on key interface of the first docking terminal.
  • the second docking terminal further includes a power-on key interface and an emergency-stop key interface
  • the head is provided with a power-on key and an emergency-stop key
  • the emergency-stop key on the head is connected to the emergency-stop key interface of the second docking terminal.
  • the emergency stop key interface of the first docking terminal is connected to the relay of the power supply module
  • the power-on key of the head is connected to the power-on key interface of the first docking terminal through the power-on key interface of the second docking terminal, and then connected to the on-off controller of the power supply module.
  • the first docking terminal is embedded in the chassis and forms a pluggable connection with the protrusion on the bottom of the torso
  • the second docking terminal is embedded in the torso and forms a pluggable connection with the protrusion on the bottom of the head.
  • the robot further includes a controller including a head controller located on the head and at least one of the following: a robot controller located on the chassis and a torso controller located on the torso.
  • the head further includes at least one of the following devices connected to the head controller: a microphone, a head speaker, a speech recognition module, a steering servo, a camera, a 3D obstacle avoidance camera, and a card reader, wherein the 3D obstacle avoidance camera and the card reader are connected to the USB interface of the second docking terminal through their USB interfaces.
  • the torso further includes at least one of the following devices connected to the torso controller: a torso speaker, a 3D obstacle avoidance camera, and a card reader, wherein the 3D obstacle avoidance camera and the card reader are connected to the USB interface of the first docking terminal through their USB interfaces.
  • the electrical interface includes at least one of the following: a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface.
  • the chassis further includes a signal control module connected to the controller through a serial port, and the signal control module is connected to at least one of the following: an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, an indicator light drive circuit, and a battery communication interface.
  • the chassis further includes a motor driver, a motion motor and an encoder, and the motor driver and the controller are connected through a CAN bus.
  • the robot further includes a chassis indicator light, a torso indicator light, and a head indicator light respectively mounted on the chassis, the torso, and the head.
  • the microphone is an array of microphones.
  • the chassis is a wheeled or tracked chassis.
  • the obstacle avoidance module includes at least one of the following devices: lidar, GPS module, ultrasonic sensor, cliff sensor, 3D obstacle avoidance camera.
  • the power supply module includes at least one of the following: a 24V power supply module, a 12V power supply module, and a 5V power supply module, each of which includes an overcurrent protection module and a voltage conversion module.
  • the robot provided by the embodiments of the present disclosure is divided into three parts: a chassis, a torso, and a head.
  • the pluggable design of the robot facilitates transportation and quick assembly.
  • according to the robot's interface definitions, the robot can be equipped with different chassis, such as replacing a wheeled chassis with a tracked chassis to enhance terrain adaptability, or other hardware devices can be expanded on the robot's chassis or torso, such as removing the head and installing an infrared surveillance camera to carry out security patrols, or detaching the head alone to use it as a remote conference terminal.
  • this improves the modularity and hardware expandability of the robot and broadens its application scenarios.
  • FIGS. 1a and 1b are structural diagrams of an embodiment of a robot according to the present disclosure
  • FIG. 2a is a schematic diagram of signal control of the chassis part of an embodiment of a robot according to the present disclosure
  • FIGS. 2b and 2c are schematic diagrams of power supply control of the chassis part of an embodiment of a robot according to the present disclosure
  • FIG. 3 is a schematic diagram of a first docking terminal between a chassis and a torso of an embodiment of a robot according to the present disclosure
  • FIG. 4 is a schematic circuit diagram of a torso portion of an embodiment of a robot according to the present disclosure
  • FIG. 5 is a schematic diagram of a second docking terminal between the torso and the head of an embodiment of the robot according to the present disclosure
  • FIG. 6 is a schematic circuit diagram of a head portion of one embodiment of a robot according to the present disclosure.
  • FIG. 1a shows a physical view of the robot; the robot of the present disclosure is mainly intended for service applications.
  • the robot is divided into three modules: chassis 100 , torso 200 , and head 300 . And each module can be used independently.
  • the head of the robot can be removed and replaced with surveillance cameras, infrared cameras, etc. according to the connector interface definition between the head and the torso; the removed head, once connected to power and a network cable, can be used as an independent video conference terminal; the torso can be fitted with other types of chassis; and the chassis can also carry other forms of robot body.
  • the robot includes: a detachable chassis, a torso, and a head; the chassis and the torso are connected by plugging through a first docking terminal, and the torso and the head are connected by plugging through a second docking terminal, wherein the chassis includes: a power supply module, an obstacle avoidance module, a network module, and a network port and a USB interface for pluggable connection with the first docking terminal; the torso includes: a switch, a torso display screen, and a USB interface for pluggable connection with the first docking terminal, wherein the switch is connected to the network port of the first docking terminal and to the network port of the second docking terminal through different network ports;
  • the first docking terminal is embedded in the chassis and forms a pluggable connection with the protrusion on the bottom of the torso
  • the second docking terminal is embedded in the torso and forms a pluggable connection with the protrusion on the bottom of the head.
  • the docking terminals are embedded for protection.
  • the robot further includes a controller, which includes a head controller located on the head and at least one of the following: a robot controller located on the chassis and a torso controller located on the torso. If only the head controller is included, all other devices are directly or indirectly connected to the head controller.
  • the head controller recognizes the user's instruction and executes it.
  • the data collected by the sensors is also analyzed by the head controller, which controls the robot's movement. For example, the laser sensor sends point cloud data to the head controller, and after the head controller determines from its analysis that there is an obstacle, it controls the robot to turn.
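The obstacle-reaction logic described above can be sketched in a few lines. This is an illustrative example only, not the patent's implementation: the scan data format, sector width, and range threshold are all assumptions.

```python
def obstacle_ahead(scan, max_range=0.5, half_sector_deg=30.0):
    """Return True if any lidar return inside the forward sector is
    closer than max_range metres.

    scan: iterable of (angle_deg, range_m) pairs, 0 degrees = straight ahead.
    """
    for angle_deg, range_m in scan:
        if abs(angle_deg) <= half_sector_deg and range_m < max_range:
            return True
    return False

# A return at 10 degrees and 0.3 m falls inside the forward sector,
# so the controller would command an avoidance turn.
scan = [(-40.0, 2.1), (10.0, 0.3), (55.0, 1.8)]
command = "turn" if obstacle_ahead(scan) else "forward"
```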
  • the motion control is realized by the chassis controller, and the head controller mainly realizes functions such as voice acquisition and recognition.
  • the torso controller can control the torso display screen and the torso speaker, and can also control the card reader to collect user information for identification.
  • this type of robot is mainly used for services.
  • the height of the robot is close to the average height of a human.
  • the microphone and speaker can be installed on the head, which is convenient for collecting the human voice and outputting the response.
  • the head display screen is installed on the head and can be used to display facial expressions, such as smiling faces. Other prompt information can also be displayed; for example, if the robot does not understand the user's question, a question mark can be shown.
  • the "first" and "second" here carry no special meaning: "first" only indicates that the component is installed on the head, and "second" that it is installed on the torso. Since the head can be detached and used as a stand-alone conference terminal, the head can also include a microphone and a speaker.
  • the head part may also serve as a video conference terminal, and therefore, the head part may include a camera.
  • the camera can also be used to collect face images for face recognition, so as to determine the user's identity and grant corresponding permissions. For example, an unregistered user can do regular Q&A but cannot make the bot move.
  • the camera may be rotatable rather than fixed on the head, so that it can be turned toward the sound source according to the sound source's position.
  • the camera is fixed on the front of the head (the display side of the head), and if the microphone detects the position of the sound source, the head can be rotated to face the sound source. At this point, both the display screen and the camera are facing the user, the frontal face image of the user can be captured, and the user can see the displayed content from the front.
  • the head and the torso are connected through movable joints.
  • the robot can make corresponding head movements, such as nodding or shaking its head, according to the content of the answer.
  • the movable joint controls the head movement through the servo.
  • the head may further include a speech recognition module.
  • Offline speech recognition is available: the user's voice can be sent to a server for recognition, or recognized locally offline. The robot can detect the strength of the network signal and, if the signal is weak, automatically switch to local offline speech recognition.
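The network-aware fallback might look like the following sketch; the RSSI threshold and the recognizer names are assumptions for illustration, not values from the patent.

```python
def choose_recognizer(signal_dbm, threshold_dbm=-75):
    """Use cloud recognition when the network signal is strong enough,
    otherwise fall back to the local offline engine.

    signal_dbm: received signal strength (e.g. Wi-Fi RSSI);
    higher (less negative) means stronger.
    """
    return "cloud" if signal_dbm >= threshold_dbm else "offline"
```

With these assumed values, a strong signal of -60 dBm selects cloud recognition, while a weak -85 dBm signal falls back to the offline engine.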
  • after the head controller obtains a face image through the camera, it can locate the user's position relative to the robot and control the head to rotate so that the head display screen faces the user. If the required head rotation angle exceeds a predetermined value, the robot can be rotated as a whole so that the head display screen faces the user.
  • the number of display screens on the head may be greater than one. If the head is a cuboid, up to 4 displays can be arranged on it, serving 4 users at the same time. To prevent interference, voice dialogue may be disabled in this mode, and services operated through the displays' touch input instead.
  • the display screen facing the user can be lit while the other displays on the head remain off. In this way, a user-facing effect is achieved without the robot's head turning, or with only a small turn.
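Choosing which of the head's screens to light could be done by mapping the user's bearing to the nearest screen. A minimal sketch, assuming four evenly spaced screens with screen 0 facing straight ahead:

```python
def active_display(bearing_deg, n_screens=4):
    """Index of the screen closest to a user at bearing_deg
    (0 = straight ahead, angles increase counterclockwise)."""
    sector = 360.0 / n_screens
    return int(((bearing_deg % 360.0) + sector / 2) // sector) % n_screens
```

For example, a user straight ahead maps to screen 0, a user at 90 degrees to screen 1, and a user at -90 degrees to screen 3.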
  • the number of cameras on the head is greater than one. If the head is a cuboid, up to 4 cameras can be arranged on the head to serve users in different directions.
  • the microphone may be a microphone array, and the position of the user relative to the robot can be located through the microphone array, and then the head is controlled to rotate so that the head display screen faces the user. If the head rotation angle exceeds a predetermined value, the robot can be rotated as a whole so that the head display screen faces the user.
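The turn-head-or-turn-body decision repeated in the passages above can be expressed as a small rule; the 60-degree head travel limit used here is an assumed value, not one stated in the patent.

```python
def face_user(bearing_deg, head_limit_deg=60.0):
    """Split a user bearing (degrees, 0 = straight ahead) into a head
    turn and a whole-robot turn. If the head alone can reach the
    bearing, the base stays put; otherwise the base does the turning."""
    if abs(bearing_deg) <= head_limit_deg:
        return bearing_deg, 0.0
    return 0.0, bearing_deg
```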
  • the network port of the head can be connected to the external network in addition to being connected to the torso.
  • when the head is removed and used alone as a conference terminal, plugging a network cable into the network port allows a network teleconference to be held.
  • the head may also include a 24V power supply module, a 12V power supply module, and a 5V power supply module
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • the head portion may also include any one of a 24V power supply module, a 12V power supply module, and a 5V power supply module.
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • the head may also include any two of a 24V power supply module, a 12V power supply module, and a 5V power supply module.
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • a head indicator light may also be installed on the head. When the head circuit is connected, the head indicator lights up.
  • the indicator light on the head may be a three-color LED indicator, with different colors indicating different states. For example, the red light indicates that the circuit is connected, the green light that the network is connected, and the blue light that the voice recognition function is activated.
  • the head display screen may be a touch screen.
  • a number of question and answer options can be displayed on the head display for the user to choose from.
  • a touch screen can be used for fingerprint recognition. The user's identity can be identified and instructions matching that identity executed. For unregistered users, the size of the fingerprint can be used to determine whether the user is an adult or a child, and different response styles adopted for different types of users; for example, questions from children can be answered in the voices of animated characters.
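The adult/child heuristic could be sketched as below. The width threshold and the voice-profile names are invented for illustration; the patent does not specify them.

```python
def user_category(fingerprint_width_mm, adult_threshold_mm=14.0):
    """Rough classification from fingerprint size (the threshold is an
    assumption, not a value from the patent)."""
    return "adult" if fingerprint_width_mm >= adult_threshold_mm else "child"

def voice_profile(category):
    """Children are answered in animated-character voices."""
    return "cartoon" if category == "child" else "standard"
```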
  • the head may further include various sensors, for example, an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, and the like.
  • Ultrasonic sensors supplement the lidar's blind zones for obstacle avoidance; cliff sensors detect steps in the ground to prevent the robot from falling; temperature and humidity sensors and the PM2.5 sensor monitor the operating environment, and body temperature can also be measured. While identifying the user's identity, the robot can measure body temperature and conduct epidemic-prevention inspections.
  • The PM2.5 sensor detects air quality. Robots for different purposes can be equipped with different sensors; the sensors are pluggable by design and can be combined in any combination.
  • the head may further include a 3D obstacle avoidance camera, which is connected to the robot controller of the chassis through a USB interface.
  • the 3D obstacle avoidance camera can be an infrared camera, which can measure depth information for obstacle avoidance.
  • the head may further include a card reader.
  • the card reader is connected with the head controller through the USB interface. Through the switch of the torso, the read information is transmitted to the robot controller of the chassis.
  • the card reader can be an RFID card reader that reads ID card information, or can also be used to read barcode information.
  • the user can also wear a special wristband, and the card reader reads the information of the wristband to determine the user's identity.
  • an emergency stop key may be installed on the head.
  • the emergency stop key is connected to the relay in the power supply module of the chassis.
  • the second docking terminal includes an emergency stop key interface.
  • the emergency stop key of the head is connected to the emergency stop key interface of the first docking terminal through the emergency stop key interface of the second docking terminal, and then connected to the relay of the power supply module.
  • a power-on button may be installed on the head.
  • the second docking terminal includes a power-on key interface.
  • the power-on key of the head is connected to the power-on key interface of the first docking terminal through the power-on key interface of the second docking terminal, and then connected to the power-on/off controller of the power supply module.
  • the head may further include a GPS module.
  • the GPS module is used to locate the position of the robot.
  • the LiDAR can be configured with GPS for precise positioning.
  • the head portion may further include a solar cell panel.
  • the identification of the robot, which may be shown on the head display screen, can serve as the wake-up word for voice recognition. For example, if the robot's identification is "007", the user only needs to say "zero zero seven" to wake the robot for service. Different robots can be distinguished by their identifications, so a specific robot can be selected for service.
  • the head controller may also record the voiceprint feature of the user who wakes up the robot, and bind the awakened user as the master.
  • the master's commands are picked out from the received voice commands of multiple people, so as to avoid interference from other users' commands.
  • user A wakes up the robot 007, and 007 later receives user A's instruction to go forward and user B's instruction to go back.
  • the robot 007 can recognize the master user A and only obey the instructions of the user A.
  • the user can also issue an unbinding instruction to release the robot resources.
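The wake-word binding and command filtering described in this passage can be modelled as a small session object. This is a sketch under the assumption that the voiceprint module yields a stable speaker id:

```python
class WakeSession:
    """Bind the waking user as master and filter later commands."""

    def __init__(self):
        self.master = None

    def wake(self, speaker_id):
        # The user who wakes the robot is bound as the master.
        self.master = speaker_id

    def accept(self, speaker_id, command):
        # Obey only the bound master; ignore everyone else.
        return command if speaker_id == self.master else None

    def unbind(self):
        # The master releases the robot for other users.
        self.master = None

# User A wakes robot "007"; user B's later command is ignored.
s = WakeSession()
s.wake("userA")
```

With this sketch, `s.accept("userA", "forward")` returns the command while `s.accept("userB", "back")` returns None, matching the scenario in the text.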
  • the torso may further include a microphone, used to capture the user's voice.
  • the torso may further include a torso display screen.
  • the size of the torso display is larger than the size of the head display.
  • the head display is mainly used to display expressions.
  • the torso display can display some textual information.
  • the torso may further include a torso speaker.
  • the torso has separate microphones and speakers, so it can be used independently as a conference terminal.
  • the placement of the torso speakers relative to the head speakers can create surround sound, enhancing the audio effect.
  • the torso may further include a network port or a USB interface for external connections, allowing additional electronics to be attached.
  • the torso may also serve as a video conference terminal, and therefore, the torso may include a camera.
  • the camera can also be used to collect face images for face recognition, so as to determine the user's identity and grant corresponding permissions. For example, an unregistered user can do regular Q&A but cannot make the bot move.
  • cameras may be arranged at different heights of the torso to facilitate the collection of face images of users with different heights.
  • the camera of the torso is on the same side as the display screen of the torso.
  • the number of display screens on the torso may be greater than one. If the torso is a cuboid, up to 4 displays can be arranged on it, serving 4 users at the same time. To prevent interference, voice dialogue may be disabled in this mode, and services operated through the displays' touch input instead.
  • the torso display screen facing the user can be lit while the other torso displays remain off. This achieves a user-facing effect without the robot's torso turning, or with only a small turn.
  • the number of cameras of the torso is greater than one. If the torso is a cuboid, up to 4 cameras can be arranged on the torso to serve users in different directions.
  • the torso may further include arms and palms.
  • the robot can make corresponding torso actions according to the content of the answer, such as shaking hands, applauding, hugging, etc.
  • a fingerprint collector can be set on the palm, and the user's fingerprint information can be collected by shaking hands for identification.
  • the height of the torso can be adjusted.
  • the height can be adjusted manually, or the user's height can be estimated from the height of the located sound source and the torso height adjusted automatically. This facilitates touch operation for the user, especially short users such as children, and allows voice and face images to be captured more accurately.
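Automatic height adjustment could map the estimated user height to a display height within the lift's travel. The chest-height ratio and travel limits below are assumptions for illustration:

```python
def torso_height_for(user_height_cm, min_cm=90.0, max_cm=130.0):
    """Place the torso display roughly at the user's chest height,
    clamped to the torso lift's assumed mechanical travel."""
    target = user_height_cm * 0.75  # assumed chest-height ratio
    return max(min_cm, min(max_cm, target))
```

A 120 cm child would get the lowest setting, while a tall adult saturates at the top of the travel.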
  • the torso may further include a speech recognition module.
  • Offline speech recognition is available: the user's voice can be sent to a server for recognition, or recognized locally offline. The robot can detect the strength of the network signal and, if the signal is weak, automatically switch to local offline speech recognition.
  • the torso controller may locate the position of the user relative to the robot, and then control the torso to rotate so that the torso display screen faces the user. If the rotation angle of the torso exceeds a predetermined value, the robot can be rotated as a whole so that the torso display faces the user.
  • the microphone of the torso may be a microphone array, and the position of the user relative to the robot can be located through the microphone array, and then the torso is controlled to rotate so that the torso display screen faces the user. If the rotation angle of the torso exceeds a predetermined value, the robot can be rotated as a whole so that the torso display faces the user.
  • the network port of the torso can be connected to the external network in addition to its internal connections.
  • the torso may also include a 24V power supply module, a 12V power supply module, and a 5V power supply module
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • the torso may also include any one of a 24V power supply module, a 12V power supply module, and a 5V power supply module.
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • the torso may also include any two of a 24V power supply module, a 12V power supply module, and a 5V power supply module.
  • the 24V power supply module includes an overcurrent protection module and a 24V voltage conversion module
  • the 12V power supply module includes an overcurrent protection module and a 12V voltage conversion module
  • the 5V power supply module includes an overcurrent protection module and a 5V voltage conversion module.
  • a torso indicator light may also be installed on the torso. The torso indicator lights up when the torso circuit is connected.
  • the torso indicator light may be a three-color LED indicator light, with different colors indicating different states. For example, the red light indicates that the circuit is connected, the green light that the network is connected, and the blue light that the voice recognition function is activated.
  • the torso display screen may be a touch screen.
  • a number of question and answer options can be displayed on the torso display for the user to choose from.
  • a touch screen can be used for fingerprint recognition. The user's identity can be identified and instructions matching that identity executed. For unregistered users, the size of the fingerprint can be used to determine whether the user is an adult or a child, and different response styles adopted for different types of users; for example, questions from children can be answered in the voices of animated characters.
  • the torso may further include various sensors, for example, an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, and the like.
  • Ultrasonic sensors supplement the lidar's blind zones for obstacle avoidance; cliff sensors detect steps in the ground to prevent the robot from falling; temperature and humidity sensors and the PM2.5 sensor monitor the operating environment, and body temperature can also be measured. While identifying the user's identity, the robot can measure body temperature and conduct epidemic-prevention inspections.
  • The PM2.5 sensor detects air quality. Robots for different purposes can be equipped with different sensors; the sensors are pluggable by design and can be combined in any combination.
  • the torso may further include a 3D obstacle avoidance camera, which is connected to the robot controller of the chassis through a USB interface.
  • the 3D obstacle avoidance camera can be an infrared camera, which can measure depth information for obstacle avoidance.
  • the torso may further include a card reader.
  • the card reader is connected to the torso controller via the USB interface. Through the switch of the torso, the read information is transmitted to the robot controller of the chassis.
  • the card reader can be an RFID card reader for reading ID card information, or it can also be used to read barcode information.
  • a dedicated wristband can also be assigned to the user, and the card reader reads the information of the wristband to determine the user's identity.
  • an emergency stop button may be installed on the torso.
  • the emergency stop key is connected to the relay in the power supply module of the chassis.
  • a power-on button may be installed on the torso.
  • the torso may further include a GPS module.
  • the GPS module is used to locate the position of the robot.
  • the LiDAR can be configured with GPS for precise positioning.
  • the torso may also include solar panels.
  • the torso display screen may output the identification of the robot as a wake-up word for voice recognition, for example, the identification of the robot is "007". The user only needs to say "zero zero seven" to wake up the robot to serve. Different robots can be distinguished by identification, and the robots used for service can be selected in a targeted manner.
  • the torso controller may also record the voiceprint feature of the user who wakes up the robot, and bind the awakened user as the master.
  • the master's commands are picked out from the received voice commands of multiple people, so as to avoid interference from other users' commands.
  • user A wakes up the robot 007, and 007 later receives user A's instruction to go forward and user B's instruction to go back.
  • the robot 007 can recognize the master, user A, and obeys only user A's instructions.
  • the user can also issue an unbinding instruction to release the robot resources.
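The wake-word and master-binding flow described in the bullets above can be sketched in Python. This is an illustrative toy, not the patent's implementation: the voiceprint is stood in for by an opaque speaker ID, and the class name `RobotSession` and its methods are invented for this example.

```python
class RobotSession:
    """Minimal sketch of wake-word matching and master binding (illustrative only)."""

    # Spoken digit words used to build the wake-up phrase from the robot identifier.
    DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
                   "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.master = None  # speaker ID bound when the robot is woken

    def wake_phrase(self):
        # "007" -> "zero zero seven"
        return " ".join(self.DIGIT_WORDS[d] for d in self.robot_id)

    def hear(self, speaker, utterance):
        """Return the command to execute, or None if the utterance is ignored."""
        if self.master is None:
            if utterance.strip().lower() == self.wake_phrase():
                self.master = speaker  # bind the waking user as master
            return None
        if speaker != self.master:
            return None  # filter out other users' commands
        if utterance == "unbind":
            self.master = None  # release the robot's resources
            return None
        return utterance
```

With this sketch, once user A says the wake phrase, commands from user B are dropped until A unbinds.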
  • the chassis may further include a lidar.
  • Lidar is used for laser point cloud recognition of obstacles and construction of maps. Accordingly, the chassis is movable.
  • the chassis can be wheeled or tracked.
  • the chassis is replaceable.
  • the wheeled chassis or crawler chassis can be replaced by plugging and unplugging.
  • the chassis may further include various sensors, for example, an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, and the like.
  • Ultrasonic sensors supplement the lidar's blind area for obstacle avoidance; cliff sensors detect steps on the ground to prevent the robot from falling; temperature and humidity sensors and the PM2.5 sensor monitor the operating environment, and body temperature can also be measured, so the robot can perform an epidemic-prevention temperature check while identifying the user.
  • The PM2.5 sensor detects air quality. Robots for different purposes can be equipped with different sensors. The sensors use a pluggable design and can be combined freely.
  • a chassis indicator light may also be installed on the chassis.
  • the chassis indicator light illuminates when the chassis circuit is connected.
  • the chassis indicator light may be a three-color LED, with different colors indicating different states; for example, red indicates the circuit is powered, green indicates the network is connected, and blue indicates the motor is running.
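One plausible way to drive such a tri-color indicator is a small priority function. The priority order below (motor over network over bare power) is an assumption of this sketch, not something the document specifies.

```python
def indicator_color(circuit_on, network_up, motor_running):
    """Pick the tri-color LED state for the chassis indicator.

    Priority is an assumption for this sketch: a running motor is the
    most dynamic condition, then network connectivity, then bare power.
    """
    if motor_running:
        return "blue"
    if network_up:
        return "green"
    if circuit_on:
        return "red"
    return "off"
```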
  • the chassis further includes a motor driver, a motion motor and an encoder, and the motor driver and the robot controller are connected through a CAN bus.
  • after the robot controller receives an external movement command, it controls the motor driver to drive the motion motor, which turns the wheels or tracks and moves the robot.
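The movement-command path (robot controller to motor driver over the CAN bus) can be illustrated with a toy frame codec. The arbitration ID, payload layout, and units below are invented for this sketch; the patent does not specify a CAN protocol.

```python
import struct

# Hypothetical CAN frame layout for a differential-drive motion command:
# arbitration ID 0x101, payload = two little-endian int32 wheel speeds in mm/s.
MOTION_CMD_ID = 0x101

def encode_motion_cmd(left_mm_s, right_mm_s):
    """Pack left/right wheel speeds into an 8-byte CAN payload."""
    return MOTION_CMD_ID, struct.pack("<ii", left_mm_s, right_mm_s)

def decode_motion_cmd(payload):
    """Inverse of encode_motion_cmd, as the motor driver would see it."""
    return struct.unpack("<ii", payload)
```

On a real bus, the ID and payload would be handed to a CAN library (for example, python-can) rather than used directly.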
  • the chassis may further include a GPS module.
  • the GPS module is used to locate the position of the robot.
  • especially when operating outdoors, GPS can be used together with the lidar for precise positioning.
  • the chassis may also be in the form of two legs, with movable knee joints, and can go up and down stairs.
  • when the cliff sensor detects a step, it notifies the robot controller.
  • the robot controller then controls the motor driver to drive the motion motors so that the robot lifts its legs and climbs the step.
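The cliff-sensor-to-step-climbing reaction can be sketched as a simple decision function. The action names and the climbable-height threshold are assumptions of this example, not values from the document.

```python
def step_reaction(cliff_triggered, step_height_mm, max_step_mm=180):
    """Decide the legged chassis's reaction when the cliff sensor fires.

    Thresholds are illustrative; a real controller would close the loop
    over the knee-joint motors via the motor driver.
    """
    if not cliff_triggered:
        return ["continue"]
    if step_height_mm <= max_step_mm:
        # climbable step: stop, lift a leg, place the foot, shift weight
        return ["stop", "lift_leg", "place_foot", "shift_weight"]
    return ["stop", "back_off"]  # too tall: treat as a drop or obstacle
```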
  • the chassis may further include solar panels.
  • the chassis may further include a camera for photographing the ground. The captured ground image is compared with a standard image; if the matching degree is below a predetermined value, the ground is stained. The stain's location can be reported to cleaning staff.
  • the chassis may further include a cleaning device, which can clean the floor when the camera detects a stain.
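The ground-stain check (comparing a captured image with a standard image and reporting the stain's location when the matching degree falls below a threshold) can be sketched in pure Python over small grayscale grids. The matching metric, tolerance, and threshold here are illustrative choices, not values from the patent.

```python
def matching_degree(image, reference, tolerance=10):
    """Fraction of pixels whose grayscale value is within `tolerance`
    of the reference image (both given as equal-sized 2D lists)."""
    total = hits = 0
    for row_img, row_ref in zip(image, reference):
        for a, b in zip(row_img, row_ref):
            total += 1
            if abs(a - b) <= tolerance:
                hits += 1
    return hits / total

def find_stain(image, reference, threshold=0.9, tolerance=10):
    """Return the (row, col) centroid of mismatching pixels if the match
    falls below `threshold`, else None (ground considered clean)."""
    if matching_degree(image, reference, tolerance) >= threshold:
        return None
    rows = cols = n = 0
    for i, (row_img, row_ref) in enumerate(zip(image, reference)):
        for j, (a, b) in enumerate(zip(row_img, row_ref)):
            if abs(a - b) > tolerance:
                rows += i
                cols += j
                n += 1
    return (rows / n, cols / n)
```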
  • the chassis 100 of the robot may include a lidar 101, sensors 102 (optional; may include an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, etc.), a chassis indicator light 103 (optional), and a charging pole 104;
  • the robot torso 200 may include a torso display screen 201, a torso indicator light 202 (optional), a 3D obstacle avoidance camera 203 (optional), and a card reader 204 (optional);
  • the robot's head 300 may include a movable joint 301, a microphone 302, a head display 303, and a camera 304, and may also include an external-device interface 305.
  • the plug connection between the chassis 100 and the torso 200 is realized through the first docking terminal 12 .
  • the plug connection between the torso 200 and the head 300 is realized through the second docking terminal 23 .
  • Figures 2a and 2b show schematic diagrams of the robot's chassis circuits.
  • the chassis integrates signal control (shown in Figure 2a) and power control (shown in Figures 2b, 2c).
  • the chassis includes lidar, which can perform laser point cloud recognition for map construction of the robot.
  • the chassis may also include an IMU (Inertial Measurement Unit) for self-positioning of the robot. Lidar and IMU together constitute the robot's navigation control system.
  • the chassis of the robot can be fixed or mobile. If mobile, the chassis also includes motion motors to control wheel or track rotation.
  • the robot controller can send motion commands to the motor driver through the CAN bus. After the driver receives a command, it drives the motor to rotate, realizing the robot's forward, backward, and steering operations.
  • the encoder on the motor monitors the motor's rotation state, realizing closed-loop control of the motor.
  • the chassis also includes a signal control module connected with the power supply module, and the signal control module is connected with at least one of the following modules: an ultrasonic sensor, a cliff sensor, an indicator light drive circuit, a battery communication interface, a temperature and humidity sensor, and a PM2.5 sensor.
  • Ultrasonic sensors supplement the lidar's blind spots for obstacle avoidance; cliff sensors detect steps on the ground to prevent the robot from falling; temperature and humidity sensors and the PM2.5 sensor monitor the operating environment; the indicator-light drive circuit drives the indicator lights to display the robot's working state; and the battery communication interface collects the battery's charge level, temperature, current, and other charge/discharge information.
  • the signal control module can communicate with the robot controller through the RS232 interface, and the robot controller can be connected with the network module through the network port.
  • the network module can include two modes of wired routing and wireless routing. Through the network module, the robot can be accessed remotely.
  • the robot controller reserves a network port and a USB interface for the torso.
  • FIG. 2b shows a schematic diagram of the power control section.
  • the power supply module may include an on-off controller, which is respectively connected to the power-on button, the relay, the 24V power supply module, the 12V power supply module, the 5V power supply module, the battery, the charging plug and/or the charging pole.
  • the robot can be charged from a charging pile (via the charging pole), from a plug (the charging plug), or both, and can work while charging.
  • the torso of the robot can be equipped with a power-on button to switch the robot on and off; power to the robot's motors is controlled by a relay, and an emergency stop switch can be connected in series with the relay's logic coil. When the emergency stop is pressed, motor power is cut off and the robot's drive system is forced into a powered-off state.
  • the robot provides three power supply circuits: 12V, 24V, 5V.
  • the battery input is converted into 5V, 12V, and 24V supplies by the 5V, 12V, and 24V voltage-conversion modules respectively, powering all of the robot's equipment. Each supply circuit has an overcurrent-protection module, and the 5V, 12V, and 24V power buses run through the chassis, torso, and head modules, as shown in Figure 2c.
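The per-rail overcurrent protection on the three supply circuits described above can be modeled with a tiny class. The voltage and current limits below are invented for the sketch; the document only states that each circuit has an overcurrent-protection module.

```python
class ProtectedRail:
    """Sketch of one supply circuit: a voltage-conversion module guarded by
    an overcurrent-protection module. Limits are illustrative values."""

    def __init__(self, volts, max_amps):
        self.volts = volts
        self.max_amps = max_amps
        self.tripped = False

    def draw(self, amps):
        """Request a load current; trips the rail if over the limit."""
        if amps > self.max_amps:
            self.tripped = True
        return 0.0 if self.tripped else self.volts

# the three buses that run through the chassis, torso, and head
rails = {v: ProtectedRail(v, a) for v, a in ((5, 4.0), (12, 6.0), (24, 10.0))}
```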
  • Figure 3 shows the first docking terminal between the robot chassis and the torso, and the robot chassis and the torso are connected through the first docking terminal.
  • the chassis and torso are designed to be pluggable.
  • the first docking terminal of the electrical connection between the chassis and the torso includes a mechanical guide column, a 24V electrical interface, a 12V electrical interface, a 5V electrical interface, a grounding interface, an indicator light interface, a USB interface, a network port, a power-on key interface, and an emergency stop key interface.
  • Both ends of the first docking terminal have mechanical guide posts, which provide mechanical guidance when the chassis and the torso are docked, protecting the terminal.
  • the first docking terminal includes any one of a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface.
  • the specific implementation is consistent with the power supply modules used in the head, torso, and chassis.
  • the first docking terminal includes any two electrical interfaces among a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface.
  • the specific implementation is consistent with the power supply modules used in the head, torso, and chassis.
  • FIG. 4 shows the circuit diagram of the torso of the robot.
  • the torso may include a torso controller (which can be an Android motherboard), a torso display screen, a card reader (optional), a torso indicator light (optional), a 3D obstacle avoidance camera (optional), a switch, and torso audio, and communicates with the chassis through the network port and USB.
  • the torso controller can drive the torso display screen to show scene-specific content, and play sound through the torso audio.
  • the torso controller communicates with the card reader over USB; card swipes are used for identification to obtain robot permissions.
  • the 3D obstacle avoidance camera (which can be a depth-measuring camera such as an infrared camera) is connected to the chassis controller through USB for image-based obstacle avoidance.
  • the torso is equipped with a switch that splits one network port into ports for the torso and the head interface.
  • Figure 5 shows the second docking terminal between the robot's torso and head.
  • the second docking terminal may include mechanical guide posts, a 24V electrical interface, a 12V electrical interface, a 5V electrical interface, a ground interface, and a network port. Both ends of the second docking terminal have mechanical guide posts, which provide mechanical guidance when the torso and the head are docked, protecting the terminal.
  • the second docking terminal includes any one of a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface.
  • the specific implementation is consistent with the power supply modules used in the head, torso, and chassis.
  • the second docking terminal includes any two electrical interfaces among a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface.
  • the specific implementation is consistent with the power supply modules used in the head, torso, and chassis.
  • FIG. 6 shows a schematic circuit diagram of the robot's head.
  • the head can include a microphone, a camera, a head controller, a head display (also called an expression screen), and head audio, and communicates with the chassis and the torso through the network port.
  • the robot head can be fixed or movable. If movable, servos are installed.
  • the head controller can be an Android motherboard, which is used to drive the head display, and can display expressions to interact with the user.
  • the head may also include a speech module for performing speech recognition and making corresponding actions.
  • the head can also send the voice collected by the microphone to the cloud server through the network for voice recognition.
  • the camera can capture the user's facial features and can also be used for face recognition.
  • person tracking with the robot's head display is realized by controlling the servos.
  • the head of the robot can be removed separately; once provided with power and a network port, it can be used on its own for functions such as remote conferencing and an artificial-intelligence assistant.
  • the microphone can take the form of a microphone array, which can locate the direction of the sound source so that the head display faces the sound source.
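Locating a sound source with a microphone array usually starts from the inter-microphone delay. The sketch below estimates that delay for two channels by brute-force cross-correlation and converts it to a bearing. It is a toy under stated assumptions: real arrays use more microphones and robust estimators such as GCC-PHAT, and the sample rate and microphone spacing here are invented for the example.

```python
import math

def best_lag(left, right, max_lag):
    """Lag (in samples) of `right` relative to `left` that maximizes their
    cross-correlation; a positive lag means the sound reached the left
    microphone first."""
    def corr(lag):
        s = 0
        for i, x in enumerate(left):
            j = i + lag
            if 0 <= j < len(right):
                s += x * right[j]
        return s
    return max(range(-max_lag, max_lag + 1), key=corr)

def arrival_angle(lag_samples, fs=16000, mic_distance=0.1, c=343.0):
    """Convert the delay to a bearing (radians) from the array's broadside,
    for an assumed sample rate, mic spacing, and speed of sound."""
    x = max(-1.0, min(1.0, c * lag_samples / (fs * mic_distance)))
    return math.asin(x)
```

The head display can then be turned toward the estimated bearing.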
  • the invention is a modular service robot, which divides the service robot into three parts: a chassis, a torso and a head, and realizes the pluggable design of the robot, which is convenient for transportation and quick assembly.
  • Each module can be used alone, which enhances the application scenario of the robot.
  • according to the robot's interface definitions, the robot can be equipped with different chassis, such as replacing the wheeled chassis with a crawler chassis to enhance terrain adaptability, or other hardware can be added to the chassis or torso, such as removing the head and installing infrared and surveillance cameras for security-patrol tasks, or removing the head separately for use as a remote-conference terminal.
  • the invention improves the modularity and hardware expandability of the robot, and increases the application scene of the robot.


Abstract

A robot, comprising a detachable chassis (100), torso (200), and head (300). The chassis (100) and the torso (200) are pluggably connected through a first docking terminal (12), and the torso (200) and the head (300) are pluggably connected through a second docking terminal (23). The chassis (100) comprises: a power supply module, an obstacle-avoidance module, a network module, and a network port and USB interface for pluggable connection with the first docking terminal (12). The torso (200) comprises: a switch, a torso display screen (201), and a USB interface for pluggable connection with the first docking terminal (12), wherein the switch is pluggably connected, through different network ports, to the network port of the first docking terminal (12) and the network port of the second docking terminal (23), respectively. The head (300) comprises: a head display screen (303), and a network port for pluggable connection with the network port of the second docking terminal (23) or with an external network. This realizes a pluggable robot design that is easy to transport and quick to assemble, and the chassis, torso, and head modules can each be used independently.

Description

Robot
This patent application claims priority to Chinese patent application No. 202010831200.7, filed on August 18, 2020 by applicant 京东数科海益信息科技有限公司 under the invention title "Robot"; the entire content of that application is incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a robot.
Background
With the development of artificial intelligence, service robots have gradually appeared in crowded places such as shopping malls, bank halls, and hospital halls, providing guidance, question answering, business introductions, and other services. Existing service robots are usually one-piece machines: the robot provides services as a whole and has poor functional extensibility.
The main drawbacks of current service robots are: 1. a single function and poor hardware extensibility; 2. the robot is used as a whole, and its components cannot be used independently, which reduces the scenarios where the robot can be applied.
Summary
Embodiments of the present disclosure propose a robot comprising a detachable chassis, torso, and head. The chassis and the torso are pluggably connected through a first docking terminal, and the torso and the head are pluggably connected through a second docking terminal. The chassis comprises: a power supply module, an obstacle-avoidance module, a network module, and a network port and USB interface for pluggable connection with the first docking terminal. The torso comprises: a switch, a torso display screen, and a USB interface for pluggable connection with the first docking terminal, wherein the switch is pluggably connected, through different network ports, to the network port of the first docking terminal and the network port of the second docking terminal, respectively. The head comprises: a head display screen, and a network port for pluggable connection with the network port of the second docking terminal or with an external network.
In some embodiments, the first docking terminal includes, pluggably connected to the chassis and the torso respectively: a network port, a USB interface, mechanical guide posts, electrical interfaces, a grounding interface, an indicator light interface, a power-on key interface, and an emergency stop key interface; the second docking terminal includes, pluggably connected to the torso and the head respectively: a network port, a USB interface, mechanical guide posts, electrical interfaces, and a grounding interface.
In some embodiments, the power supply module includes an on/off controller connected respectively to a power-on key and a relay, wherein the relay is pluggably connected, through the emergency stop key interface of the first docking terminal, to an emergency stop key mounted on the torso, and a power-on key mounted on the torso is pluggably connected to the on/off controller through the power-on key interface of the first docking terminal.
In some embodiments, the second docking terminal further includes a power-on key interface and an emergency stop key interface; the head is fitted with a power-on key and an emergency stop key; the head's emergency stop key is connected through the emergency stop key interface of the second docking terminal to the emergency stop key interface of the first docking terminal and then to the relay of the power supply module, and the head's power-on key is connected through the power-on key interface of the second docking terminal to the power-on key interface of the first docking terminal and then to the on/off controller of the power supply module.
In some embodiments, the first docking terminal is embedded in the chassis and pluggably connects with a protrusion on the bottom of the torso, and the second docking terminal is embedded in the torso and pluggably connects with a protrusion on the bottom of the head.
In some embodiments, the robot further includes controllers, including a head controller located in the head and at least one of the following: a robot controller located in the chassis and a torso controller located in the torso.
In some embodiments, the head further includes at least one of the following devices connected to the head controller: a microphone, head audio, a speech recognition module and servos, a camera, a 3D obstacle-avoidance camera, and a card reader, wherein the 3D obstacle-avoidance camera and the card reader are pluggably connected through USB interfaces to the USB interface of the second docking terminal.
In some embodiments, the torso further includes at least one of the following devices connected to the torso controller: torso audio, a 3D obstacle-avoidance camera, and a card reader, wherein the 3D obstacle-avoidance camera and the card reader are pluggably connected through USB interfaces to the USB interface of the first docking terminal.
In some embodiments, the electrical interfaces include at least one of: a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface.
In some embodiments, the chassis further includes a signal control module connected to the controller through a serial port, the signal control module being connected to at least one of the following modules: an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, an indicator-light drive circuit, and a battery communication interface.
In some embodiments, the chassis further includes a motor driver, motion motors, and encoders, the motor driver being connected to the controller through a CAN bus.
In some embodiments, the robot further includes a chassis indicator light, a torso indicator light, and a head indicator light mounted on the chassis, torso, and head respectively.
In some embodiments, the microphone is a microphone array.
In some embodiments, the chassis is a wheeled chassis or a tracked chassis.
In some embodiments, the obstacle-avoidance module includes at least one of the following devices: a lidar, a GPS module, an ultrasonic sensor, a cliff sensor, and a 3D obstacle-avoidance camera.
In some embodiments, the power supply module includes at least one of: a 24V power supply module, a 12V power supply module, and a 5V power supply module, each power supply module comprising an overcurrent-protection module and a voltage-conversion module.
The robot provided by embodiments of the present disclosure is divided into three parts, a chassis, a torso, and a head, realizing a pluggable design that is convenient to transport and quick to assemble; the three modules can each be used independently, expanding the robot's application scenarios. According to the robot's interface definitions, the robot can carry different chassis, for example replacing the wheeled chassis with a tracked chassis to enhance terrain adaptability, or other hardware can be added to the chassis or torso, for example removing the head and installing infrared and surveillance cameras for security-patrol tasks, or removing the head separately for use as a remote-conference terminal. The invention improves the robot's modularity and hardware extensibility and increases its application scenarios.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Figures 1a and 1b are structural diagrams of one embodiment of the robot according to the present disclosure;
Figure 2a is a signal-control schematic of the chassis of one embodiment of the robot according to the present disclosure;
Figures 2b and 2c are power-control schematics of the chassis of one embodiment of the robot according to the present disclosure;
Figure 3 is a schematic diagram of the first docking terminal between the chassis and the torso of one embodiment of the robot according to the present disclosure;
Figure 4 is a circuit schematic of the torso of one embodiment of the robot according to the present disclosure;
Figure 5 is a schematic diagram of the second docking terminal between the torso and the head of one embodiment of the robot according to the present disclosure;
Figure 6 is a circuit schematic of the head of one embodiment of the robot according to the present disclosure.
Detailed Description
The present disclosure is described in further detail below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another. The present disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Figure 1a shows a photograph of the robot; the robot of the present invention is mainly used for service. The robot is divided into three modules: a chassis 100, a torso 200, and a head 300, and each module can be used independently. For example, the head can be removed and, following the connector-interface definition between head and torso, replaced with a surveillance camera or infrared camera; the removed head, given power and a network cable, can serve as an independent video-conference terminal; the torso can be mounted on other types of chassis; and the chassis can carry other robot bodies.
In this embodiment, the robot includes a detachable chassis, torso, and head, with the chassis and torso pluggably connected through a first docking terminal and the torso and head pluggably connected through a second docking terminal. The chassis includes: a power supply module, an obstacle-avoidance module, a network module, and a network port and USB interface for pluggable connection with the first docking terminal. The torso includes: a switch, a torso display screen, and a USB interface for pluggable connection with the first docking terminal, with the switch pluggably connected through different network ports to the network port of the first docking terminal and the network port of the second docking terminal, respectively. The head includes: a head display screen, and a network port for pluggable connection with the second docking terminal or with an external network.
In some optional implementations of this embodiment, the first docking terminal is embedded in the chassis and pluggably connects with a protrusion on the bottom of the torso, and the second docking terminal is embedded in the torso and pluggably connects with a protrusion on the bottom of the head. Because the docking terminals are recessed, they are protected.
In some optional implementations of this embodiment, the robot further includes controllers: a head controller located in the head, and at least one of a robot controller located in the chassis and a torso controller located in the torso. If only the head controller is included, all other devices connect directly or indirectly to the head controller, which recognizes and executes the user's instructions; data collected by the sensors is also analyzed by the head controller to control the robot's motion. For example, the laser sensor sends point-cloud data to the head controller, and when the head controller determines that an obstacle exists, it steers the robot. If both a head controller and a chassis controller are included, the chassis controller handles motion control while the head controller mainly handles voice capture and recognition. If a torso controller is included, it can control the torso display screen and torso audio, and can control the card reader to collect user information for identification.
In some optional implementations of this embodiment, this type of robot is mainly used for service. The robot's height is close to average human height. A microphone and audio can be mounted on the head, making it convenient to capture a person's voice and output responses. The head display, mounted on the head, can show expression information such as a smiley face, as well as other prompts; for example, if the robot did not understand the user's question, it can display a question mark. Here, "first" and "second" have no substantive meaning: "first" merely indicates mounting on the head, and "second" mounting on the torso. Since the head can be detached and used as an independent conference terminal, the head may also include a microphone and audio.
In some optional implementations of this embodiment, the head can also serve as a video-conference terminal and can therefore include a camera. The camera can additionally capture face images for face recognition, determining the user's identity and granting corresponding permissions; for example, an unregistered user may ask ordinary questions but may not command the robot to move.
In some optional implementations of this embodiment, the camera is not fixed in place on the head; it can rotate toward the sound source according to the source's position.
In some optional implementations of this embodiment, the camera is fixed on the front of the head (the head-display side); if the microphone detects the position of a sound source, the head can rotate toward it. The display and camera then both face the user, so a frontal face image can be captured and the user can view the displayed content head-on.
In some optional implementations of this embodiment, the head and torso are connected by a movable joint. During human-machine dialogue the robot can make head movements matching its answers, such as nodding or shaking its head; the movable joint controls head movement through servos.
In some optional implementations of this embodiment, the head may also include a speech recognition module capable of offline speech recognition. The user's speech can either be sent to a server for recognition or recognized locally offline; the network signal strength can be monitored, and if the signal is weak the robot can automatically switch to local offline recognition.
In some optional implementations of this embodiment, after the head controller obtains a face image through the camera, it can locate the user's position relative to the robot and rotate the head so that the head display faces the user. If the required head rotation exceeds a predetermined value, the whole robot can rotate so that the head display faces the user.
In some optional implementations of this embodiment, the head has more than one display. If the head is a cuboid, up to four displays can be arranged, serving four users at the same time. To avoid interference, voice dialogue can be disabled in this case and service provided through touch operation on the displays.
In some optional implementations of this embodiment, after the user's direction is located by camera or microphone, the head display facing the user can be lit while the head's other displays remain off. The robot thus appears to face the user without turning its head, or only a small rotation is needed.
In some optional implementations of this embodiment, the head has more than one camera. If the head is a cuboid, up to four cameras can be arranged, serving users in different directions.
In some optional implementations of this embodiment, the microphone can be a microphone array, through which the user's position relative to the robot can be located; the head is then rotated so that the head display faces the user. If the required rotation exceeds a predetermined value, the whole robot can rotate so that the head display faces the user.
In some optional implementations of this embodiment, besides connecting to the torso, the head's network port can connect to an external network. When the head is detached and used alone as a conference terminal, plugging a network cable into this port enables network teleconferencing.
In some optional implementations of this embodiment, the head may also include a 24V power supply module, a 12V power supply module, and a 5V power supply module; the 24V module includes an overcurrent-protection module and a 24V voltage-conversion module, the 12V module includes an overcurrent-protection module and a 12V voltage-conversion module, and the 5V module includes an overcurrent-protection module and a 5V voltage-conversion module. These modules can power the head independently: connected directly to AC power and a network cable, the head can be used as a conference terminal.
In some optional implementations of this embodiment, the head may instead include any one of the 24V, 12V, and 5V power supply modules, each comprising an overcurrent-protection module and the corresponding voltage-conversion module; such a module can power the head independently so that, connected to AC power and a network cable, the head serves as a conference terminal.
In some optional implementations of this embodiment, the head may instead include any two of the 24V, 12V, and 5V power supply modules, each comprising an overcurrent-protection module and the corresponding voltage-conversion module, powering the head independently in the same way.
In some optional implementations of this embodiment, a head indicator light may also be mounted on the head; it lights when the head circuit is powered.
In some optional implementations of this embodiment, the head indicator light may be a three-color LED, with different colors indicating different states; for example, red indicates the circuit is powered, green indicates the network is connected, and blue indicates the speech recognition function is active.
In some optional implementations of this embodiment, the head display may be a touch screen showing question-and-answer options for the user to choose from. Optionally, the touch screen can be used for fingerprint recognition, identifying the user and executing instructions matching that identity. For unregistered users, fingerprint size can indicate whether the user is an adult or a child, and different response styles can be used for different user types; for example, questions from children can be answered in a cartoon character's voice.
In some optional implementations of this embodiment, the head may also include various sensors, for example an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, and a PM2.5 sensor. The ultrasonic sensor supplements the lidar's blind area for obstacle avoidance; the cliff sensor detects steps on the ground to prevent falls; the temperature and humidity sensor and PM2.5 sensor monitor the operating environment, and body temperature can also be measured, so the robot can perform an epidemic-prevention temperature check while identifying the user. The PM2.5 sensor detects air quality. Robots for different purposes can carry different sensors; the sensors use a pluggable design and can be combined freely.
In some optional implementations of this embodiment, the head may also include a 3D obstacle-avoidance camera connected to the chassis's robot controller through a USB interface. It can be an infrared camera that measures depth information for obstacle avoidance.
In some optional implementations of this embodiment, the head may also include a card reader, connected to the head controller through a USB interface; the information it reads is transmitted through the torso's switch to the chassis's robot controller. The card reader can be an RFID reader for ID-card information, or a reader for barcode information. The user may also wear a dedicated wristband that the reader reads to determine the user's identity.
In some optional implementations of this embodiment, an emergency stop key can be mounted on the head and connected to the relay in the chassis's power supply module. Correspondingly, the second docking terminal includes an emergency stop key interface: the head's emergency stop key connects through it to the emergency stop key interface of the first docking terminal, and then to the relay of the power supply module.
In some optional implementations of this embodiment, a power-on key can be mounted on the head. Correspondingly, the second docking terminal includes a power-on key interface: the head's power-on key connects through it to the power-on key interface of the first docking terminal, and then to the on/off controller of the power supply module.
In some optional implementations of this embodiment, the head may also include a GPS module for locating the robot's position; especially outdoors, GPS can be used together with the lidar for precise positioning.
In some optional implementations of this embodiment, the head may also include solar panels.
In some optional implementations of this embodiment, the head display can output the robot's identifier as the wake-up word for speech recognition; for example, if the identifier is "007", the user need only say "zero zero seven" to wake the robot for service. Identifiers distinguish different robots, so a specific robot can be selected to serve.
In some optional implementations of this embodiment, the head controller can also record the voiceprint features of the user who wakes the robot and bind that user as the master. The master's instructions are filtered out of voice commands received from multiple people, avoiding interference from other users' commands. For example, user A wakes robot 007, which later receives A's instruction to go forward and B's instruction to go back; robot 007 recognizes master A and obeys only user A. Correspondingly, the user can issue an unbinding instruction to release the robot's resources.
In some optional implementations of this embodiment, the torso may also include a microphone for capturing the user's voice.
In some optional implementations of this embodiment, the torso may also include a torso display screen, larger than the head display. The head display mainly shows expressions, while the torso display can show text information.
In some optional implementations of this embodiment, the torso may also include torso audio. With its own microphone and audio, the torso can be used independently as a conference terminal; the positions of the torso audio and head audio can create surround sound, improving audio quality.
In some optional implementations of this embodiment, the torso may also include an externally facing network port or USB interface, allowing additional electronic devices to be attached.
In some optional implementations of this embodiment, the torso can also serve as a video-conference terminal and may therefore include a camera. The camera can additionally capture face images for face recognition, determining the user's identity and granting corresponding permissions; for example, an unregistered user may ask ordinary questions but may not command the robot to move.
In some optional implementations of this embodiment, cameras can be arranged at different heights on the torso, making it convenient to capture face images of users of different heights.
In some optional implementations of this embodiment, the torso's camera is on the same side as the torso display.
In some optional implementations of this embodiment, the torso has more than one display. If the torso is a cuboid, up to four displays can be arranged, serving four users at the same time; to avoid interference, voice dialogue can be disabled in this case in favor of touch operation on the displays.
In some optional implementations of this embodiment, after the user's direction is located by camera or microphone, the torso display facing the user can be lit while the other torso displays remain off; the robot thus faces the user without turning its torso, or with only a small rotation.
In some optional implementations of this embodiment, the torso has more than one camera. If the torso is a cuboid, up to four cameras can be arranged, serving users in different directions.
In some optional implementations of this embodiment, the torso may also include arms and palms. During human-machine dialogue the robot can make matching torso gestures such as handshakes, applause, or hugs; a fingerprint collector can be placed in the palm to collect the user's fingerprint during a handshake for identification.
In some optional implementations of this embodiment, the torso's height is adjustable. It can be adjusted manually, or the user's height can be estimated from the located height of the sound source and the torso height adjusted automatically, making touch operation easier for shorter users such as children and allowing more accurate capture of voice and face images.
In some optional implementations of this embodiment, the torso may also include a speech recognition module capable of offline speech recognition. The user's speech can either be sent to a server for recognition or recognized locally offline; the network signal strength can be monitored, and if the signal is weak the robot can automatically switch to local offline recognition.
In some optional implementations of this embodiment, after the torso controller obtains a face image through the camera, it can locate the user's position relative to the robot and rotate the torso so that the torso display faces the user. If the required torso rotation exceeds a predetermined value, the whole robot can rotate so that the torso display faces the user.
In some optional implementations of this embodiment, the torso's microphone can be a microphone array, through which the user's position relative to the robot can be located; the torso is then rotated so that the torso display faces the user. If the required rotation exceeds a predetermined value, the whole robot can rotate so that the torso display faces the user.
In some optional implementations of this embodiment, besides its internal connection, the torso's network port can connect to an external network. When the torso is detached and used alone as a conference terminal, plugging a network cable into this port enables network teleconferencing.
In some optional implementations of this embodiment, the torso may also include a 24V power supply module, a 12V power supply module, and a 5V power supply module; the 24V module includes an overcurrent-protection module and a 24V voltage-conversion module, the 12V module includes an overcurrent-protection module and a 12V voltage-conversion module, and the 5V module includes an overcurrent-protection module and a 5V voltage-conversion module. These modules can power the torso independently: connected directly to AC power and a network cable, the torso can be used as a conference terminal.
In some optional implementations of this embodiment, the torso may instead include any one of the 24V, 12V, and 5V power supply modules, each comprising an overcurrent-protection module and the corresponding voltage-conversion module; such a module can power the torso independently so that, connected to AC power and a network cable, the torso serves as a conference terminal.
In some optional implementations of this embodiment, the torso may instead include any two of the 24V, 12V, and 5V power supply modules, each comprising an overcurrent-protection module and the corresponding voltage-conversion module, powering the torso independently in the same way.
In some optional implementations of this embodiment, a torso indicator light may also be mounted on the torso; it lights when the torso circuit is powered.
In some optional implementations of this embodiment, the torso indicator light may be a three-color LED, with different colors indicating different states; for example, red indicates the circuit is powered, green indicates the network is connected, and blue indicates the speech recognition function is active.
In some optional implementations of this embodiment, the torso display may be a touch screen showing question-and-answer options for the user to choose from. Optionally, the touch screen can be used for fingerprint recognition, identifying the user and executing instructions matching that identity. For unregistered users, fingerprint size can indicate whether the user is an adult or a child, and different response styles can be used for different user types; for example, questions from children can be answered in a cartoon character's voice.
In some optional implementations of this embodiment, the torso may also include various sensors, for example an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, and a PM2.5 sensor. The ultrasonic sensor supplements the lidar's blind area for obstacle avoidance; the cliff sensor detects steps on the ground to prevent falls; the temperature and humidity sensor and PM2.5 sensor monitor the operating environment, and body temperature can also be measured, so the robot can perform an epidemic-prevention temperature check while identifying the user. The PM2.5 sensor detects air quality. Robots for different purposes can carry different sensors; the sensors use a pluggable design and can be combined freely.
In some optional implementations of this embodiment, the torso may also include a 3D obstacle-avoidance camera connected to the chassis's robot controller through a USB interface. It can be an infrared camera that measures depth information for obstacle avoidance.
In some optional implementations of this embodiment, the torso may also include a card reader, connected to the torso controller through a USB interface; the information it reads is transmitted through the torso's switch to the chassis's robot controller. The card reader can be an RFID reader for ID-card information, or a reader for barcode information. A dedicated wristband can also be assigned to the user, and the reader reads the wristband to determine the user's identity.
In some optional implementations of this embodiment, an emergency stop key can be mounted on the torso and connected to the relay in the chassis's power supply module.
In some optional implementations of this embodiment, a power-on key can be mounted on the torso.
In some optional implementations of this embodiment, the torso may also include a GPS module for locating the robot's position; especially outdoors, GPS can be used together with the lidar for precise positioning.
In some optional implementations of this embodiment, the torso may also include solar panels.
In some optional implementations of this embodiment, the torso display can output the robot's identifier as the wake-up word for speech recognition; for example, if the identifier is "007", the user need only say "zero zero seven" to wake the robot for service. Identifiers distinguish different robots, so a specific robot can be selected to serve.
In some optional implementations of this embodiment, the torso controller can also record the voiceprint features of the user who wakes the robot and bind that user as the master. The master's instructions are filtered out of voice commands received from multiple people, avoiding interference from other users' commands. For example, user A wakes robot 007, which later receives A's instruction to go forward and B's instruction to go back; robot 007 recognizes master A and obeys only user A. Correspondingly, the user can issue an unbinding instruction to release the robot's resources.
In some optional implementations of this embodiment, the chassis may also include a lidar, used for laser point-cloud recognition of obstacles and for building maps. Correspondingly, the chassis is movable; it can be a wheeled or tracked chassis, and it is replaceable: wheeled and tracked chassis can be swapped by plugging and unplugging.
In some optional implementations of this embodiment, the chassis may also include various sensors, for example an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, and a PM2.5 sensor. The ultrasonic sensor supplements the lidar's blind area for obstacle avoidance; the cliff sensor detects steps on the ground to prevent falls; the temperature and humidity sensor and PM2.5 sensor monitor the operating environment, and body temperature can also be measured, so the robot can perform an epidemic-prevention temperature check while identifying the user. The PM2.5 sensor detects air quality. Robots for different purposes can carry different sensors; the sensors use a pluggable design and can be combined freely.
In some optional implementations of this embodiment, a chassis indicator light may also be mounted on the chassis; it lights when the chassis circuit is powered.
In some optional implementations of this embodiment, the chassis indicator light may be a three-color LED, with different colors indicating different states; for example, red indicates the circuit is powered, green indicates the network is connected, and blue indicates the motor is running.
In some optional implementations of this embodiment, the chassis also includes a motor driver, motion motors, and encoders; the motor driver is connected to the robot controller through a CAN bus. After the robot controller receives an external movement command, it controls the motor driver to drive the motion motors, turning the wheels or tracks and moving the robot.
In some optional implementations of this embodiment, the chassis may also include a GPS module for locating the robot's position; especially outdoors, GPS can be used together with the lidar for precise positioning.
In some optional implementations of this embodiment, the chassis may instead take the form of two legs with movable knee joints, able to go up and down stairs. When the cliff sensor detects a step, it notifies the robot controller, which controls the motor driver to drive the motion motors so that the robot lifts its legs and climbs the step.
In some optional implementations of this embodiment, the chassis may also include solar panels. In some optional implementations of this embodiment, the chassis may also include a camera for photographing the ground; the captured ground image is compared with a standard image, and if the matching degree is below a predetermined value, the ground is stained. The stain's location can be reported to cleaning staff.
In some optional implementations of this embodiment, the chassis may also include a cleaning device, which can clean the floor when the camera detects a stain.
As shown in Figure 1b, the robot's chassis 100 may include a lidar 101, sensors 102 (optional; e.g., ultrasonic, cliff, temperature and humidity, and PM2.5 sensors), a chassis indicator light 103 (optional), and a charging pole 104; the torso 200 may include a torso display screen 201, a torso indicator light 202 (optional), a 3D obstacle-avoidance camera 203 (optional), and a card reader 204 (optional); the head 300 may include a movable joint 301, a microphone 302, a head display 303, and a camera 304, and may also include an external-device interface 305. The chassis 100 and torso 200 are pluggably connected through the first docking terminal 12, and the torso 200 and head 300 are pluggably connected through the second docking terminal 23.
Figures 2a and 2b show schematics of the robot's chassis circuits. The chassis integrates signal control (Figure 2a) and power control (Figures 2b and 2c). The chassis includes a lidar that performs laser point-cloud recognition for the robot's map building, and may also include an IMU (inertial measurement unit) for self-localization; together, the lidar and IMU form the robot's navigation control system. The chassis can be fixed or mobile; if mobile, it also includes motion motors controlling wheel or track rotation. The robot controller can send motion commands to the motor driver over the CAN bus; on receiving a command, the driver rotates the motors to move the robot forward, backward, or into a turn, while the encoders on the motors monitor rotation state for closed-loop motor control.
The chassis also includes a signal control module connected to the power supply module; the signal control module connects to at least one of the following: an ultrasonic sensor, a cliff sensor, an indicator-light drive circuit, a battery communication interface, a temperature and humidity sensor, and a PM2.5 sensor. The ultrasonic sensor supplements the lidar's blind area for obstacle avoidance; the cliff sensor detects steps to prevent falls; the temperature and humidity sensor and PM2.5 sensor monitor the operating environment; the indicator-light drive circuit drives the indicator lights to show the robot's working state; and the battery communication interface collects the battery's charge level, temperature, current, and other charge/discharge information. The signal control module can communicate with the robot controller through an RS232 interface, and the robot controller can connect to the network module through a network port. The network module supports both wired and wireless routing, and through it the robot can be accessed remotely. The robot controller reserves a network port and a USB interface for the torso.
Figure 2b shows the power-control section. The power supply module may include an on/off controller connected respectively to the power-on key, the relay, the 24V, 12V, and 5V power supply modules, the battery, and a charging plug and/or charging pole. The robot can be charged from a charging pile (via the charging pole), from a plug (the charging plug), or both, and can work while charging. The torso can carry a power-on button for switching the robot on and off; power to the robot's motors is controlled through a relay, and an emergency stop switch can be connected in series with the relay's logic coil. When the emergency stop is pressed, motor power is cut off and the robot's drive system is forced into a powered-off state. The robot provides three supply circuits: 12V, 24V, and 5V. The battery input is converted to 5V, 12V, and 24V by the respective voltage-conversion modules to power all of the robot's equipment; each supply circuit has an overcurrent-protection module, and the 5V, 12V, and 24V power buses run through the chassis, torso, and head modules, as shown in Figure 2c.
Figure 3 shows the first docking terminal between the robot's chassis and torso, through which the two are connected; chassis and torso use a pluggable design. The first docking terminal providing the electrical connection includes mechanical guide posts, a 24V electrical interface, a 12V electrical interface, a 5V electrical interface, a grounding interface, an indicator light interface, a USB interface, a network port, a power-on key interface, and an emergency stop key interface. Mechanical guide posts at both ends of the first docking terminal guide the docking of chassis and torso mechanically and protect the terminal.
In some optional implementations of this embodiment, the first docking terminal includes any one of the 24V, 12V, and 5V electrical interfaces, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface. The specific implementation matches the power supply modules used in the head, torso, and chassis.
In some optional implementations of this embodiment, the first docking terminal includes any two of the 24V, 12V, and 5V electrical interfaces, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface. The specific implementation matches the power supply modules used in the head, torso, and chassis.
Figure 4 shows the circuit diagram of the robot's torso. The torso may include a torso controller (which can be an Android motherboard), a torso display screen, a card reader (optional), a torso indicator light (optional), a 3D obstacle-avoidance camera (optional), a switch, and torso audio, and communicates with the chassis through the network port and USB. The torso controller can drive the torso display to show scene-specific content and play sound through the torso audio. The torso controller communicates with the card reader over USB for card-swipe identification to obtain robot permissions. The 3D obstacle-avoidance camera (which can be a depth-measuring camera such as an infrared camera) connects to the chassis controller over USB for image-based obstacle avoidance. The torso carries a switch that splits one network port into ports for the torso and the head interface.
Figure 5 shows the second docking terminal between the robot's torso and head; it may include mechanical guide posts, a 24V electrical interface, a 12V electrical interface, a 5V electrical interface, a grounding interface, and a network port. Mechanical guide posts at both ends of the second docking terminal guide the docking of torso and head mechanically and protect the terminal.
In some optional implementations of this embodiment, the second docking terminal includes any one of the 24V, 12V, and 5V electrical interfaces, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface. The specific implementation matches the power supply modules used in the head, torso, and chassis.
In some optional implementations of this embodiment, the second docking terminal includes any two of the 24V, 12V, and 5V electrical interfaces, a grounding interface, a network port, and at least one of the following: an indicator light interface, a USB interface, a power-on key interface, and an emergency stop key interface. The specific implementation matches the power supply modules used in the head, torso, and chassis.
Figure 6 shows the circuit schematic of the robot's head. The head may include a microphone, a camera, a head controller, a head display (also called an expression screen), and head audio, and communicates with the chassis and torso through the network port. The robot's head can be fixed or movable; if movable, servos are installed. The head controller can be an Android motherboard driving the head display, which can show expressions to interact with the user. The head may also include a speech module for speech recognition and corresponding actions, or it can send speech captured by the microphone over the network to a cloud server for recognition. The camera can capture the user's facial features and be used for face recognition. Person tracking with the head display is achieved by controlling the servos. The head can be removed on its own; given power and a network port, it can be used independently to provide functions such as remote conferencing and an artificial-intelligence assistant. The microphone can take the form of a microphone array, able to locate the direction of the sound source so that the head display faces it.
The invention is a modular service robot divided into three parts, a chassis, a torso, and a head, realizing a pluggable design convenient for transport and quick assembly; the three modules can each be used independently, expanding the robot's application scenarios. According to the robot's interface definitions, the robot can carry different chassis, for example swapping the wheeled chassis for a tracked one to enhance terrain adaptability, or other hardware can be added to the chassis or torso, for example removing the head to install infrared and surveillance cameras for security patrols, or detaching the head for use as a remote-conference terminal. The invention improves the robot's modularity and hardware extensibility and increases its application scenarios.
The above description covers only preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are interchanged with technical features of similar function disclosed in (but not limited to) the present disclosure.

Claims (16)

  1. A robot, comprising a detachable chassis, torso, and head, the chassis and the torso being pluggably connected through a first docking terminal and the torso and the head being pluggably connected through a second docking terminal, wherein
    the chassis comprises: a power supply module, an obstacle-avoidance module, a network module, and a network port and a USB interface for pluggable connection with the first docking terminal;
    the torso comprises: a switch, a torso display screen, and a USB interface for pluggable connection with the first docking terminal, wherein the switch is pluggably connected, through different network ports, to the network port of the first docking terminal and the network port of the second docking terminal, respectively;
    the head comprises: a head display screen, and a network port for pluggable connection with the second docking terminal or with an external network.
  2. The robot according to claim 1, wherein the first docking terminal comprises, pluggably connected to the chassis and the torso respectively: a network port, a USB interface, mechanical guide posts, electrical interfaces, a grounding interface, an indicator light interface, a power-on key interface, and an emergency stop key interface; and the second docking terminal comprises, pluggably connected to the torso and the head respectively: a network port, a USB interface, mechanical guide posts, electrical interfaces, and a grounding interface.
  3. The robot according to claim 2, wherein the power supply module comprises an on/off controller connected respectively to a power-on key and a relay, wherein the relay is pluggably connected, through the emergency stop key interface of the first docking terminal, to an emergency stop key mounted on the torso, and a power-on key mounted on the torso is pluggably connected to the on/off controller through the power-on key interface of the first docking terminal.
  4. The robot according to claim 2, wherein the second docking terminal further comprises a power-on key interface and an emergency stop key interface; the head is fitted with a power-on key and an emergency stop key; the head's emergency stop key is connected through the emergency stop key interface of the second docking terminal to the emergency stop key interface of the first docking terminal and then to the relay of the power supply module; and the head's power-on key is connected through the power-on key interface of the second docking terminal to the power-on key interface of the first docking terminal and then to the on/off controller of the power supply module.
  5. The robot according to claim 1, wherein the first docking terminal is embedded in the chassis and pluggably connects with a protrusion on the bottom of the torso, and the second docking terminal is embedded in the torso and pluggably connects with a protrusion on the bottom of the head.
  6. The robot according to claim 1, further comprising controllers, the controllers including a head controller located in the head and at least one of the following: a robot controller located in the chassis and a torso controller located in the torso.
  7. The robot according to claim 6, wherein the head further comprises at least one of the following devices connected to the head controller: a microphone, head audio, a speech recognition module and servos, a camera, a 3D obstacle-avoidance camera, and a card reader, wherein the 3D obstacle-avoidance camera and the card reader are pluggably connected through USB interfaces to the USB interface of the second docking terminal.
  8. The robot according to claim 6, wherein the torso further comprises at least one of the following devices connected to the torso controller: torso audio, a 3D obstacle-avoidance camera, and a card reader, wherein the 3D obstacle-avoidance camera and the card reader are pluggably connected through USB interfaces to the USB interface of the first docking terminal.
  9. The robot according to claim 2, wherein the electrical interfaces include at least one of: a 24V electrical interface, a 12V electrical interface, and a 5V electrical interface.
  10. The robot according to claim 6, wherein the chassis further comprises a signal control module connected to the controller through a serial port, the signal control module being connected to at least one of the following modules:
    an ultrasonic sensor, a cliff sensor, a temperature and humidity sensor, a PM2.5 sensor, an indicator-light drive circuit, and a battery communication interface.
  11. The robot according to claim 6, wherein the chassis further comprises a motor driver, motion motors, and encoders, the motor driver being connected to the controller through a CAN bus.
  12. The robot according to claim 1, further comprising a chassis indicator light, a torso indicator light, and a head indicator light mounted on the chassis, the torso, and the head respectively.
  13. The robot according to claim 7, wherein the microphone is a microphone array.
  14. The robot according to claim 1, wherein the chassis is a wheeled chassis or a tracked chassis.
  15. The robot according to claim 1, wherein the obstacle-avoidance module includes at least one of the following devices: a lidar, a GPS module, an ultrasonic sensor, a cliff sensor, and a 3D obstacle-avoidance camera.
  16. The robot according to claim 1, wherein the power supply module includes at least one of: a 24V power supply module, a 12V power supply module, and a 5V power supply module, each power supply module comprising an overcurrent-protection module and a voltage-conversion module.
PCT/CN2021/110711 2020-08-18 2021-08-05 机器人 WO2022037413A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010831200.7 2020-08-18
CN202010831200.7A CN111966100A (zh) 2020-08-18 2020-08-18 机器人

Publications (1)

Publication Number Publication Date
WO2022037413A1 true WO2022037413A1 (zh) 2022-02-24

Family

ID=73388247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110711 WO2022037413A1 (zh) 2020-08-18 2021-08-05 机器人

Country Status (2)

Country Link
CN (1) CN111966100A (zh)
WO (1) WO2022037413A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966100A (zh) * 2020-08-18 2020-11-20 北京海益同展信息科技有限公司 机器人
CN113724454B (zh) * 2021-08-25 2022-11-25 上海擎朗智能科技有限公司 移动设备的互动方法、移动设备、装置及存储介质
CN115019461A (zh) * 2022-06-09 2022-09-06 河南省农业融资租赁股份有限公司 一种融资租赁安全管理系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102189540A (zh) * 2010-03-05 2011-09-21 上海未来伙伴机器人有限公司 模块化机器人
CN103252784A (zh) * 2012-10-26 2013-08-21 上海未来伙伴机器人有限公司 家居服务机器人
CN108818573A (zh) * 2018-09-11 2018-11-16 中新智擎科技有限公司 一种多功能服务机器人
CN208179552U (zh) * 2018-01-31 2018-12-04 河源市勇艺达科技有限公司 一种可换充电及行走底座模块化结构机器人
US20190054627A1 (en) * 2017-08-07 2019-02-21 Db1 Global Software S/A Auxiliary robot with artificial intelligence
CN111966100A (zh) * 2020-08-18 2020-11-20 北京海益同展信息科技有限公司 机器人


Also Published As

Publication number Publication date
CN111966100A (zh) 2020-11-20

Similar Documents

Publication Publication Date Title
WO2022037413A1 (zh) Robot
Sakagami et al. The intelligent ASIMO: System overview and integration
CN106346487B (zh) Interactive VR sand-table display robot
US8983662B2 (en) Robots comprising projectors for projecting images on identified projection surfaces
US11697211B2 (en) Mobile robot operation method and mobile robot
CN105015645B (zh) A multifunctional unmanned exploration robot
US20060098089A1 (en) Method and apparatus for a multisensor imaging and scene interpretation system to aid the visually impaired
CN106113058B (zh) A companion care robot
CN109093633A (zh) A separable robot and control method therefor
CN106695849A (zh) Perception, control, and execution system for a welcoming security-service robot
Tsui et al. Iterative design of a semi-autonomous social telepresence robot research platform: a chronology
Müller et al. Openbot: Turning smartphones into robots
KR102190743B1 (ko) Apparatus and method for providing an augmented-reality service interacting with a robot
CN108214497A (zh) A home-tutoring intelligent robot system
Lopez et al. Robotman: A security robot for human-robot interaction
CN111168691B (zh) Robot control method, control system, and robot
WO2018117514A1 (ko) Airport robot and method for operating same
CN208930273U (zh) A separable robot
CN205942440U (zh) Intelligent business-hall robot
CN213122707U (zh) Robot
KR20110125524A (ko) System and method for object learning by a robot using multimodal interaction
Haritaoglu et al. Attentive Toys.
Carnegie et al. A human-like semi autonomous mobile security robot
CN211061899U (zh) An autonomous localization and navigation robot
CN210757755U (zh) Charity fundraising robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21857508

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21857508

Country of ref document: EP

Kind code of ref document: A1