CN208543477U - A kind of guest-meeting robot - Google Patents
A kind of guest-meeting robot
- Publication number: CN208543477U
- Application number: CN201820068481.3U
- Authority: CN (China)
- Prior art keywords: robot, target object, processor, electrically connected, module
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Manipulator (AREA)
Abstract
The utility model embodiment discloses a greeting robot, comprising: a movable base; a robot body disposed on the movable base, wherein the robot body includes a trunk and a head disposed above the trunk; a position detection module, disposed on the trunk of the robot body, for detecting the position of a human body within a preset range at the current moment; a processor, electrically connected with the position detection module, for determining a target object according to the detected human body position; a controller, electrically connected with the processor, for generating drive control information according to the position information of the target object and sending the drive control information to a drive module; the drive module, disposed in the movable base, for driving the robot to reach a target position according to the drive control information; and an interaction module, electrically connected with the processor, for interacting with the target object. The robot thus actively walks up to the user to greet and guide, improving the initiative of the greeting and the quality of service.
Description
Technical Field
Embodiments of the utility model relate to the field of robot technology, and in particular to a greeting robot.
Background
With the rapid development of science and technology, robotics research is receiving more and more attention. Robots have a wide range of applications, for example in maintenance, repair, transportation, cleaning, security, rescue and monitoring.
At present, in service settings such as hotels and conferences, robots can be used for greeting and guidance work, saving labor costs. However, existing greeting robots provide only simple services: for example, the robot calls out or performs an action when a user walks up to it. Such a greeting robot is not flexible, since the user must walk to the robot to receive the greeting service, and the initiative of the greeting is poor.
Summary of the Utility Model
An embodiment of the utility model provides a greeting robot that actively walks up to the user to perform the greeting, improving the initiative of the greeting and the quality of service.
An embodiment of the utility model provides a greeting robot, comprising:
a movable base;
a robot body disposed on the movable base, wherein the robot body includes a trunk and a head disposed above the trunk;
the position detection module is arranged on the trunk part of the robot body and used for detecting the position of the human body at the current moment within a preset range;
the processor is electrically connected with the position detection module and is used for determining a target object according to the detected human body position;
the controller is electrically connected with the processor and used for generating driving control information according to the position information of the target object and sending the driving control information to the driving module;
the driving module is arranged in the movable base and used for driving the robot to reach a target position according to the driving control information;
and the interaction module is electrically connected with the processor and is used for interacting with the target object.
Further, the position detection module comprises at least two pyroelectric infrared sensors, and the pyroelectric infrared sensors are horizontally and circumferentially distributed on the circular arc of the trunk part;
the pyroelectric infrared sensor is used for receiving infrared rays emitted by a moving human body and detecting the position of the human body at the current moment within a preset range.
Further, the processor is specifically configured to:
if only one human body position is detected in the preset range, determining an object corresponding to the human body position as a target object;
and if at least two human body positions are detected in the preset range, determining the distance between the corresponding robot and the object according to the detected human body positions, and determining the object corresponding to the minimum distance as the target object.
Further, the driving module comprises a motor driver, two universal wheels, two directional wheels and a battery; wherein,
the battery is electrically connected with the motor driver and is used for providing electric quantity for the motor driver;
the motor driver is electrically connected with the controller and used for receiving the drive control information and determining drive signals of the universal wheel and the directional wheel according to the drive control information;
the universal wheel is electrically connected with the motor driver, is used for adjusting the advancing direction according to a driving signal of the motor driver and moves along the adjusted advancing direction;
the directional wheel is electrically connected with the motor driver and is used for moving along its fixed direction according to the driving signal of the motor driver.
Further, the welcome robot further comprises:
the ultrasonic sensors are horizontally and circumferentially distributed on the arc of the trunk part and the arc of the movable base platform, are electrically connected with the controller, and are used for detecting an obstacle in the moving process and sending the position information of the obstacle to the controller;
the controller is further configured to receive position information of the obstacle and update the driving control information.
Further, the welcome robot further comprises an image acquisition module;
the processor is electrically connected with the image acquisition module and is further used for determining that the current robot reaches the target position and sending an image acquisition instruction to the image acquisition module when the distance between the robot and the target object is smaller than or equal to a preset distance;
the image acquisition module is arranged at the head of the robot body and used for acquiring a facial image of a target object according to the image acquisition instruction and sending the facial image to the processor;
the processor is further configured to receive a facial image of the target object and determine identity information of the target object from the facial image.
Further, the processor is further configured to:
detecting whether the target object is a registered person or not according to the identity information, if so, sending welcome information to the interaction module, wherein the welcome information carries position guide information of the target object;
if the target object is not a registered person, sending input prompt information to the interaction module;
the processor is further configured to record the occurrence time and the occurrence number of the target object.
Further, the interaction module comprises:
and the information display module is arranged on the trunk part of the robot body, is connected with the processor and is used for displaying the position guide information.
Further, the interaction module further comprises:
and the audio output equipment is electrically connected with the processor and is used for playing welcome information corresponding to the registered target object or playing unregistered target object input prompt information.
Further, the robot body further comprises arms fixed to two sides of the trunk, and the arms on each side have a degree of freedom.
In the greeting robot provided by the embodiment of the utility model, the robot detects the position of a human body within a preset range at the current moment through the position detection module, determines the target object with the processor, and moves to the target position with the drive module so as to interact with the target object. The robot thus actively walks up to the user to greet and guide, improving the initiative of the greeting and the quality of service.
Drawings
Fig. 1 is a schematic structural diagram of a greeting robot provided by the first embodiment of the utility model.
Fig. 2 is a front view of a greeting robot provided by the first embodiment of the utility model.
Fig. 3 is a rear view of a greeting robot according to an embodiment of the utility model.
Fig. 4 is a top view of the movable base in a greeting robot according to an embodiment of the utility model.
Fig. 5 is a flowchart illustrating the work flow of a greeting robot provided by the second embodiment of the utility model.
Detailed Description
The utility model will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the utility model and do not limit it. It should be further noted that, for convenience of description, only some of the structures related to the utility model are shown in the drawings, not all of them.
Example one
Fig. 1 is a schematic structural diagram of the greeting robot provided by the first embodiment of the utility model, fig. 2 is a front view of the greeting robot, fig. 3 is a rear view of the greeting robot, and fig. 4 is a top view of the movable base in the greeting robot. The embodiment of the utility model provides a greeting robot that actively walks up to the user to perform the greeting, improving the initiative of the greeting and the quality of service. The greeting robot in this embodiment can be used in, but is not limited to, the hotel service industry. Referring to figs. 1, 2, 3 and 4, the greeting robot includes:
a movable base 1; a robot body 2 provided on the movable base 1, wherein the robot body 2 includes a trunk 21 and a head 22 provided above the trunk; the position detection module 211, arranged on the trunk 21 of the robot body 2 and used for detecting the position of the human body at the current moment within a preset range; a processor 212 electrically connected to the position detection module 211 for determining a target object according to the detected position of the human body; a controller 213 electrically connected to the processor 212, for generating driving control information according to the position information of the target object and transmitting the driving control information to the driving module 11; the driving module 11, arranged in the movable base 1 and used for driving the robot to reach a target position according to the driving control information; and the interaction module 3, electrically connected with the processor 212 and used for interacting with the target object.
In the welcome process, the user is in a moving state, so the position detection module 211 is required to detect the position of the human body at the current moment. The processor 212 and the controller 213 are both provided on the trunk 21 of the robot body 2. The interaction module in this embodiment may be disposed in the robot body 2, or may be disposed in the movable base 1, where fig. 1 is only one implementation manner, and the position of the interaction module in this embodiment is not limited. Illustratively, the interaction module may further comprise two or more sub-modules, wherein one sub-module is arranged in the robot body 2 and the other sub-module is arranged in the movable base 1.
Optionally, the position detection module 211 includes at least two pyroelectric infrared sensors 2111, and the pyroelectric infrared sensors 2111 are horizontally and circumferentially distributed on the arc of the trunk 21; the pyroelectric infrared sensor 2111 is used for receiving infrared rays emitted by a moving human body and detecting the position of the human body at the current time within a preset range.
The pyroelectric infrared sensors 2111 are distributed along the arc of the trunk 21 at equal angular spacing in the horizontal circumferential direction, and the angle subtended at the corresponding circle center between adjacent pyroelectric infrared sensors 2111 is 10-45 degrees. The number of pyroelectric infrared sensors 2111, and the angle between adjacent sensors, are selected according to the detection range of the robot. The preset range refers to the range that the pyroelectric infrared sensor 2111 can detect. The pyroelectric infrared sensor 2111 can detect the movement of a human body at the current moment within the preset range. For example, the pyroelectric infrared sensor 2111 can be paired with an amplifying circuit that amplifies its signal by more than 70 decibels, so that the motion of a human body within a range of 20 meters can be detected. The specific position of the human body at the current moment is detected from the infrared rays emitted by the moving human body, giving the robot a sensing capability.
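As a rough sketch of this equiangular layout, the helper below computes each sensor's bearing and the total arc the row covers; the function names, the degree units, and the start-at-front convention are illustrative assumptions, not part of the utility model:

```python
def sensor_bearings(n_sensors, spacing_deg, start_deg=0.0):
    """Bearing (degrees, measured at the arc's circle center) of each of
    n_sensors placed at equal angular spacing along the trunk's arc."""
    if not 10.0 <= spacing_deg <= 45.0:
        # the description constrains the adjacent-sensor angle to 10-45 degrees
        raise ValueError("spacing must be within 10-45 degrees")
    return [start_deg + i * spacing_deg for i in range(n_sensors)]

def covered_arc_deg(n_sensors, spacing_deg):
    """Total arc spanned by the sensor row; useful when sizing the array
    against the robot's desired detection range."""
    return (n_sensors - 1) * spacing_deg
```

For instance, seven sensors at 15-degree spacing would span a 90-degree forward arc.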
Optionally, the processor is specifically configured to:
if only one human body position is detected in the preset range, determining an object corresponding to the human body position as a target object; and if at least two human body positions are detected in the preset range, determining the distance between the corresponding robot and the object according to the detected human body positions, and determining the object corresponding to the minimum distance as the target object.
The processor is electrically connected with the position detection module 211, receives the detected human body position at the current moment transmitted by the position detection module 211, selects a target object in at least one detected human body position, and provides a target object selection condition for the welcome robot. If only one human body position is received, directly determining an object corresponding to the human body position as a target object; and if at least two human body positions are received, calculating the distance between each human body position and the current position of the robot, and selecting an object corresponding to the minimum distance as a target object.
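The selection rule can be sketched as follows, assuming hypothetical 2-D coordinates in meters; `select_target` and its signature are illustrative, not taken from the utility model:

```python
import math

def select_target(robot_pos, human_positions):
    """Pick the target object from the detected human positions:
    a single detection is taken directly; with two or more, the
    object nearest the robot wins."""
    if not human_positions:
        return None  # nothing detected in the preset range
    if len(human_positions) == 1:
        return human_positions[0]
    # At least two detections: compute each object's distance to the
    # robot's current position and pick the minimum-distance object.
    return min(
        human_positions,
        key=lambda p: math.hypot(p[0] - robot_pos[0], p[1] - robot_pos[1]),
    )
```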
Optionally, the driving module 11 includes a motor driver 111, two universal wheels 112, two directional wheels 113, and a battery 114; wherein,
a battery 114 electrically connected to the motor driver 111 for supplying power to the motor driver 111; a motor driver 111 electrically connected to the controller 213 for receiving the driving control information and determining the driving signals of the universal wheel 112 and the directional wheel 113 according to the driving control information; a universal wheel 112 electrically connected to the motor driver 111 for adjusting a traveling direction according to a driving signal of the motor driver 111 and moving along the adjusted traveling direction; and a direction wheel 113 electrically connected to the motor driver 111 for moving in a fixed direction thereof according to a driving signal of the motor driver 111.
The universal wheels 112 adjust the traveling direction of the robot by adjusting their wheel angles. The wheel angle of the directional wheels 113 is fixed, so they can move only in the direction corresponding to that fixed angle. When the wheel angle of the universal wheels 112, adjusted according to the driving signal of the motor driver 111, is the same as the wheel angle of the directional wheels 113, the robot moves straight forward; when the two wheel angles differ, the robot turns at a fixed radius. By controlling the wheel angle of the universal wheels 112, the robot can move to the target position corresponding to the target object, achieving the technical effect of active greeting. Optionally, the directional wheels are disposed at the front of the greeting robot and the universal wheels at the rear, as shown in fig. 4.
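The straight-versus-turn behavior can be illustrated with a simple bicycle-model approximation. The patent only states that matching wheel angles give straight motion and differing angles give a fixed-radius turn, so the wheelbase parameter and the tangent formula below are an assumed model, not the actual controller:

```python
import math

def turning_radius(wheelbase_m, universal_deg, directional_deg=0.0):
    """Approximate turn radius when the steerable (universal) wheels are
    angled relative to the fixed directional wheels, treating the pair
    like the front and rear axles of a bicycle model."""
    delta = math.radians(universal_deg - directional_deg)
    if abs(delta) < 1e-9:
        return math.inf  # angles match: the robot moves straight ahead
    # a larger steering offset gives a tighter (smaller-radius) turn
    return wheelbase_m / math.tan(abs(delta))
```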
Optionally, the greeting robot further includes:
at least two ultrasonic sensors 4 which are horizontally and circumferentially distributed on the arc of the trunk part 21 and the arc of the movable base 1, are electrically connected with the controller 213, and are used for detecting an obstacle in the moving process and sending the position information of the obstacle to the controller 213;
the controller is also used for receiving position information of the obstacle and updating the driving control information.
Alternatively, at least two ultrasonic sensors 4 are disposed at equal angles on the arc of the trunk 21 of the robot and on the arc of the movable base 1, respectively. The angle subtended at the corresponding circle center between adjacent ultrasonic sensors 4 is 10-45 degrees. A suitable number of ultrasonic sensors 4, and the angle between adjacent sensors, are selected according to the detection range of the robot. Since the sensing distance of the ultrasonic sensor 4 is shorter than that of the pyroelectric infrared sensor 2111, the ultrasonic sensors 4 detect whether an obstacle is present during movement; if so, the position information of the obstacle is transmitted to the controller 213, so that the controller 213 updates the drive control information according to the position of the obstacle, thereby avoiding the obstacle and giving the robot a sensing capability. Optionally, the ultrasonic sensors 4 disposed on the movable base 1 detect obstacles whose height is smaller than a first preset value, while the ultrasonic sensors 4 disposed on the trunk 21 of the robot detect obstacles whose height is larger than the first preset value, and in particular obstacles whose distance from the ground is larger than a second preset value; the first and second preset values are related to the mounting positions of the ultrasonic sensors. For example, both preset values may be chosen greater than the height of the ultrasonic sensors 4 on the movable base 1 and less than the height of the ultrasonic sensors 4 on the trunk 21 of the robot.
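A sketch of this two-tier split might look like the following; the preset heights and the `reporting_tier` helper are hypothetical values chosen only to illustrate the rule, not figures from the utility model:

```python
def reporting_tier(obstacle_top_m, ground_gap_m=0.0,
                   first_preset_m=0.5, second_preset_m=0.8):
    """Decide which ultrasonic tier would report an obstacle:
    base sensors see low obstacles, trunk sensors see tall ones and,
    in particular, obstacles suspended above the ground."""
    if ground_gap_m > second_preset_m:
        return "trunk"  # hanging obstacle: only the trunk tier sees it
    if obstacle_top_m < first_preset_m:
        return "base"   # low obstacle near the floor
    return "trunk"
```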
Optionally, the greeting robot further includes an image acquisition module 221;
the processor 212 is electrically connected with the image acquisition module 221, and is further configured to determine that the current robot reaches the target position and send an image acquisition instruction to the image acquisition module 221 when the distance between the robot and the target object is less than or equal to a preset distance;
the image acquisition module 221 is arranged on the head 22 of the robot body 2 and used for acquiring a facial image of a target object according to an image acquisition instruction and sending the facial image to the processor 212;
the processor 212 is further configured to receive a facial image of the target object and determine identity information of the target object based on the facial image.
In the moving process of the robot, the position detection module 211 detects the human body position of the target object in real time and sends the human body position of the target object to the processor 212 in real time, the processor 212 calculates the distance between the human body position of the target object and the current position of the robot in real time, and determines that the current robot reaches the target position when the distance between the robot and the target object is less than or equal to a preset distance, and sends an image acquisition instruction to the image acquisition module 221 and a stop instruction to the controller 213. Upon receiving the stop command, the controller 213 stops the movement of the robot. The image acquisition module 221 may be one or more cameras. Upon receiving the image capture instruction, the camera is started to capture the facial image of the target object, and the captured facial image is sent to the processor 212 again. The processor 212 stores face images of all registered persons and welcome information in advance. The processor 212 compares the captured facial image with internally stored facial images to determine the identity information of the target object, i.e., whether the target object is a registered person. The image acquisition module 221 is disposed on the head 22 of the robot body 2, so as to facilitate acquisition of facial images of a human body and avoid difficulty in image recognition due to a shooting angle.
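The arrival check described here can be sketched as a single control decision per position update; the coordinates, the `approach_step` name, and the returned flags are illustrative assumptions:

```python
import math

def approach_step(robot_pos, target_pos, preset_distance_m):
    """One decision of the approach loop: keep driving until the
    robot-to-target distance drops to the preset distance, then signal
    both a stop (to the controller) and a face-image capture (to the
    image acquisition module)."""
    d = math.hypot(target_pos[0] - robot_pos[0],
                   target_pos[1] - robot_pos[1])
    arrived = d <= preset_distance_m
    return {"distance": d, "stop": arrived, "capture": arrived}
```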
Optionally, the processor 212 is further configured to:
detecting whether the target object is a registered person or not according to the identity information, if so, sending welcome information to the interaction module 3, wherein the welcome information carries position guide information of the target object; if the target object is not a registered person, sending input prompt information to the interaction module 3; the processor 212 is also used to record the time of occurrence and the number of occurrences of the target object.
Before detecting whether the target object is a registered person according to the identity information, the processor 212 may further detect whether the target object has already been met according to the recorded occurrence time and occurrence frequency corresponding to the met object. If yes, the target object is selected again, and if not, whether the target object is a registered person is detected according to the identity information. Through the appearance time and the appearance times of the target object, the situation that multiple visitors are welcomed to the same target object can be avoided, the operation efficiency of the robot is improved, and meanwhile, the welcomed visitors and the number of people can be conveniently checked.
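The greet-once bookkeeping could be kept in a small record keyed by identity, as in this sketch; the class and method names are assumptions made for illustration:

```python
class GreetingLog:
    """Record of greeted objects: identity -> (last_time, count).
    already_greeted() implements the skip-and-reselect check, and the
    stored records let staff review who was greeted and how often."""

    def __init__(self):
        self._records = {}

    def already_greeted(self, identity):
        # a nonzero count means this object was greeted before
        return self._records.get(identity, (None, 0))[1] > 0

    def record(self, identity, t):
        # update the appearance time and increment the appearance count
        _, count = self._records.get(identity, (None, 0))
        self._records[identity] = (t, count + 1)

    def stats(self, identity):
        return self._records.get(identity, (None, 0))
```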
Optionally, the interaction module 3 includes:
the information display module 31 is arranged on the trunk 21 of the robot body, connected to the processor 212, and used for displaying the position guide information.
The information display module 31 may be a touch LED (Light Emitting Diode) display screen. After receiving the welcome information sent by the processor 212, the information display module 31 displays the position guide information of the target object, so as to guide the target object, thereby realizing interaction between the robot and the guest. After receiving the entry prompt information sent by the processor 212, the information display module 31 displays an information entry interface, so that the target object enters information.
Optionally, the interaction module 3 further includes:
and the audio output device 32 is electrically connected with the processor and is used for playing welcome information corresponding to the registered target object or playing unregistered target object entry prompt information.
The audio output device 32 may be one or more speakers, may be disposed in the movable base 1, or may be disposed in the robot body 2. After receiving the welcome message sent by the processor 212, the audio output device 32 plays the welcome word in the welcome message and the position guide information of the target object, so as to guide the target object.
Optionally, the interaction module 3 further includes:
and the two LED lamp arrays 222 are arranged on the head 22 of the robot body 2 and used for displaying the expression of the robot.
Wherein the two LED light arrays 222 correspond to two eyes of the robot. The LED lamp arrays can be combined into various expressions such as happy, sad, crying, puzzled and the like, so that the expressive force of the robot is enhanced. Illustratively, the LED light array 222 displays a happy face to indicate a welcome to the guest upon receiving the welcome message sent by the processor 212.
Optionally, the robot body 2 further includes arms 23 fixed to both sides of the trunk 21, and the arms 23 on each side have a degree of freedom, see fig. 2.
Wherein the arm 23 can have, but is not limited to, three degrees of freedom, respectively realizing up-and-down movement, left-and-right movement, and rotation movement. The arm 23 may be electrically connected to the processor 212 for receiving the welcome message sent by the processor 212 and making corresponding guiding actions according to the position guiding information in the welcome message.
In the greeting robot provided by the embodiment of the utility model, the robot detects the position of a human body within a preset range at the current moment through the position detection module, determines the target object with the processor, and moves to the target position with the drive module so as to interact with the target object. The robot thus actively walks up to the user to greet and guide, improving the initiative of the greeting and the quality of service.
Example two
Fig. 5 is a flowchart illustrating the work flow of a greeting robot provided by the second embodiment of the utility model, and the specific operation steps include:
s510: and detecting the position of the human body at the current moment in a preset range.
S520: and determining the target object according to the detected position of the human body.
S530: and generating driving control information according to the position information of the target object.
S540: and driving the robot to reach the target position according to the driving control information, and generating an image acquisition instruction.
S550: and acquiring a facial image of the target object according to the image acquisition instruction, and determining the identity information of the target object according to the acquired facial image.
S560: and determining the number of occurrences of the target object according to the identity information of the target object. If the number of occurrences is not zero, the target object has already been greeted and the process returns to step S510; if the number of occurrences is zero, the target object has not yet been greeted and step S570 is performed.
S570: and detecting whether the target object is a registered person or not according to the identity information of the target object. If yes, go to step S580; if not, the process proceeds to step S590.
S580: and acquiring welcome information corresponding to the target object. And proceeds to step S591.
S590: and displaying an information entry interface, and playing entry prompt information of the target object.
S591: and displaying welcome words in the welcome information and position guide information of the target object.
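The steps above can be condensed into one pass of a control loop. Every hardware-facing step is injected as a callable, and all names here are illustrative rather than taken from the utility model:

```python
def greeting_pass(detect, acquire_identity, occurrence_count, is_registered):
    """One pass through steps S510-S591 of the flowchart."""
    position = detect()                      # S510: detect a human position
    if position is None:
        return "idle"
    identity = acquire_identity(position)    # S520-S550 condensed: select,
                                             # approach, capture, identify
    if occurrence_count(identity) != 0:      # S560: greeted before,
        return "reselect"                    # so return to S510
    if is_registered(identity):              # S570: registered person?
        return "welcome"                     # S580 + S591: welcome + guidance
    return "enter-info"                      # S590: prompt information entry
```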
The above-mentioned method can be carried out by the greeting robot provided by any embodiment of the utility model, so that the robot actively walks up to the user to greet and guide, improving the initiative of the greeting and the quality of service.
It should be noted that the foregoing is only a preferred embodiment of the utility model and the technical principles applied. It will be understood by those skilled in the art that the utility model is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the utility model. Therefore, although the utility model has been described in some detail with reference to the above embodiments, it is not limited to them, and may include other equivalent embodiments without departing from its scope.
Claims (8)
1. A greeting robot, comprising:
a movable base;
a robot body disposed on the movable base, wherein the robot body includes a trunk and a head disposed above the trunk;
the position detection module is arranged on the trunk part of the robot body and used for detecting the position of the human body at the current moment within a preset range;
the processor is electrically connected with the position detection module and is used for determining a target object according to the detected human body position;
the controller is electrically connected with the processor and used for generating driving control information according to the position information of the target object and sending the driving control information to the driving module;
the driving module is arranged in the movable base and used for driving the robot to reach a target position according to the driving control information;
and the interaction module is electrically connected with the processor and is used for interacting with the target object.
2. The greeting robot of claim 1, wherein the position detection module comprises at least two pyroelectric infrared sensors, the pyroelectric infrared sensors being horizontally and circumferentially distributed on an arc of the trunk;
the pyroelectric infrared sensor is used for receiving infrared rays emitted by a moving human body and detecting the position of the human body at the current moment within a preset range.
3. The greeting robot of claim 1, wherein the drive module includes a motor drive, two universal wheels, two directional wheels, and a battery; wherein,
the battery is electrically connected with the motor driver and is used for providing electric quantity for the motor driver;
the motor driver is electrically connected with the controller and used for receiving the drive control information and determining drive signals of the universal wheel and the directional wheel according to the drive control information;
the universal wheel is electrically connected with the motor driver, is used for adjusting the advancing direction according to a driving signal of the motor driver and moves along the adjusted advancing direction;
the directional wheel is electrically connected with the motor driver and is used for moving along its fixed direction according to the driving signal of the motor driver.
4. The greeting robot of claim 1, further comprising ultrasonic sensors;
the ultrasonic sensors are distributed horizontally and circumferentially along the arc of the trunk and the arc of the movable base platform, are electrically connected with the controller, and are configured to detect obstacles during movement and to send position information of the obstacles to the controller;
the controller is further configured to receive position information of the obstacle and update the driving control information.
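Not part of the claims: one way the controller could "update the driving control information" from ultrasonic ranges is the stop/slow/steer policy sketched below. The thresholds, the 45-degree front sector, and the steering gain are all assumptions for illustration.

```python
def update_drive(v, omega, ranges, stop_dist=0.3, slow_dist=0.8):
    """Update a (v, omega) drive command from ultrasonic readings.

    ranges: dict mapping sensor bearing (deg, 0 = straight ahead, signed)
    to measured distance (m). Stop if an obstacle is inside stop_dist in
    the front sector; slow down and steer away if inside slow_dist."""
    front = [(b, d) for b, d in ranges.items() if abs(b) <= 45]
    if not front:
        return v, omega  # nothing ahead; keep the command
    bearing, dist = min(front, key=lambda bd: bd[1])  # closest frontal obstacle
    if dist <= stop_dist:
        return 0.0, 0.0  # emergency stop
    if dist <= slow_dist:
        scale = (dist - stop_dist) / (slow_dist - stop_dist)
        steer = -0.8 if bearing >= 0 else 0.8  # turn away from the obstacle
        return v * scale, omega + steer
    return v, omega
```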
5. The greeting robot of claim 1, further comprising an image acquisition module;
the processor is electrically connected with the image acquisition module and is further configured to determine that the robot has reached the target position when the distance between the robot and the target object is smaller than or equal to a preset distance, and to send an image acquisition instruction to the image acquisition module;
the image acquisition module is arranged on the head of the robot body and is configured to acquire a facial image of the target object according to the image acquisition instruction and to send the facial image to the processor;
the processor is further configured to receive the facial image of the target object and to determine identity information of the target object from the facial image.
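Not part of the claims: the distance-gated capture-and-identify step of claim 5 reduces to a small trigger function. The preset distance of 1.0 m, the function name, and the capture/identify callables are hypothetical stand-ins for the camera and the face-recognition routine, which the patent does not detail.

```python
PRESET_DISTANCE = 1.0  # metres; hypothetical value for claim 5's threshold

def maybe_identify(distance_to_target, capture, identify):
    """Once the robot is within the preset distance of the target object,
    issue the image acquisition instruction (capture) and return the
    identity determined from the facial image (identify). Returns None
    while the robot is still approaching."""
    if distance_to_target > PRESET_DISTANCE:
        return None  # target position not yet reached
    facial_image = capture()       # image acquisition module
    return identify(facial_image)  # processor: facial image -> identity info
```

For example, `maybe_identify(0.8, camera_fn, recognizer_fn)` fires the capture, while a 2.0 m distance does not.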
6. The greeting robot of claim 5, wherein the interaction module comprises:
an information display module arranged on the trunk of the robot body, connected with the processor, and configured to display position guidance information for the target object.
7. The greeting robot of claim 6, wherein the interaction module further comprises:
an audio output device electrically connected with the processor and configured to play welcome information corresponding to a registered target object, or to play registration prompt information for an unregistered target object.
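Not part of the claims: the registered/unregistered branch of claim 7 amounts to selecting one of two audio prompts from the identity result of claim 5. The prompt strings below are invented examples, not text from the patent.

```python
def greeting_for(identity):
    """Pick the audio prompt: a personalised welcome when face recognition
    returned an identity (registered visitor), otherwise a registration
    prompt. `identity` is the result of the claim-5 identification step."""
    if identity is not None:
        return f"Welcome, {identity}!"
    return "Welcome! Please register at the front desk."
```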
8. The greeting robot of any one of claims 1 to 7, wherein the robot body further comprises arms fixed to both sides of the trunk, each arm having one degree of freedom.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201820068481.3U CN208543477U (en) | 2018-01-16 | 2018-01-16 | A kind of guest-meeting robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN208543477U true CN208543477U (en) | 2019-02-26 |
Family
ID=65417140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201820068481.3U Active CN208543477U (en) | 2018-01-16 | 2018-01-16 | A kind of guest-meeting robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN208543477U (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109979633A (en) * | 2019-04-08 | 2019-07-05 | 韦尔德海润(北京)智能科技有限公司 | A kind of nuclear power plant's secondary control system |
CN109927053A (en) * | 2019-05-06 | 2019-06-25 | 广东工业大学 | A kind of smart etiquette robot |
CN110228073A (en) * | 2019-06-26 | 2019-09-13 | 郑州中业科技股份有限公司 | Active response formula intelligent robot |
WO2021217350A1 (en) * | 2020-04-27 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Unmanned aerial vehicle control method, unmanned aerial vehicle, and storage medium |
CN113853554A (en) * | 2020-04-27 | 2021-12-28 | 深圳市大疆创新科技有限公司 | Control method of unmanned aerial vehicle, unmanned aerial vehicle and storage medium |
CN113359753A (en) * | 2021-06-24 | 2021-09-07 | 深圳市普渡科技有限公司 | Robot, robot welcome motion method and readable storage medium |
CN113359753B (en) * | 2021-06-24 | 2023-09-08 | 深圳市普渡科技有限公司 | Robot, robot welcome movement method and readable storage medium |
CN113879921A (en) * | 2021-10-13 | 2022-01-04 | 昆山塔米机器人有限公司 | Control method and device for robot to enter elevator |
CN113879921B (en) * | 2021-10-13 | 2023-10-10 | 苏州塔米机器人有限公司 | Control method and device for robot to enter elevator |
CN114153310A (en) * | 2021-11-18 | 2022-03-08 | 天津塔米智能科技有限公司 | Robot guest greeting method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN208543477U (en) | 2019-02-26 | A kind of guest-meeting robot |
US10507400B2 (en) | Robot | |
US11787061B2 (en) | Method for operating moving robot | |
US9662788B2 (en) | Communication draw-in system, communication draw-in method, and communication draw-in program | |
US9922236B2 (en) | Wearable eyeglasses for providing social and environmental awareness | |
JP5366048B2 (en) | Information provision system | |
EP2740013B1 (en) | Finding a called party | |
US20130338525A1 (en) | Mobile Human Interface Robot | |
CN105058389A (en) | Robot system, robot control method, and robot | |
JP6814220B2 (en) | Mobility and mobility systems | |
US11330951B2 (en) | Robot cleaner and method of operating the same | |
US20200156256A1 (en) | Mobile robot operation method and mobile robot | |
JP2010231359A (en) | Remote control device | |
JP2004230480A (en) | Robot device and robot control method, recording medium, and program | |
US20230195401A1 (en) | Information processing apparatus and information processing method | |
JP2017170568A (en) | Service providing robot system | |
JP2007229817A (en) | Autonomous mobile robot | |
JP2006231447A (en) | Confirmation method for indicating position or specific object and method and device for coordinate acquisition | |
JP2007156689A (en) | Light source position detection device and face recognition device using the same and self-propelled robot | |
JP5552710B2 (en) | Robot movement control system, robot movement control program, and robot movement control method | |
JP2006263873A (en) | Communication robot system and communication robot | |
JP2022078741A (en) | robot | |
JP2023133343A (en) | Robot, method for controlling direction, and program | |
KR20210022394A (en) | A moving robot for the blind and control method thereof | |
JP7258438B2 (en) | ROBOT, ROBOT CONTROL PROGRAM AND ROBOT CONTROL METHOD |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||
GR01 | Patent grant | ||
| CP01 | Change in the name or title of a patent holder | Address after: 215324, 12th floor, complex building, No. 1699, Weicheng South Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province; Patentee after: Suzhou Tami Robot Co., Ltd. Address before: 215324, 12th floor, complex building, No. 1699, Weicheng South Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province; Patentee before: KUNSHAN TAMI ROBOT Co., Ltd. |