KR20150097049A - self-serving robot system using of natural UI - Google Patents
self-serving robot system using of natural UI
- Publication number
- KR20150097049A (application KR1020140018123A)
- Authority
- KR
- South Korea
- Prior art keywords
- robot
- natural
- sensor
- obstacle
- camera
- Prior art date
Links
Images
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/081—Touching devices, e.g. pressure-sensitive
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Abstract
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a robot system and, more particularly, to an autonomous serving robot system using a natural UI that enables accurate, quick, and convenient ordering of goods.
Generally, one of two approaches is used to execute algorithms that require advanced computing capability, such as face detection, face recognition, and binocular (stereo) matching, on image information obtained from a robot.
The first approach is for the robot to execute the image processing itself, using an on-board computer with high processing capability; the second is to transmit the acquired image information to a network server and have the server execute the processing.
The first approach has the drawback that both the size of the robot and its power consumption increase, so it cannot be applied to a robot that runs on battery power.
The second approach offloads the complicated computation to the network server, so the terminal robot's workload is reduced. However, if the terminal robot simply compresses the image information and transmits it to the server, the transmission (uploading) of image information between the robot and the server causes excessive communication traffic, and the system's reaction rate slows down.
To date, network-based intelligent service robots have transmitted image information to the server using conventional image compression algorithms such as MPEG and H.264. With such methods, however, a high compression rate is difficult to achieve, because the robot compresses not only the objects the server actually needs to process but also unnecessary image regions such as the background.
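The saving from cropping out the background before transmission can be illustrated with a minimal sketch. This is not the patent's implementation — the frame, the box coordinates, and the pixel-count cost model are all hypothetical — it only shows why uploading a region of interest is far cheaper than uploading the whole frame.

```python
def crop_roi(frame, box):
    """Extract only the region of interest (e.g. a detected face)
    from a frame represented as a list of pixel rows."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def payload_size(frame):
    """Rough proxy for transmission cost: number of pixels."""
    return sum(len(row) for row in frame)

# A hypothetical 480x640 frame with a 60x60 face region at (100, 80).
frame = [[0] * 640 for _ in range(480)]
face = crop_roi(frame, (100, 80, 60, 60))

full_cost = payload_size(frame)  # whole frame: 480 * 640 pixels
roi_cost = payload_size(face)    # ROI only: 60 * 60 pixels
```

Even before any codec is applied, the ROI payload is roughly 85 times smaller here, which is the traffic reduction the passage above is after.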
In addition, in a Ubiquitous Robot Companion (URC) system, in which a plurality of intelligent robots are connected to and managed by a single server, the amount of information transmitted to the server should be minimized to reduce the load on the network.
In conventional intelligent service robots, most vision processing recognizes the external environment and the user's face and height from image information collected by a single (mono) camera while following the user. To avoid obstacles that appear while following a person, the robot travels safely by combining sensor information such as ultrasonic and infrared readings. This approach demands excessive processor computation and power, making it difficult to apply to a battery-powered robot.
Moreover, even when the intelligent service robot's complicated computation is offloaded to the server side through a network-based terminal robot, the conventional user-following technique still causes excessive communication traffic between the terminal robot and the server.
Conventional stereo-vision techniques, which acquire image information through a pair of cameras mounted on an intelligent service robot, have mostly focused on stereo matching of the images acquired from each camera and its post-processing; techniques for following a user by recognizing the user's shape from that output have been disclosed only as discrete element technologies in individual patents, or are mostly not concrete. There is therefore a need for an intelligent service robot technology that can follow a user stably while avoiding obstacles, with only a small load on the built-in processor.
Until now, intelligent service robots for home users have used methods such as motion detection, face recognition, pattern matching, and color-difference information. These technologies are poorly suited to intelligent service robots, however, because they require a lot of memory and excessive processor computation and are sensitive to illumination.
Meanwhile, existing autonomous navigation technology for intelligent service robots has relied on localization, map building, and navigation in the space where the robot operates. This approach, however, requires artificial external sensors for self-position recognition, depends on natural landmarks usable only in specific situations, cannot avoid suddenly appearing obstacles, and demands a high-performance CPU and large memory, making it difficult to apply to interactive real-time mobile systems.
An object of the present invention, devised to solve the above problems, is to provide an autonomous serving robot system using a natural UI that can be remotely monitored and controlled in real time, can take orders through the natural UI, and can travel autonomously while distinguishing signboards from obstacles.
According to an aspect of the present invention, there is provided an autonomous serving robot system including: an autonomous serving robot having a three-dimensional motion recognition camera, an obstacle recognition sensor, a control processor for controlling the camera and the sensor, and a wireless communication device; and a mobile device capable of real-time monitoring and control through wireless communication with the control processor.
Preferably, the three-dimensional motion recognition camera is a Kinect-based camera, and the obstacle recognition sensor includes an object recognition sensor provided in the Kinect and a front distance sensor.
Preferably, the mobile device is any one of a smartphone, a PDA, and a tablet PC. Autonomous travel control of the robot through the mobile device may be realized with real-time images using the template matching built into OpenCV, and the image streaming is preferably implemented as MJPEG streaming by modifying the header portion of the TCP/IP socket communication.
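The "header modification" mentioned above is, in the usual MJPEG-over-HTTP scheme, the wrapping of each JPEG frame in a `multipart/x-mixed-replace` part instead of a normal single-body HTTP response. The sketch below builds those byte framings; the boundary name and the fake JPEG payload are illustrative assumptions, not taken from the patent.

```python
BOUNDARY = b"--frame"

def mjpeg_response_header():
    """HTTP response header announcing a multipart MJPEG stream."""
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n")

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG-encoded frame as a multipart part of the stream."""
    return (BOUNDARY + b"\r\n"
            + b"Content-Type: image/jpeg\r\n"
            + b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

# A stand-in payload with JPEG start/end markers (not a real image).
part = mjpeg_part(b"\xff\xd8 fake jpeg \xff\xd9")
```

A streaming server would send `mjpeg_response_header()` once, then emit one `mjpeg_part(...)` per captured frame over the open TCP socket; the browser on the mobile device replaces each displayed frame as the next part arrives.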
As described above, the present invention provides an autonomous serving robot system using a natural UI that, unlike existing systems, can be monitored and controlled in real time from a smartphone and takes orders through the natural UI. Instead of following a fixed track like a conventional serving robot, it uses the Kinect sensor and a front distance sensor as recognition sensors to distinguish signboards from obstacles, so that customers at a restaurant or product store can conveniently receive the food or goods they order.
FIG. 1 is a block diagram of an autonomous serving robot system using a natural UI according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of serving by an autonomous serving robot system using a natural UI according to an embodiment of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The advantages and features of the present invention, and how to achieve them, will become apparent from the embodiments described in detail below with reference to the accompanying drawings. The present invention is not, however, limited to the embodiments described herein and may be embodied in other forms; the embodiments are provided so that those skilled in the art can easily carry out the technical idea of the present invention.
In the drawings, embodiments of the present invention are not limited to the specific forms shown, which are exaggerated for clarity. Like reference numerals denote like components throughout the specification.
The expression "and/or" is used herein to mean including at least one of the elements listed before and after it. Singular forms include plural forms unless the context clearly dictates otherwise. The terms "comprises" and "comprising", as used in the specification, specify the presence of stated components, steps, operations, and/or elements but do not preclude the presence or addition of one or more other components, steps, operations, and/or elements.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a block diagram of an autonomous serving robot system using a natural UI according to an embodiment of the present invention. As shown in FIG. 1, the autonomous serving robot system according to an embodiment of the present invention includes a three-dimensional motion recognition camera, an obstacle recognition sensor, a control processor, and a wireless communication device mounted on the autonomous serving robot, together with a mobile device in wireless communication with the control processor.
As described above, according to the present invention, the robot can be monitored and controlled in real time using a smartphone, unlike existing serving robots.
It is preferable that the three-dimensional motion recognition camera is a Kinect-based camera.
The Kinect recognizes the motion of the player with its camera and depth sensor, without a separate hand-held controller.
The Xbox One game console ships with an updated version of Kinect. The new Kinect uses a 1080p camera and a time-of-flight (ToF) depth sensor, and processes data at a rate of 2 gigabytes per second to read its environment.
Also, unlike the optional Kinect of the Xbox 360, the Xbox One originally did not operate unless the Kinect sensor was connected, although users could turn off all Kinect functions while the sensor remained connected. In the embodiment of the present invention, the Kinect can also function as a sensor for detecting obstacles in front of the robot.
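How a depth sensor doubles as a front obstacle detector can be sketched as follows. The depth frame is modeled as a grid of millimetre distances; the stop distance, the "central band" heuristic, and the noise-floor threshold are all illustrative assumptions rather than values from the patent.

```python
STOP_DISTANCE_MM = 600   # assumed safety margin in front of the robot
MIN_HIT_PIXELS = 50      # assumed noise floor: ignore a few stray readings

def obstacle_ahead(depth_frame, stop_mm=STOP_DISTANCE_MM, min_hits=MIN_HIT_PIXELS):
    """Flag an obstacle when enough pixels in the central band of the
    depth image report a distance below the stop threshold."""
    rows, cols = len(depth_frame), len(depth_frame[0])
    # Only inspect the central third of the image -- the robot's path.
    band = [row[cols // 3: 2 * cols // 3]
            for row in depth_frame[rows // 3: 2 * rows // 3]]
    hits = sum(1 for row in band for d in row if 0 < d < stop_mm)
    return hits >= min_hits

# A clear 90x90 scene (everything 2 m away) vs. one with a 30x30
# object at 40 cm placed in the robot's path.
clear = [[2000] * 90 for _ in range(90)]
blocked = [row[:] for row in clear]
for r in range(30, 60):
    for c in range(30, 60):
        blocked[r][c] = 400
```

With a real Kinect the `depth_frame` would come from the sensor SDK, but the thresholding logic would be the same.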
In an embodiment of the present invention, an autonomous serving robot system is provided that realizes a natural UI using the Kinect device and carries out the ordering process conveniently and naturally.
A natural UI is, literally, a 'natural user-manipulation environment': rather than pressing an arrow key on a keyboard to make a game character throw a punch, the user simply throws the punch, reaching a more natural manipulation experience.
Natural UI refers to technology that uses real human behavior itself, such as voice and gestures, to manipulate a PC that has mostly been operated with a keyboard and mouse. Because the natural UI will fundamentally change the PC environment in the future and is fertile ground for innovation, it is applied in the embodiment of the present invention.
As a device capable of realizing such a natural UI, and the typical product equipped with motion recognition technology that users can experience today, the three-dimensional motion recognition camera in the robot system according to the embodiment of the present invention is a Kinect.
The principle of the Kinect begins with reading the space. The three cameras mounted on the Kinect are designed to emit infrared light, read the reflected infrared light, and distinguish the colors of the real space. The upgraded Kinect has been refined to the point that it can distinguish a person's fingers and detect the heartbeat of the people playing games.
As described above, the Kinect, which implements the natural UI, not only allows the user to operate various electronic devices, such as a living-room TV, by voice or gesture at any time, but also realizes a natural UI for services delivered through such devices.
FIG. 2 is a schematic diagram of serving by an autonomous serving robot system using a natural UI according to an embodiment of the present invention.
As shown in FIG. 2, the autonomous serving robot system using the natural UI according to the embodiment of the present invention performs autonomous serving largely in two steps. First, to place an order, the customer enters the order through the natural UI, which the three-dimensional motion recognition camera recognizes.
Second, as shown in FIG. 2, the robot recognizes the signboard and travels autonomously to receive the ordered goods. If an obstacle appears during autonomous travel, it is detected by the Kinect sensor or the front distance sensor, and the robot stops immediately and searches for the route again. Throughout this process, the robot can be monitored and controlled in real time through the mobile device.
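The stop-and-replan behavior described above can be summarized as a tiny decision function. This is a hypothetical sketch of the control policy, not the patent's controller; the action names and the sensor-flag interface are assumptions.

```python
def next_action(obstacle_detected, at_destination):
    """Decide the robot's next action from its two sensor flags."""
    if at_destination:
        return "DELIVER"
    if obstacle_detected:        # Kinect sensor or front distance sensor fired
        return "STOP_AND_REPLAN"
    return "DRIVE"

# One possible traversal: drive, meet an obstacle, re-plan, then arrive.
log = [next_action(obs, dest)
       for obs, dest in [(False, False), (True, False),
                         (False, False), (False, True)]]
```

In a real system each flag would be refreshed every control cycle from the obstacle recognition sensor and the signboard recognizer, with `STOP_AND_REPLAN` halting the drive motors before a new route search begins.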
For such autonomous travel, it is desirable to use the template matching built into OpenCV, and to implement streaming as MJPEG streaming by modifying the header portion of the TCP/IP socket communication. Here, OpenCV (Open Source Computer Vision) is an open-source computer vision library, usable on various platforms such as Windows and Linux, that focuses on real-time image processing and provides, among other functions, the template matching used here.
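The idea behind template matching — slide a small template over the image and keep the position where it fits best — can be shown in pure Python using sum of absolute differences (SAD). OpenCV's `cv2.matchTemplate` implements the same sliding-window search (with several scoring methods) far faster; the frame and "signboard" pattern below are invented for illustration.

```python
def match_template(image, template):
    """Return (row, col) where `template` best matches inside `image`,
    scored by sum of absolute differences (lower is better)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + dr][c + dc] - template[dr][dc])
                      for dr in range(th) for dc in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# Hide a 2x2 "signboard" pattern at (3, 4) in an otherwise flat frame.
sign = [[9, 7], [7, 9]]
scene = [[0] * 10 for _ in range(8)]
for dr in range(2):
    for dc in range(2):
        scene[3 + dr][4 + dc] = sign[dr][dc]
```

The robot would run such a search on each camera frame against stored signboard templates, steering toward the best-matching location.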
That is, as shown in FIG. 2, the autonomous serving robot recognizes the signboard, travels autonomously while avoiding obstacles, and delivers the ordered goods to the customer.
While the invention has been shown and described with reference to specific embodiments, those skilled in the art will readily appreciate that various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims.
100: autonomous serving robot, 113: control processor,
115: wireless communication device, 150: obstacle recognition device, 200: mobile device
Claims (6)
1. An autonomous serving robot system comprising: a three-dimensional motion recognition camera; an obstacle recognition sensor; a control processor for controlling the three-dimensional motion recognition camera and the obstacle recognition sensor; a wireless communication device; and a mobile device capable of real-time monitoring and control by wireless communication with the control processor.
2. The autonomous serving robot system of claim 1, wherein the three-dimensional motion recognition camera is a Kinect-based camera.
3. The autonomous serving robot system of claim 1, wherein the obstacle recognition sensor includes an object recognition sensor provided on the Kinect and a front distance sensor.
4. The autonomous serving robot system of claim 1, wherein the mobile device is any one of a smartphone, a PDA, and a tablet PC.
5. The autonomous serving robot system of claim 1, wherein autonomous travel control of the robot via the mobile device realizes real-time images using template matching built into OpenCV.
6. The autonomous serving robot system of claim 5, wherein the streaming of images is implemented as MJPEG streaming by modifying the header portion of the TCP/IP socket communication.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140018123A KR20150097049A (en) | 2014-02-17 | 2014-02-17 | self-serving robot system using of natural UI |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140018123A KR20150097049A (en) | 2014-02-17 | 2014-02-17 | self-serving robot system using of natural UI |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20150097049A true KR20150097049A (en) | 2015-08-26 |
Family
ID=54059110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140018123A KR20150097049A (en) | 2014-02-17 | 2014-02-17 | self-serving robot system using of natural UI |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20150097049A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105945947A (en) * | 2016-05-20 | 2016-09-21 | 西华大学 | Robot writing system based on gesture control and control method of robot writing system |
CN106737685A (en) * | 2017-01-16 | 2017-05-31 | 上海大界机器人科技有限公司 | Manipulator motion system based on computer vision with man-machine real-time, interactive |
CN108838998A (en) * | 2018-07-25 | 2018-11-20 | 安徽信息工程学院 | Novel robot data collection layer structure |
CN108965812A (en) * | 2018-07-25 | 2018-12-07 | 安徽信息工程学院 | Robot panoramic view data acquisition layer structure |
CN109079855A (en) * | 2018-07-25 | 2018-12-25 | 安徽信息工程学院 | Robot data collection layer |
CN109176605A (en) * | 2018-07-25 | 2019-01-11 | 安徽信息工程学院 | Robot data collection layer structure |
CN111949032A (en) * | 2020-08-18 | 2020-11-17 | 中国科学技术大学 | 3D obstacle avoidance navigation system and method based on reinforcement learning |
KR102217727B1 (en) * | 2020-01-29 | 2021-02-18 | 윤수정 | Voice service device and method |
- 2014-02-17: KR application KR1020140018123A filed; published as KR20150097049A (status: not active, application discontinued)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR20150097049A (en) | self-serving robot system using of natural UI | |
US11126257B2 (en) | System and method for detecting human gaze and gesture in unconstrained environments | |
JP6968154B2 (en) | Control systems and control processing methods and equipment | |
Sanna et al. | A Kinect-based natural interface for quadrotor control | |
KR102567525B1 (en) | Mobile Robot System, Mobile Robot And Method Of Controlling Mobile Robot System | |
JP2019515407A (en) | System and method for initializing a robot-learned route to travel autonomously | |
EP3037917B1 (en) | Monitoring | |
US20110118877A1 (en) | Robot system and method and computer-readable medium controlling the same | |
US20180005445A1 (en) | Augmenting a Moveable Entity with a Hologram | |
CN106933227B (en) | Method for guiding intelligent robot and electronic equipment | |
JP2014059737A (en) | Self-propelled device | |
US9477302B2 (en) | System and method for programing devices within world space volumes | |
EP2917902B1 (en) | Remote control using depth camera | |
US20140173524A1 (en) | Target and press natural user input | |
CN105681747A (en) | Telepresence interaction wheelchair | |
JP6950192B2 (en) | Information processing equipment, information processing systems and programs | |
US10444852B2 (en) | Method and apparatus for monitoring in a monitoring space | |
KR20190104488A (en) | Artificial intelligence robot for managing movement of object using artificial intelligence and operating method thereof | |
KR20140009900A (en) | Apparatus and method for controlling robot | |
JP2021077311A (en) | Human-computer interaction system and human-computer interaction method | |
WO2018006481A1 (en) | Motion-sensing operation method and device for mobile terminal | |
KR101100240B1 (en) | System for object learning through multi-modal interaction and method thereof | |
US20150153715A1 (en) | Rapidly programmable locations in space | |
Oh et al. | Hybrid control architecture of the robotic surveillance system using smartphones | |
Vincze et al. | Perception and computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |