KR20150097049A - self-serving robot system using of natural UI - Google Patents

self-serving robot system using of natural UI

Info

Publication number
KR20150097049A
Authority
KR
South Korea
Prior art keywords
robot
natural
sensor
obstacle
camera
Prior art date
Application number
KR1020140018123A
Other languages
Korean (ko)
Inventor
김승호
이현우
이창영
성대경
Original Assignee
경북대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 경북대학교 산학협력단 filed Critical 경북대학교 산학협력단
Priority to KR1020140018123A priority Critical patent/KR20150097049A/en
Publication of KR20150097049A publication Critical patent/KR20150097049A/en

Links

Images

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/008: Manipulators for service tasks
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/081: Touching devices, e.g. pressure-sensitive
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/021: Optical sensing devices
    • B25J19/023: Optical sensing devices including video camera means
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

The present invention relates to a self-serving robot system using a natural UI, comprising: a self-serving robot that includes a three-dimensional motion recognition camera, an obstacle recognition sensor, a control processor that controls the camera and the sensor and controls the movement of the robot, and a wireless communication apparatus; and a mobile apparatus capable of real-time monitoring and control through wireless communication with the control processor. The present invention provides a self-serving robot system using a natural UI that can be monitored and controlled remotely in real time, takes orders through the natural UI, and moves while distinguishing between signs and obstacles, so that an order can be served quickly and accurately.

Description

Self-serving robot system using natural UI

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a robot system and, more particularly, to an autonomous serving robot system using a natural UI that can take orders for goods accurately, quickly, and conveniently.

Generally, a system that executes algorithms requiring advanced computing capability, such as face detection, face recognition, and binocular (stereo) matching on image information obtained from a robot, is constructed in one of the following two ways.

The first method is for the robot itself to execute the image processing, using an on-board computer with high processing capability; the second is to transmit the acquired image information to a network server and have the server execute the image processing.

The first method has the drawback that it increases the size of the robot and consumes considerable power, so it cannot be applied to a robot that receives its operating power from a battery.

With the second method, the robot's computational load is reduced because the network server takes charge of the complicated operations. Even in this case, however, when the network-based terminal robot simply compresses the image information and transmits it to the server, the transmission (upload) of image information between the terminal robot and the server causes excessive communication traffic, and the robot's reaction rate slows down.

So far, network-based intelligent service robots have used conventional image compression algorithms such as MPEG and H.264 to transmit image information to the server. With such methods, however, a higher compression rate is hard to achieve, because compression is also performed on unnecessary parts of the image, such as the background, in addition to the objects the server actually needs to process.
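The traffic problem described above can be illustrated with a small sketch (not from the patent): if only a detected region of interest is transmitted instead of the whole frame, far less data needs to be uploaded. The frame, bounding box, and single-value "pixels" below are made-up example data.

```python
# Sketch: crop a region of interest (ROI) before compression/upload so the
# background is never transmitted. Frame and box are hypothetical.

def crop_roi(frame, box):
    """Return only the pixels inside box = (top, left, height, width)."""
    top, left, h, w = box
    return [row[left:left + w] for row in frame[top:top + h]]

# A 100x100 "frame" with a 20x20 object region at (40, 40).
frame = [[0] * 100 for _ in range(100)]
roi = crop_roi(frame, (40, 40, 20, 20))

full_size = sum(len(r) for r in frame)  # pixels uploaded without cropping
roi_size = sum(len(r) for r in roi)     # pixels uploaded with cropping
```

Uploading the ROI here is a 25x reduction before any codec is even applied, which is the intuition behind compressing only the objects the server must process.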

In addition, in a Ubiquitous Robot Companion (URC) system, in which a plurality of intelligent robots are connected to and managed by a single server, the amount of information transmitted to the server must be minimized to reduce the load imposed on the network.

In conventional intelligent service robots, most vision processing recognizes the external environment, the user's face, and the user's height through image information collected from a single (mono) camera while following the user. To avoid obstacles that appear while following a person, safe travel is achieved by combining sensor information such as ultrasonic and infrared readings. As a result, the intelligent service robot demands excessive processor computation and power, making it difficult to apply to a battery-powered robot.

Moreover, even when the complicated operations of the intelligent service robot are offloaded to the server side through a network-based terminal robot, the conventional technology causes excessive communication traffic between the terminal robot and the server while the robot follows the user.

Conventional stereo-vision techniques, which acquire image information through a pair of cameras mounted on an intelligent service robot, have mostly focused on stereo matching of the images acquired from each camera. The pre- and post-processing involved, and the techniques for following a user by recognizing the user's shape, have been disclosed only as discrete element technologies in individual patents, or are mostly not specific. There is therefore a need for an intelligent service robot technology that can follow a user stably while avoiding obstacles, with only a small load on the built-in processor.

Until now, intelligent service robots for home users have relied on motion detection, face recognition, pattern matching, and color-difference information. These techniques, however, are not well suited to intelligent service robots: they require large amounts of memory and excessive processor computation, and they are sensitive to illumination.

Meanwhile, autonomous navigation for existing intelligent service robots has relied on localization, map building, and navigation within the space where the robot operates. This approach, however, requires artificial external sensors to enable self-position recognition, recognizes natural landmarks only in specific situations, cannot avoid suddenly appearing obstacles, and demands a CPU and memory that make it difficult to apply to interactive real-time mobile systems.

Korean Patent No. 10-0834577 (May 27, 2008)

An object of the present invention, devised to solve the above problems, is to provide an autonomous serving robot system using a natural UI that can be monitored and controlled remotely in real time, can take orders through the natural UI, and can move while distinguishing between signs and obstacles.

According to an aspect of the present invention, there is provided an autonomous serving robot system comprising: an autonomous serving robot having a three-dimensional motion recognition camera, an obstacle recognition sensor, a control processor that controls the camera and the sensor and the movement of the robot, and a wireless communication device; and a mobile device capable of real-time monitoring and control through wireless communication with the control processor.

Preferably, the three-dimensional motion recognition camera is a KINECT-based camera, and the obstacle recognition sensor includes an object recognition sensor and a front distance sensor provided in the Kinect.

Preferably, the mobile device is any one of a smartphone, a PDA, and a tablet PC. The autonomous travel control of the robot through the mobile device may be realized with real-time images using the template matching built into OpenCV, and the image streaming is preferably implemented as MJPEG streaming by modifying the header portion in TCP/IP socket communication.

As described above, the present invention provides an autonomous serving robot system using a natural UI that, unlike existing systems, can be monitored and controlled in real time using a smartphone, takes orders through the natural UI, and requires no preset track as conventional serving robots do. Using the KINECT sensor and a front distance sensor as recognition sensors, the robot distinguishes between signs and obstacles, so that food or goods ordered by customers at a restaurant or product store are delivered easily and accurately.

FIG. 1 is a block diagram of an autonomous serving robot system using a natural UI according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of serving by an autonomous serving robot system using a natural UI according to an embodiment of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The advantages and features of the present invention, and how to accomplish them, will become apparent from the embodiments described in detail below with reference to the accompanying drawings. The present invention is not, however, limited to the embodiments described herein and may be embodied in other forms. The embodiments are provided so that those skilled in the art can easily carry out the technical idea of the present invention.

In the drawings, embodiments of the present invention are not limited to the specific forms shown, which are exaggerated for clarity. The same reference numerals denote the same components throughout the specification.

The expression "and/or" is used herein to mean at least one of the elements listed before and after it. Singular forms include plural forms unless the context clearly dictates otherwise. The terms "comprises" or "comprising" used in this specification do not exclude the presence or addition of one or more other components, steps, operations, elements, and/or devices.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the drawings.

FIG. 1 is a block diagram of an autonomous serving robot system using a natural UI according to an embodiment of the present invention. As shown in FIG. 1, the autonomous serving robot system according to an embodiment of the present invention includes: an autonomous serving robot 100 having a three-dimensional motion recognition camera 130, an obstacle recognition sensor 150, a control processor 113 that controls the camera 130 and the sensor 150 and the movement of the robot 100, and a wireless communication device; and a mobile device 200 capable of real-time monitoring and control through wireless communication with the control processor 113.

As described above, the present invention provides an autonomous serving robot system using a natural UI in which, unlike existing serving robots 100, the robot can be monitored and controlled in real time using a smartphone, takes orders through the natural UI, and requires no predetermined track. With the KINECT sensor, a three-dimensional motion recognition sensor, and a front distance sensor, the robot distinguishes between signboards and obstacles while moving, so that the food or goods a customer conveniently orders are delivered accurately.

The three-dimensional motion recognition camera 130 is preferably a KINECT-based camera 130. Kinect is a camera that lets a user control games with the body alone, without a separate controller. It was first announced as "Project Natal" on June 1, 2009 at E3, and the official name "Kinect" was announced at E3 2010.

The Kinect recognizes the player's motion with its camera 130 module and the player's voice with its microphone module, and requires a separate power source to connect to older Xbox 360 models. It launched in the United States as a game-oriented motion recognition camera 130 together with 17 Kinect titles, aimed at capturing the living-room market.

The Xbox One console is equipped with an updated version of Kinect. The new Kinect uses a 1080p camera with a time-of-flight (ToF) depth sensor and processes data at a rate of about 2 gigabits per second to read its environment.

Unlike the optional Kinect of the Xbox 360, the Xbox One console does not operate unless the Kinect sensor is connected, although users can turn off all Kinect functions while the sensor remains connected. In the embodiment of the present invention, the Kinect serves as a sensor for detecting obstacles in front of the robot 100, and even when an order is placed with a simple gesture, it recognizes the motion accurately and executes the command.

In an embodiment of the present invention, an autonomous serving robot system is provided that realizes a natural UI using the Kinect device and carries out the ordering process conveniently and naturally.

A natural UI is literally a 'natural user-manipulation environment': rather than pressing arrow keys on a keyboard to make a game character throw a punch, the user simply performs the motion, reaching for a more natural manipulation experience.

A natural UI refers both to the user's actions and to the technology that uses real human behavior itself, such as voice and gestures, to manipulate a PC that is today operated mostly with a keyboard and mouse. The natural UI, as a seedbed of innovation expected to fundamentally change the PC environment, is applied to the embodiment of the present invention.

As a device capable of realizing such a natural UI, and a typical product equipped with motion recognition technology that users can experience today, the three-dimensional motion recognition camera (hereinafter "3D camera") 130 in the robot system according to the embodiment of the present invention is a Kinect.

The principle of the Kinect begins with reading the space. The cameras 130 mounted on the Kinect are designed to project infrared light, read the reflected infrared, and distinguish the colors of the real space. The upgraded Kinect has been refined to the point that it can distinguish a person's fingers and even detect the heartbeat of the people playing.

As described above, the Kinect that implements the natural UI not only allows the user to operate various electronic devices, such as a living-room TV, with voice or gestures at any time, but also realizes a natural UI for services through such devices.

FIG. 2 is a schematic diagram of serving by an autonomous serving robot system using a natural UI according to an embodiment of the present invention.

As shown in FIG. 2, the autonomous serving robot system using the natural UI according to the embodiment of the present invention performs autonomous serving largely in two steps. First, to take an order, the robot 100 recognizes a person through the camera 130 and positions itself to receive the order; when the person uses a hand gesture to drag a desired item or food into a basket, the order is completed. Here, human recognition uses the Kinect software to extract the hand joints from the skeleton stream.
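As a rough illustration of this ordering step (the patent does not give an algorithm), the sketch below tracks a hand-joint position taken from the skeleton stream and registers an order when the hand ends up inside an on-screen "basket" region. The basket coordinates, the joint format, and the item identifier are all assumptions for illustration.

```python
# Hypothetical sketch of drag-into-basket ordering via hand-joint tracking.
# BASKET is an assumed normalized screen region, not from the patent.

BASKET = (0.7, 1.0, 0.7, 1.0)  # (x_min, x_max, y_min, y_max)

def in_basket(x, y):
    x0, x1, y0, y1 = BASKET
    return x0 <= x <= x1 and y0 <= y <= y1

def detect_drag_order(hand_track, grabbed_item):
    """Complete the order if the tracked hand ends inside the basket."""
    if not hand_track:
        return None
    x, y = hand_track[-1]
    return grabbed_item if in_basket(x, y) else None

# Hand moves from the menu area into the basket while "holding" item 3.
track = [(0.2, 0.3), (0.5, 0.6), (0.85, 0.9)]
order = detect_drag_order(track, grabbed_item=3)
```

In a real system the hand positions would come from the Kinect skeleton stream rather than a hard-coded list, and the drag would be gated on a grab gesture.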

Second, as shown in FIG. 2, the robot recognizes signboards and travels autonomously to fetch the ordered goods. If an obstacle appears during autonomous driving, it is detected by the Kinect sensor or the front distance sensor, and the robot stops immediately and searches for a new route. During this process, the control processor 113 of the autonomous mobile robot 100 communicates through the wireless communication device, enabling control over real-time streaming video using the mobile device 200. In the embodiment of the present invention, an Android-based smartphone is used. Here, the mobile device 200 is preferably a smartphone, a PDA, or a tablet PC, but any device that is convenient to carry and can run various kinds of programs like a PC is acceptable.
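The stop-and-replan behaviour described above can be sketched minimally as follows. The sensor interface, the 0.5 m threshold, and the replanner are assumptions made for illustration; the patent only states that the robot stops and searches for the route again.

```python
# Minimal sketch: stop immediately and replan when either the Kinect depth
# reading or the front distance sensor reports a nearby obstacle.

STOP_DISTANCE_M = 0.5  # assumed safety threshold, not from the patent

def drive_step(depth_m, front_m, replan):
    """Return ('stop', new_route) on a nearby obstacle, else ('forward', None)."""
    if min(depth_m, front_m) < STOP_DISTANCE_M:
        return ("stop", replan())
    return ("forward", None)

# Front distance sensor sees an obstacle at 0.3 m: stop and replan.
cmd, route = drive_step(depth_m=2.0, front_m=0.3,
                        replan=lambda: ["left", "forward"])
```

Taking the minimum of the two sensors means either one can trigger the stop, matching the "Kinect sensor or front distance sensor" wording above.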

For such autonomous travel, it is desirable to use the template matching built into OpenCV, and for streaming to implement MJPEG streaming by modifying the header part in TCP/IP socket communication. Here, OpenCV (Open Source Computer Vision) is an open-source computer vision library usable on various platforms such as Windows and Linux, focused on real-time image processing. The autonomous serving robot 100 according to an embodiment of the present invention can thus be remotely controlled while a real-time streaming video is viewed on a mobile device 200 such as a smartphone.
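The patent relies on OpenCV's built-in template matching for signboard recognition. As a rough, pure-Python illustration of what such matching does, the sketch below slides a template over an image and scores each position by sum of absolute differences (OpenCV's `matchTemplate` instead offers correlation-based scores). The tiny image and template are invented example data.

```python
# Sketch of template matching: exhaustively slide the template over the
# image and keep the position with the lowest difference score.

def match_template(image, template):
    """Return the (row, col) of the best-matching template position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
pos = match_template(image, template)  # exact match at row 1, col 1
```

In the robot, the template would be a stored signboard image and the search image a camera frame; OpenCV performs the same sliding search far faster, in C++.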

That is, as shown in FIG. 2, the autonomous serving robot 100 according to the embodiment of the present invention recognizes a person and takes the order, recognizes the signboards and moves according to their directions, receives the ordered item, and returns, so that the item is delivered accurately to the user. If an obstacle is detected while moving, the robot stops automatically and searches for a path to prevent a collision, and remote control from the mobile device 200, such as a smartphone, is possible whenever necessary.
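The "modified header" MJPEG streaming mentioned above is commonly realized by writing each JPEG frame into an HTTP multipart/x-mixed-replace body over the TCP socket, so the phone renders the parts as live video. The sketch below shows that framing; the boundary string and the fake JPEG bytes are illustrative, and the patent does not specify these details.

```python
# Hedged sketch: wrap one JPEG frame as a part of a multipart MJPEG stream.

BOUNDARY = b"frame"  # assumed boundary token

def mjpeg_part(jpeg_bytes):
    """Build the per-frame header + body sent over the stream socket."""
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

# A stand-in "JPEG" (real frames start with 0xFFD8 and end with 0xFFD9).
part = mjpeg_part(b"\xff\xd8fakejpeg\xff\xd9")
```

A server loop would first send a response whose `Content-Type` declares `multipart/x-mixed-replace; boundary=frame`, then emit one such part per captured frame.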

While the invention has been shown and described with respect to specific embodiments thereof, it will be readily understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims.

100: autonomous serving robot, 113: control processor,
115: wireless communication device, 130: three-dimensional motion recognition camera,
150: obstacle recognition sensor, 200: mobile device

Claims (6)

An autonomous serving robot system using a natural UI, comprising: an autonomous serving robot having a three-dimensional motion recognition camera, an obstacle recognition sensor, a control processor for controlling the three-dimensional motion recognition camera and the obstacle recognition sensor and for controlling movement of the robot, and a wireless communication device; and
a mobile device capable of real-time monitoring and control by wireless communication with the control processor.
The system according to claim 1,
wherein the three-dimensional motion recognition camera is a KINECT-based camera.
3. The system according to claim 2,
wherein the obstacle recognition sensor includes an object recognition sensor and a front distance sensor provided on the Kinect.
4. The system according to any one of claims 1 to 3,
wherein the mobile device is
any one of a smartphone, a PDA, and a tablet PC.
5. The system according to claim 4,
wherein the autonomous travel control of the robot via the mobile device implements real-time images using the template matching built into OpenCV.
6. The system according to claim 5,
wherein the streaming of the images
is implemented as MJPEG streaming by modifying the header part in TCP/IP socket communication.

KR1020140018123A 2014-02-17 2014-02-17 self-serving robot system using of natural UI KR20150097049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140018123A KR20150097049A (en) 2014-02-17 2014-02-17 self-serving robot system using of natural UI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140018123A KR20150097049A (en) 2014-02-17 2014-02-17 self-serving robot system using of natural UI

Publications (1)

Publication Number Publication Date
KR20150097049A true KR20150097049A (en) 2015-08-26

Family

ID=54059110

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140018123A KR20150097049A (en) 2014-02-17 2014-02-17 self-serving robot system using of natural UI

Country Status (1)

Country Link
KR (1) KR20150097049A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105945947A (en) * 2016-05-20 2016-09-21 西华大学 Robot writing system based on gesture control and control method of robot writing system
CN106737685A (en) * 2017-01-16 2017-05-31 上海大界机器人科技有限公司 Manipulator motion system based on computer vision with man-machine real-time, interactive
CN108838998A (en) * 2018-07-25 2018-11-20 安徽信息工程学院 Novel robot data collection layer structure
CN108965812A (en) * 2018-07-25 2018-12-07 安徽信息工程学院 Robot panoramic view data acquisition layer structure
CN109079855A (en) * 2018-07-25 2018-12-25 安徽信息工程学院 Robot data collection layer
CN109176605A (en) * 2018-07-25 2019-01-11 安徽信息工程学院 Robot data collection layer structure
CN111949032A (en) * 2020-08-18 2020-11-17 中国科学技术大学 3D obstacle avoidance navigation system and method based on reinforcement learning
KR102217727B1 (en) * 2020-01-29 2021-02-18 윤수정 Voice service device and method


Similar Documents

Publication Publication Date Title
KR20150097049A (en) self-serving robot system using of natural UI
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
JP6968154B2 (en) Control systems and control processing methods and equipment
Sanna et al. A Kinect-based natural interface for quadrotor control
KR102567525B1 (en) Mobile Robot System, Mobile Robot And Method Of Controlling Mobile Robot System
JP2019515407A (en) System and method for initializing a robot-learned route to travel autonomously
EP3037917B1 (en) Monitoring
US20110118877A1 (en) Robot system and method and computer-readable medium controlling the same
US20180005445A1 (en) Augmenting a Moveable Entity with a Hologram
CN106933227B (en) Method for guiding intelligent robot and electronic equipment
JP2014059737A (en) Self-propelled device
US9477302B2 (en) System and method for programing devices within world space volumes
EP2917902B1 (en) Remote control using depth camera
US20140173524A1 (en) Target and press natural user input
CN105681747A (en) Telepresence interaction wheelchair
JP6950192B2 (en) Information processing equipment, information processing systems and programs
US10444852B2 (en) Method and apparatus for monitoring in a monitoring space
KR20190104488A (en) Artificial intelligence robot for managing movement of object using artificial intelligence and operating method thereof
KR20140009900A (en) Apparatus and method for controlling robot
JP2021077311A (en) Human-computer interaction system and human-computer interaction method
WO2018006481A1 (en) Motion-sensing operation method and device for mobile terminal
KR101100240B1 (en) System for object learning through multi-modal interaction and method thereof
US20150153715A1 (en) Rapidly programmable locations in space
Oh et al. Hybrid control architecture of the robotic surveillance system using smartphones
Vincze et al. Perception and computer vision

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination