WO2021040214A1 - Mobile robot and method for controlling same - Google Patents

Mobile robot and method for controlling same

Info

Publication number: WO2021040214A1
Authority: WO (WIPO PCT)
Application number: PCT/KR2020/008496
Other languages: French (fr), Korean (ko)
Prior art keywords: tracking, image, mobile robot, data, scan data
Inventors: 김기현, 김현숙, 한상훈
Original Assignee: 주식회사 케이티 (KT Corporation)
Application filed by 주식회사 케이티
Publication of WO2021040214A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/022 Optical sensing devices using lasers
    • B25J19/023 Optical sensing devices including video camera means
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Definitions

  • the present invention relates to a mobile robot and a control method thereof.
  • Object tracking driving technology is used in various fields, and representatively, there is a guide service robot.
  • the guide service robot follows a customer to guide them to a location or to carry the customer's baggage or other objects.
  • however, conventional object tracking driving technology is difficult to commercialize widely because it relies on expensive sensors or operates only in constrained environments.
  • short-range wireless communication-based tracking technology, as used for personal travel bags, requires the user to wear a wearable device and maps robots to users one-to-one, so it is highly restrictive when many objects and customers must be handled.
  • in addition, the conventional object tracking method using a depth camera often misses the object due to the camera's low resolution and narrow recognition area.
  • the problem to be solved by the present invention is to provide a mobile robot having an efficient object recognition and tracking function, and a control method thereof, using a laser scanner and an image capturing unit without the aid of a positioning infrastructure.
  • a mobile robot according to one aspect includes at least one image capturing unit installed at a predetermined position of the mobile robot, a laser scanner installed at a predetermined position of the mobile robot, a memory storing a program for recognizing and tracking an object using the at least one image capturing unit and the laser scanner, and at least one processor executing the program. The program includes commands for recognizing a tracking target based on the image generated by the at least one image capturing unit, driving the at least one image capturing unit to compare the image captured at the current time with a previously stored image, and re-detecting the object based on the laser scan data output at the current time.
  • the program may include a command for intersecting a plurality of object areas detected from the laser scan data collected at a first time point with a plurality of object areas detected from the laser scan data collected at a second time point later than the first time point, and recognizing the object having the largest intersecting area ratio as the tracking object.
  • the program may include a command for removing noise by applying a morphology operation to laser scan data arranged in two dimensions.
  • the program may include a command for labeling the laser scan data from which noise has been removed, detecting each area having the same label as one object, and selecting, as the tracking object, the object whose detected area matches the image.
  • the program includes a command for detecting an object based on laser sensor data collected in a predetermined scan area, and the predetermined scan area may have a range corresponding to a set height for each body part of a person from the ground.
  • the program may include commands for dividing the laser sensor data grouped by the predetermined scan area into data groups based on the shortest distance between the point groups constituting the grouped data, and calculating a characteristic variable for each scan area within the divided data groups.
  • the characteristic variable may include a total width of group data including the point groups, distances between two adjacent point groups, a width of the group, and an average of the distances.
  • a control method of a mobile robot includes collecting laser scan data, converting the laser scan data composed of a polar coordinate system into a plurality of point groups having position values in a Cartesian coordinate system, comparing the converted laser scan data with image data captured by a camera and binarizing it using the pixel values of the image corresponding to each point group, labeling the binarized laser scan data to detect a plurality of objects, comparing the detected objects with the image data to select a matching object as the tracking object, and driving toward the tracking object.
  • the collecting of the laser scan data further includes removing noise by applying a morphology mask, and the morphology mask may be set in advance to a size capable of covering a person's stride.
  • the driving may further include comparing a plurality of objects detected from the laser scan data at a first scan time point with a plurality of objects detected from the laser scan data at a second scan time point later than the first scan time point, and recognizing the object having the largest intersecting area ratio as the tracking object.
  • recognizing the object having the largest intersecting area ratio as the tracking object may be performed selectively according to a preset condition.
  • the collecting of the laser scan data may further include filtering scan data corresponding to a height set for each body part of a person from among the laser scan data.
  • the detecting may include applying the binarized laser scan data as the labeling target, dividing and grouping the data based on the distance between point groups, and performing labeling that classifies the grouped point groups into the same object.
  • according to another aspect, laser scan data composed of a polar coordinate system is converted into a plurality of point groups having position values in a Cartesian coordinate system, the converted laser scan data is compared with image data captured by a camera and binarized using the pixel values of the image corresponding to each point group, and a plurality of objects detected by labeling the binarized laser scan data are compared with the image data so that a matching object can be selected as the tracking object.
  • the storing may include extracting and registering feature point information from the captured image when the user is determined to be the first user, performing a task requested by the first user and storing the work history, and tracking the first user using the laser scanner. If tracking of the first user fails, feature point information is extracted from an image acquired by photographing the front at the current point in time, and the extracted feature point information is compared with the registered feature point information to search for a matching object.
  • the method may further include searching for a matching object by comparing a skeleton image of the current viewpoint with the registered skeleton image, and tracking the searched object.
  • according to an embodiment of the present invention, an object recognition/tracking algorithm using the image capturing unit and an object recognition/tracking algorithm using the laser scanner are combined to improve the object tracking rate. That is, the recognition range and processing speed of the image capturing unit can be supplemented through the laser scanner.
  • the problem of tracking failure caused by the low resolution of the camera and the narrow angle of view is compensated by using a laser scanner having a field of view capable of covering most of the front.
  • FIG. 1 is a block diagram showing the configuration of a mobile robot according to an embodiment of the present invention.
  • FIG. 2 is a view for explaining the angle of view of the mobile robot of FIG. 1.
  • FIG. 3 is an example of scan data converted into two dimensions according to an embodiment of the present invention.
  • FIG. 4 is an exemplary view of setting a region of a mobile robot according to an embodiment of the present invention.
  • FIG. 5 is a flow chart showing the operation of the mobile robot according to an embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating an object recognition/tracking operation of a mobile robot according to an embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating an object recognition/tracking operation of a mobile robot according to another embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a process of detecting an object through a labeling process according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a laser scan area according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a coordinate system for collecting laser scan data from a mobile robot according to an embodiment of the present invention.
  • FIG. 11 is a diagram illustrating a process of extracting reliable data from labeled laser scan data according to an embodiment of the present invention.
  • FIG. 12 is a diagram for explaining feature variables from laser scan data according to an embodiment of the present invention.
  • FIG. 13 is a diagram illustrating a process of dividing an object based on a feature variable according to an embodiment of the present invention.
  • FIG. 14 is a diagram illustrating a frame matching technique according to an embodiment of the present invention.
  • the devices described in the present invention are composed of hardware including at least one processor, a memory device, a communication device, and the like, and a program that is combined with the hardware and executed is stored in a designated place.
  • the hardware has the configuration and capability to implement the method of the present invention.
  • the program includes instructions for implementing the operating method of the present invention described with reference to the drawings, and executes the present invention by combining it with hardware such as a processor and a memory device.
  • "transmission" or "providing" may include not only direct transmission or provision, but also indirect transmission or provision through another device or via a bypass path.
  • the mobile robot is configured to travel in a desired direction while avoiding obstacles by mounting various detection sensors and autonomous driving control algorithms.
  • Mobile robots do not need to build indoor infrastructure or wear additional equipment such as wearable devices, so they can be applied not only to logistics, but also to hotel room services and shopping carts.
  • FIG. 1 is a block diagram showing the configuration of a mobile robot according to an embodiment of the present invention
  • FIG. 2 is a view for explaining the angle of view of the mobile robot of FIG. 1
  • FIG. 3 is an example of scan data converted into two dimensions according to an embodiment of the present invention
  • FIG. 4 is an example of setting an area of a mobile robot according to an embodiment of the present invention.
  • the mobile robot 100 includes an image capturing unit 101, a laser scanner 103, a traveling device 105, an input device 107, an output device 109, a communication device 111, a storage device 113, a memory 115, and a control unit 117.
  • the image capturing unit 101 is attached to the outside of the mobile robot 100 and generates an image by photographing the outside of the mobile robot 100.
  • One image capturing unit 101 may be installed, but a plurality of image capturing units 101 may be installed depending on the embodiment.
  • the image capture unit 101 may use a depth camera that combines an RGB camera and a depth sensor.
  • the laser scanner 103 may be installed at an angle of view overlapping the image capturing unit 101.
  • the laser scanner 103 is attached to the outside of the mobile robot 100 and detects its surroundings using a rapidly rotating laser measuring beam.
  • the laser scanner 103 obtains scan data by irradiating a laser in the scanning direction while being mounted on the mobile robot 100 and scanning the surface of the object positioned in the scanning direction while the mobile robot 100 is moving.
  • the image capturing unit 101 is installed on the front side of the mobile robot 100 and photographs the front from the current position of the mobile robot 100.
  • One image capture unit 101 is described as being installed, but a plurality of image capturing units 101 may be installed according to embodiments.
  • the image capture unit 101 may be installed on the top of the mobile robot 100 to recognize the entire skeleton of the object.
  • a plurality of laser scanners 103 may be installed at the angle of view overlapping the image capturing unit 101.
  • two laser scanners 103 may be installed at the front and one at the rear of the mobile robot 100.
  • the angle of view of the image capturing unit 101 is approximately 120 degrees, and the laser scanner 103 has a wider angle of view than that of the image capturing unit 101, which may be approximately 360 degrees.
  • the object tracking can be maintained through the laser scanner 103.
  • the traveling device 105 is a device that allows the mobile robot 100 to travel, and may include traveling means such as a plurality of wheels and a motor, and a driving source that provides the driving force applied to the traveling means to drive the mobile robot 100.
  • This traveling device 105 belongs to a well-known technology and a detailed description thereof will be omitted here.
  • the input device 107 is a means for receiving a user command.
  • the input device 107 may include devices such as a keyboard, a touch pad, an optical joystick, and a mouse.
  • the output device 109 may output a visual or an audible output according to a user command transmitted from the input device 107.
  • the output device 109 may include an output device such as a display device or a speaker.
  • the output device 109 may output a user interface (hereinafter, collectively referred to as 'UI') screen.
  • the output device 109 may output voice, sound, and the like.
  • the input device 107 and the output device 109 may be implemented as a touch screen. That is, a touch keypad, a touch button, etc. may be activated on the screen to receive a user command and output various information.
  • the communication device 111 is connected to a remote server (not shown) through a communication network (not shown) to transmit and receive data.
  • the storage device 113 stores data necessary for the operation of the mobile robot 100 and data transmitted and received through the communication device 111.
  • the memory 115 stores programs and data necessary for the operation of the mobile robot 100. These programs are composed of instructions for performing the operation and control of the mobile robot 100.
  • the controller 117 is at least one processor and executes a program stored in the memory 115.
  • the control unit 117 operates based on a robot operating system (ROS), and in detail, an object tracking unit 119, a driving control unit 121, a user interface (UI) control unit 123, a remote control unit 125, and a task management unit Includes (127).
  • the object tracking unit 119 interlocks with the image capturing unit 101 and the laser scanner 103 to recognize and track an object.
  • the object tracking unit 119 requests the driving control unit 121 to detect and avoid an obstacle, reduce the speed, or make an emergency stop in the process of recognizing and tracking an object.
  • the object tracking unit 119 generates a workplace map for navigation of the mobile robot 100 and processes a navigation function.
  • the workplace refers to the environment in which the mobile robot 100 travels.
  • the object tracking unit 119 may operate in a tracking mode and a driving mode.
  • the object tracking unit 119 receives and processes raw data from the image capturing unit 101 and the laser scanner 103 so that it can be used for object recognition and tracking.
  • the object tracking unit 119 extracts a depth map and skeleton data from an image generated by the image capturing unit 101.
  • the object tracking unit 119 drives the driving control unit 121 to follow only the designated object by recognizing the object and tracking it continuously.
  • the object is a moving object, and in the present invention, the object means a person to be tracked.
  • the objects may be employees and guests.
  • the object tracking unit 119 identifies an object to be tracked based on the image received from the image capturing unit 101 and tracks the identified object using scan data received from the laser scanner 103. In this way, the object tracking unit 119 may improve the object tracking rate by combining the image capturing unit 101 and the laser scanner 103 to recognize and track the object.
  • the object tracking unit 119 recognizes the object by grasping the position of the skeleton of the object using the image received from the image capturing unit 101.
  • the object tracking unit 119 generates a skeleton image by skeletonizing the image received from the image capturing unit 101.
  • Skeletonization is a method of expressing the location of relatively unmodified body parts and connection information between the body parts, and various known algorithms can be used.
  • the object tracking unit 119 assumes that the skeleton image is for one person.
  • the object tracking unit 119 may process an error and request re-recognition when a plurality of skeletons are detected.
  • an input for selecting one of a plurality of skeletons may be received from a user through the input device 107. At this time, the selected skeleton is determined as a tracking object.
  • the object tracking unit 119 collects laser scan data scanned every second from the laser scanner 103. Then, from the collected laser scan data, primary filtering is performed to extract only the laser scan data of a preset area.
  • the preset area may be set to a height corresponding to a person's legs, hips, and the like.
  • the object tracking unit 119 performs secondary filtering according to a filtering criterion among the primary filtered laser scan data. At this time, based on the point cloud data located at the center of the primary filtered laser scan data, secondary filtering is performed in which only a predetermined number of point cloud data are left for each area.
  • the predetermined number is set differently for each leg height, hip height, and back height.
  • the object tracking unit 119 performs a binarization operation on the secondary filtered laser scan data. Then, a morphology mask and labeling are applied to the binarized laser scan data to detect a plurality of objects.
  • the object tracking unit 119 compares the plurality of object areas with the skeleton image and selects an object identical to the skeleton image (shape) as the tracking object. Since the skeleton image can be regarded as a type of shape data, the object tracking unit 119 selects an object area that matches the skeleton image as a tracking object.
  • after recognizing the object through the above process, the object tracking unit 119 then compares the previous object area with the next object area and recognizes the object with the largest intersection ratio as the tracking object.
  • here, "previous" and "next" refer to the collection time points of the laser scan data.
  • the object tracking unit 119 tracks the object by converting the scanned data scanned with the laser scanner into an object through pre/post processing processes.
  • the scan data obtained by the object tracking unit 119 from the laser scanner 103 is composed of a polar coordinate system consisting of a radius and an angle.
  • the object tracking unit 119 converts the scan data of the polar coordinate system into a rectangular coordinate system composed of X and Y. The converted scan data forms a number of points, that is, a point cloud, each point having X and Y position coordinates.
  • an example of a code for converting scan data in the form of polar coordinates into scan data in the form of rectangular coordinates may be as shown in Table 1.
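Table 1 itself is not reproduced in this text. The following is a minimal sketch of the polar-to-Cartesian conversion described above, assuming the scanner reports a list of ranges at evenly spaced angles; the function and parameter names are illustrative, not taken from the patent.

```python
import math

def polar_to_cartesian(ranges, angle_min, angle_increment):
    """Convert laser ranges (polar: radius + angle) into (x, y) points in the robot frame."""
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # skip invalid returns
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```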
  • the object tracking unit 119 generates scan data in units of at least 1 centimeter (cm) so that the mobile robot 100 can smoothly track the object.
  • the scan data generation unit refers to a moving distance of the mobile robot 100. That is, each time the mobile robot 100 moves in units of 1 centimeter, scan data may be generated.
  • the scan distance for initial object recognition can be secured within 2 meters.
  • the object tracking unit 119 converts the scan data of the polar coordinate system into 2D data of the Cartesian coordinate system, and the converted 2D data is expressed as shown in FIG. 3.
  • the 2D data has a value of 0 or 1. That is, the object tracking unit 119 binarizes the converted scan data. At this time, the object tracking unit 119 maps the points, each having X and Y position coordinates, onto the image acquired from the image capturing unit 101, and expresses each point as 0 or 1 based on the value of the corresponding pixel in the image.
  • a reference pixel value (eg, 255) may be determined, and if the pixel value corresponding to each point is smaller than the reference pixel value, it may be expressed as 0, and if it is greater than or equal to the reference pixel value, it may be expressed as 1.
  • each point having X and Y position coordinates and a value of 0 or 1 is defined as a pixel.
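As a hedged sketch of the binarization described above, the snippet below assumes each laser point has already been projected to a pixel coordinate (u, v) in the camera image; the projection itself and the reference value are assumptions for illustration.

```python
import numpy as np

def binarize_points(points_px, image, reference=255):
    """Assign 0 or 1 to each projected laser point based on the corresponding image pixel value."""
    h, w = image.shape  # grayscale image as a 2D array
    values = []
    for u, v in points_px:
        inside = 0 <= v < h and 0 <= u < w
        values.append(1 if inside and image[v, u] >= reference else 0)
    return np.array(values, dtype=np.uint8)
```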
  • a morphology operation is a technique that is widely used in image processing.
  • dilation, one of the morphological operations, combines two adjacent objects and fills holes inside an object. The erosion operation removes noise and small protrusions at the object boundary.
  • a morphology mask having a size of 21 ⁇ 21 may be used to cover the stride of an object.
  • This mask can cover a person's normal stride of 40 centimeters.
  • the morphology mask is not limited thereto, and morphology masks of various sizes may be used.
  • the morphology operation of the object tracking unit 119 may use code examples shown in Table 2 below.
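Table 2 is not reproduced here. The sketch below shows how such a dilation-then-erosion step could look with OpenCV, assuming the binarized scan points have been rasterized into a 2D occupancy grid; the 21 × 21 mask size follows the text, while the grid itself is an assumption.

```python
import cv2

def denoise_scan_grid(grid):
    """Apply dilation followed by erosion with a 21x21 mask to a 0/1 uint8 scan grid."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 21))
    dilated = cv2.dilate(grid, kernel)   # merge nearby cells (e.g. the two legs of one person)
    eroded = cv2.erode(dilated, kernel)  # shrink back, removing isolated noise and thin protrusions
    return eroded
```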
  • the object tracking unit 119 classifies the object to be tracked from the objects not to be tracked using the scan data acquired from the laser scanner 103 to speed up initial recognition, and performs labeling. In addition, other objects are removed from the list of objects of interest, keeping only the tracked object. Table 3 shows an example of a code for removing the remaining objects except for the tracking object.
  • the labeling technique is particularly widely used as a preprocessing process for object recognition.
  • the object tracking unit 119 may define a set of adjacent pixels having a pixel value of 1 in the scan data as shown in FIG. 3 as an object.
  • An object consists of one or more adjacent pixels. Multiple objects can exist in one scan data, and labeling is the task of assigning a unique number to all pixels belonging to the same object.
  • the object tracking unit 119 may perform labeling using a 4-neighbor connectivity mask. Therefore, pixels located in a diagonal relationship have different labels. Two adjacent neighboring pixels among the neighboring pixels adjacent to a pixel at an arbitrary position (X, Y) can be classified into four cases as follows.
  • the object tracking unit 119 builds an equivalence table by propagating labels in the first scan, and assigns a unique label to each pixel belonging to an object by referring to the equivalence table in the second scan.
  • the object tracking unit 119 recognizes an object composed of pixels having the same label. Then, a plurality of recognized objects are matched with a skeleton image to select a tracking object from among the plurality of objects.
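As a stand-in for the two-pass labeling described above, the sketch below uses scipy's connected-component labeling with a cross-shaped structuring element, which reproduces the 4-neighbor connectivity (diagonal pixels receive different labels).

```python
import numpy as np
from scipy import ndimage

def label_objects(grid):
    """Label connected components of the denoised 0/1 scan grid using 4-connectivity."""
    four_connectivity = np.array([[0, 1, 0],
                                  [1, 1, 1],
                                  [0, 1, 0]])
    labeled, num_objects = ndimage.label(grid, structure=four_connectivity)
    return labeled, num_objects  # labeled: same shape as grid, 0 = background, 1..N = objects
```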
  • the object tracking unit 119 may fail to recognize the tracking object. To compensate for this, the object tracking unit 119 uses a frame-to-frame matching algorithm to distinguish the object to be tracked from other objects. For efficient frame matching, it is assumed that a previously detected object does not suddenly move a long distance or disappear. If the part of the scan data where a person was detected is treated as a spatial region and the person's stride is taken into account, the person will still exist within a certain range of space in the next frame, so the person can be recognized by spatial region matching. Since scan data is generated at a rate of several frames per second, a large overlap area occurs between frames. Through this spatial region matching technique, the object tracking unit 119 can quickly recover when the tracking object is temporarily missed.
  • the object tracking unit 119 uses a method of tracking an object having the largest ratio of an intersecting area among the objects detected in the previous frame and the objects detected in the current frame.
  • the object tracking unit 119 tracks the object recognized through the laser scanner 103 and cross-comparisons the laser scan data acquired in real time on a frame-by-frame basis.
  • the object with the largest crossing area is tracked by cross-comparison of the area where the tracked object is recognized in the previous frame and the current frame.
  • Another name for this spatial domain matching technique is the best matching method.
  • an optimal matching algorithm is used. In this way, the path the object has moved is predicted and traced without the need for repetitive detection of the object.
  • an algorithm called Kalman filter can be additionally used.
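The patent does not specify the filter model or parameters; purely as an illustration, a constant-velocity Kalman filter over the tracked object's 2D position could look like the following sketch (dt and the noise variances are assumptions).

```python
import numpy as np

class ConstantVelocityKF:
    """Simple constant-velocity Kalman filter smoothing a 2D tracked position."""

    def __init__(self, dt=0.1, process_var=0.5, meas_var=0.05):
        self.x = np.zeros(4)                                        # state: [px, py, vx, vy]
        self.P = np.eye(4)                                          # state covariance
        self.F = np.eye(4); self.F[0, 2] = dt; self.F[1, 3] = dt    # constant-velocity motion model
        self.H = np.zeros((2, 4)); self.H[0, 0] = 1.0; self.H[1, 1] = 1.0  # observe position only
        self.Q = np.eye(4) * process_var                            # process noise
        self.R = np.eye(2) * meas_var                               # measurement noise

    def update(self, measured_xy):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the position measured from the laser scan
        y = np.asarray(measured_xy, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                           # smoothed position estimate
```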
  • the object tracking unit 119 may deactivate the spatial area matching technique according to conditions during tracking driving. For example, when neither the mobile robot 100 nor the object is moving, that is, when a person waits in a space such as an elevator, other objects may be detected around the mobile robot 100. If the tracking object were then changed to another object, a problem would occur in the service, so in this case the spatial area matching technique is deactivated.
  • for example, when the robot and a worker are waiting in a space such as an elevator, other workers or customers may be exposed to the camera and the laser sensor. Since the service may be affected if the tracking target is changed at this time, this situation can be added to the deactivation conditions.
  • in Equation 1, if the change between the previous coordinate (previous.coordinate) and the current coordinate (current.coordinate) of the object is less than the threshold, the object is recognized as the tracking user; otherwise, it is treated as another object (otherUser).
  • the object tracking unit 119 recognizes the object as the tracking object when the current coordinates of the object fall within the matching area; expressed as an equation, this is Equation 2.
  • Equation 2 represents the case where the current coordinates (currentUserPosition(x, y, z)) of the tracking object user(s) lie within the intersection area.
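A compact sketch of the decision rules of Equations 1 and 2; the threshold value and the rectangular representation of the matching area are assumptions for illustration.

```python
def is_tracking_user(prev_xy, curr_xy, threshold=0.5):
    """Equation 1: still the tracked user if the coordinate change stays below the threshold."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    return (dx * dx + dy * dy) ** 0.5 < threshold  # otherwise treat as otherUser


def in_matching_area(curr_xy, area):
    """Equation 2: the object is the tracking object if its current position lies inside the
    intersection (matching) area, given here as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = area
    return x_min <= curr_xy[0] <= x_max and y_min <= curr_xy[1] <= y_max
```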
  • the object tracking unit 119 may recognize a region in which a person is based on the laser scan data, and use an image when a person is specifically identified.
  • the object tracking unit 119 classifies a person, that is, an object other than a tracked object in an area recognized as an object, as a non-tracking object such as an obstacle.
  • a tracked object is selected, information on the non-tracked object is deleted in order to determine that objects other than the selected object are not the target of tracking.
  • the object tracking unit 119 may match the skeleton image with the object recognized by the laser scanner 103 to continuously track while minimizing the re-recognition process.
  • the object tracking unit 119 may acquire 3D information on an image of a target scene in real time using distance sensors.
  • the state of the object recognized by the object tracking unit 119 is defined as "New”, “Visible”, “Tracked”, “NotVisible”, and “Lost” states.
  • the “New” state is a state in which an object is newly detected, and the start of object recognition can be known.
  • the “Visible” state is a state in which an object is detected within the angle of view. In the “Tracked” state, the object is being tracked and skeleton information is available.
  • the “NotVisible” state is a state in which an object cannot be detected and is invisible within the angle of view.
  • the “Lost” state is a state in which the object has been completely lost.
  • the object tracking unit 119 updates the current object tracking state while tracking the object based on these object states.
  • the object tracking unit 119 may process the object in a “NotVisible” state when the object is out of the field of view of the image capturing unit 101.
  • when the object tracking unit 119 fails to track the object, it may mistakenly re-recognize a different object or continue to fail to track the object. Even if the object exists within the angle of view, the object tracking unit 119 may miss the object depending on the object's posture or situation. For this reason, if the object is not visible for about 10 seconds or more after object tracking fails, it is determined that the object has been lost and it can be processed in a "Lost" state.
  • when the object tracking unit 119 re-recognizes an object whose tracking had failed, the same object ID as before is assigned and the object may be processed in a "Tracked" state.
  • when a new object is detected, the object tracking unit 119 may process the object in a "New" state.
  • when the object is detected within the angle of view, the object tracking unit 119 may process the object in a "Visible" state.
  • the object tracking unit 119 recognizes a new object within a preset radius, for example, within 1m, based on the coordinate values (X, Y, Z) in the space of the missed object as the main object and proceeds with the tracking. If object recognition is not performed for a certain period of time, it is processed as a “Lost” state, and an emergency stop is requested to the driving control unit 121. In addition, an emergency stop alarm may be requested from the UI controller 123.
  • various alarms may occur according to the service provided by the mobile robot 100.
  • when the object tracking unit 119 requests an emergency stop due to an object tracking failure while tracking a hotel employee, it outputs a stop alarm to the outside through the UI control unit 123 so that the employee can take action to have the object re-recognized.
  • a speaker alarm can be output to allow an employee to move within the object tracking radius.
  • if the object tracking unit 119 fails in the middle of tracking a customer of the hotel, it instructs the remote control unit 125 to request an alarm from a server (not shown) located at a remote site, and requests the driving control unit 121 to return to a preset designated location.
  • when a destination has been set by a customer, the object tracking unit 119 does not regard the object as having left even if the object is not detected within a certain radius around the destination; when the destination is reached, it is processed as a normal arrival and the driving control unit 121 is then requested to return to a preset designated location.
  • the object tracking unit 119 performs dynamic distance calculation and speed control. To this end, three distance states, that is, "Far”, “Near”, and “InZone” are defined according to how far the distance between the object and the mobile robot 100 is.
  • the Max(d) and Min(d) values may be set to 1.1 m and 1.0 m, respectively. Since the laser scanner 103 scans in real time every second, the object tracking unit 119 classifies the distance between the mobile robot 100 and the tracked object as one of "Far", "Near", and "InZone" every second. Expressed as an equation, this is Equation 3.
  • in Equation 3, "Far" is defined as the case where the user distance (the distance between the object and the mobile robot) is greater than Max(d), "Near" as the case where the user distance is less than Min(d), and "InZone" as the case where the user distance is greater than Min(d) and less than Max(d).
  • the object tracking unit 119 determines the state of the user distance and provides it to the driving control unit 121.
  • the driving control unit 121 pre-designates driving control information for each user distance state. Based on this, when the state of the user distance is "Far”, the driving control unit 121 increases the driving speed of the mobile robot 100. In addition, the driving control unit 121 maintains the driving speed of the mobile robot 100 when the state of the user distance is “inZone”. In addition, when the state of the user distance is "Near”, the driving control unit 121 may reduce the driving speed of the mobile robot 100 or stop the mobile robot 100 in an emergency.
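A sketch of the Equation 3 classification and the speed response described above; Max(d) = 1.1 m, Min(d) = 1.0 m and the 6 km/h ceiling follow the text, while the adjustment step is an assumption.

```python
def distance_state(user_distance, min_d=1.0, max_d=1.1):
    """Equation 3: classify the robot-to-object distance as Far, Near or InZone."""
    if user_distance > max_d:
        return "Far"
    if user_distance < min_d:
        return "Near"
    return "InZone"


def adjust_speed(current_speed, state, step=0.1, max_speed=6.0 / 3.6):
    """Speed up when Far, hold when InZone, slow toward a stop when Near (speeds in m/s)."""
    if state == "Far":
        return min(current_speed + step, max_speed)  # cap at roughly a fast walking pace (~6 km/h)
    if state == "Near":
        return max(current_speed - step, 0.0)
    return current_speed
```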
  • the surroundings of the mobile robot 100 are divided into an obstacle zone and a tracking zone.
  • the object tracking unit 119 requests an emergency stop from the driving control unit 121.
  • the obstacle area and the tracking area may be changed according to embodiments, and may be set differently according to the tracking environment.
  • an object can generally be tracked up to about 10 meters away, but in the present invention the object tracking unit 119 can dynamically limit the scan area so that objects farther away than the object selected as the tracking target are not scanned. By limiting the scan area, unnecessary scans and operations can be minimized and the accuracy of the scan data can be improved.
  • An example of a code limiting the scan area may be shown in Table 6.
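Table 6 is not shown in this text; below is a minimal sketch of such dynamic scan-range limiting, assuming the tracked object's last known distance is available (the margin value is an assumption).

```python
def limit_scan_range(ranges, target_distance, margin=0.5, max_range=10.0):
    """Ignore laser returns farther than the tracked object plus a margin."""
    limit = min(target_distance + margin, max_range)
    return [r if r <= limit else float("inf") for r in ranges]  # inf marks an ignored return
```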
  • the object tracking unit 119 may generate a map and recognize its own location through SLAM (Simultaneous localization and mapping) technology.
  • SLAM Simultaneous localization and mapping
  • the object tracking unit 119 determines whether the location of the mobile robot 100 measured through the distance sensor matches the location information of the mobile robot 100 on the map.
  • the object tracking unit 119 determines the initial position of the mobile robot 100. This position indicates how far the mobile robot 100 is from the charging station (not shown) where it charges its battery, and is composed of the travel distance from the charging station and the rotation angle collected from the gyro sensor. That is, the initial position information of the mobile robot 100 is composed of (x, y, radian), where radian represents the rotation angle.
  • if the initial position is not accurate, the coordinate information of the tracking object is also not accurate, and thus the initial position of the mobile robot 100 needs to be matched.
  • the object tracking unit 119 acquires distance information about how far the object is from the mobile robot 100 through the laser scanner 103, and the distance information and the initial position By using, the position of the object may be calculated as a position value relative to the mobile robot 100.
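A sketch of this relative-position calculation from a single laser return (range and bearing), also transformed into the map frame using the robot's (x, y, radian) pose; the names are illustrative.

```python
import math

def object_position(robot_pose, obj_range, obj_bearing):
    """robot_pose: (x, y, radian) of the mobile robot; obj_range/obj_bearing: laser measurement."""
    rx, ry, rtheta = robot_pose
    # position of the object relative to the robot
    local_x = obj_range * math.cos(obj_bearing)
    local_y = obj_range * math.sin(obj_bearing)
    # the same position expressed in the map frame
    map_x = rx + local_x * math.cos(rtheta) - local_y * math.sin(rtheta)
    map_y = ry + local_x * math.sin(rtheta) + local_y * math.cos(rtheta)
    return (local_x, local_y), (map_x, map_y)
```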
  • the driving control unit 121 is a robot adapter and performs linear/angular speed calculation, motor revolutions per minute (RPM) calculation, speed control, encoder travel distance calculation, robot state value management, and the like.
  • the driving control unit 121 includes a controller and an interface for driving and controlling hardware of the mobile robot 100.
  • the driving control unit 121 performs acceleration/deceleration by calculating the linear speed according to the distance to the tracked object output from the object recognition unit 119. Further, the angular velocity is calculated according to the moving direction of the object to calculate RPM (revolutions per minute) for the differential motor, and based on this, the traveling device 105 is controlled.
  • the driving control unit 121 transmits the rotation direction and the left/right motor RPM, converted from the linear and angular speeds, to the traveling device 105 in real time so that the mobile robot 100 can move safely and smoothly to the target position (a sketch of this conversion is given below, after the speed-level description).
  • feedback control is performed by exchanging the state values of the mobile robot 100, such as whether the front and rear bumpers have collided and the encoder values of the left and right motor rotations, with the traveling device 105.
  • from these values, the moving distance and rotation direction of the mobile robot 100 may be calculated and used as base information necessary for creating a map and tracking the moving path of the mobile robot 100.
  • the driving control unit 121 outputs a driving command such as a rotation direction, a linear speed, and an angular speed to the driving device 105 based on the result of calculating the position of the object and its own position on the map.
  • the driving control unit 121 may divide the speed control of the traveling device into levels L1 to L7 (up to the speed at which a person walks quickly, for example, 6 km/h).
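As a sketch of the conversion mentioned above from linear/angular speed to left and right motor RPM for a differential drive, with the commanded speed capped near 6 km/h; the wheel radius and wheel separation are assumptions.

```python
import math

def cmd_vel_to_rpm(linear, angular, wheel_radius=0.075, wheel_separation=0.35):
    """Convert linear (m/s) and angular (rad/s) velocity into left/right wheel RPM."""
    linear = max(min(linear, 6.0 / 3.6), -6.0 / 3.6)      # cap at ~6 km/h (fast walking speed)
    v_left = linear - angular * wheel_separation / 2.0    # left wheel linear speed
    v_right = linear + angular * wheel_separation / 2.0   # right wheel linear speed
    to_rpm = 60.0 / (2.0 * math.pi * wheel_radius)
    return v_left * to_rpm, v_right * to_rpm
```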
  • the UI control unit 123 outputs the contents necessary for the operation using the mobile robot 100 to the output device 109.
  • data input from the input device 107 may be output on the screen.
  • the UI controller 123 may provide a UI through which an operator can improve work efficiency by using a mobile robot, such as a logistics list, item inspection, and a transport route.
  • the UI controller 123 may provide a UI such as barcode recognition, product information, order information, and route search.
  • the remote control unit 125 is connected to the remote server 200 through the communication device 111 and interlocks with the remote server 200 to report the state of the mobile robot 100 and to carry out control of the mobile robot 100.
  • the remote control unit 125 provides the object recognition status (the five states described above) determined by the object tracking unit 119 to the remote server 200 so that the object recognition status of the mobile robot 100 can be queried from a remote location.
  • the remote control unit 125 may receive a control command from the remote server 200 and control the mobile robot 100 to operate based on the control command.
  • the task management unit 127 is in charge of efficient operation of the mobile robot 100 through real-time monitoring, task management, operation status management, and automation information system linkage.
  • the task management unit 127 may manage tracking authority.
  • the tracking authority is used when a specific user, such as a hotel employee, rather than an unspecified number of users, uses the mobile robot; it covers handing the robot over to another employee while working in tracking mode and terminating the job. At this time, the robot is set so that it cannot be used by anyone other than the changed tracking target.
  • FIG. 5 is a flow chart showing an operation of a mobile robot according to an embodiment of the present invention, and shows an operation according to a service scenario in a specific place where the mobile robot is installed.
  • the specific place may be a hotel, but is not limited thereto.
  • a skeleton image of a first user (e.g., an employee; hereinafter, collectively referred to as "employee" for convenience of explanation) is registered and the employee DB is built (S101).
  • the control unit 117 of the mobile robot 100 performs authentication before starting object recognition/tracking (S103).
  • the controller 117 may perform authentication by requesting input of authentication information (eg, biometric authentication information such as a fingerprint, a face, etc., a password, etc.) stored in the storage device 113.
  • the controller 117 may perform authentication by requesting input of authentication information stored in the remote server 200.
  • the image photographing unit 101 of the mobile robot 100 photographs the front side and generates an image (S105).
  • the object tracking unit 119 of the mobile robot 100 generates a skeleton image by skeletonizing the generated image (S105) (S107).
  • the object tracking unit 119 of the mobile robot 100 compares the skeleton image generated (S107) with the skeleton image registered in the employee DB (S109) to determine whether the skeleton image represents the employee (S111).
  • the object tracking unit 119 of the mobile robot 100 extracts and registers feature point information from the image generated in step S105 (S113).
  • the feature point information may be referred to as feature information representing a specific user.
  • it may include information such as the color and mark of an employee's uniform.
  • the mark may include a hotel label, name tag, or the like written on the uniform.
  • the feature point information may be a skeleton image generated in step S107.
  • the job management unit 127 of the mobile robot 100 performs a job according to the employee's request and stores the job history (S115).
  • Work history includes creating a map for driving or tracking people while carrying luggage.
  • the object tracking unit 119 of the mobile robot 100 tracks the employee recognized (S109) using the laser scanner 103 (S117).
  • the employee tracking using the laser scanner 103 corresponds to the operation of the object tracking unit 119, as described above.
  • the object tracking unit 119 of the mobile robot 100 determines whether the employee being tracked has left, that is, whether the object tracking has failed (S119).
  • if object tracking has not failed, the process is performed again from step S117.
  • if object tracking fails, the object tracking unit 119 extracts feature points from the image captured by the image capturing unit 101 at the current time, compares them with the previously registered feature points (S113), and searches for the matching object, that is, the employee (S121). In other words, a person with the same feature point information is searched for within a certain radius through the camera. Alternatively, a skeleton image may be generated and an object matching the previously registered (S113) skeleton image may be searched for.
  • the object tracking unit 119 requests the driving control unit 121 to move the mobile robot 100 to the position where the searched feature point is found (S123).
  • the job management unit 127 informs the employee of the previously stored work history (S115) through the UI control unit 123, and queries whether the employee accepts continued tracking (S123). In other words, the history of the previous work is announced by voice or on the screen, and the intention to follow is confirmed.
  • if the employee accepts, the object tracking unit 119 starts again from step S115.
  • otherwise, the object tracking unit 119 requests the driving control unit 121 to end object tracking and move to the start position (S127). Alternatively, an employee may be called.
  • if it is determined in step S111 that the user is not an employee but a second user (e.g., a customer; hereinafter, collectively referred to as "customer" corresponding to "employee" for convenience of explanation), the task management unit 127 queries the customer for additional information and destination information through the UI control unit 123 (S129).
  • the additional guest information may be a room number or the like.
  • the object tracking unit 119 extracts and temporarily registers feature point information from the image generated in step S105 (S131).
  • the feature point information may be a skeleton image generated in step S107.
  • the object tracking unit 119 generates navigation information to the destination based on the content received by the UI control unit 123 and starts the route guidance (S133).
  • the object tracking unit 119 uses the laser scanner 103 to track the customer, that is, the object determined in step S109 (S135).
  • customer tracking using the laser scanner 103 corresponds to the operation of the object tracking unit 119, as described above.
  • the object tracking unit 119 determines whether the customer leaves while driving to the destination (S137), and if the customer does not leave, it starts again from step S135.
  • it is determined whether the search is successful (S141); if the search is successful, the area in which the skeleton is detected is recognized as the object, that is, the customer, and the process starts again from step S135.
  • the object tracking unit 119 calls an employee through the UI controller 123 or requests the driving controller 121 to end object tracking and move to the starting position (S143).
  • the object tracking unit 119 generates skeleton data as an image signal and re-recognizes it when noise or interference occurs while tracking an object with a laser signal.
  • tracking may fail.
  • it may be difficult to identify objects by laser scanning alone because a large number of people may continue to flow into the range of the laser scan in an elevator or in the hotel lobby.
  • object tracking may fail. For example, it could be due to highly reflective walls, glass, or an unexpected obstacle.
  • in this case, skeleton data is generated and an object matching the registered skeleton data is re-recognized as the tracking object, as described above.
  • FIG. 6 is a flowchart illustrating an object recognition/tracking operation of a mobile robot according to an embodiment of the present invention.
  • the image photographing unit 101 photographs the front of the mobile robot 100 and generates an image (S201).
  • the object tracking unit 119 generates a skeleton image by skeletonizing the image generated in step S201 (S203).
  • the object tracking unit 119 determines whether the skeleton image generated in step S203 is a registered image (S205).
  • step S205 determines whether the image is registered as a tracking target. If it is not registered, the skeleton image generated in step S203 is registered as a tracking candidate object (S207), and step S205 is then performed again.
  • the object tracking unit 119 recognizes the object with the laser scanner 103 (S209) and tracks the object (S211).
  • it is determined whether the object is being tracked; if so, the process starts again from step S209. On the other hand, if the tracked object is determined not to be the tracking target, object tracking is terminated and the mobile robot moves to the start position (S221).
  • a process in which the object tracking unit 119 combines the image capturing unit 101 and the laser scanner 103 to recognize/track an object will be described as follows.
  • FIG. 7 is a flow chart showing an object recognition/tracking operation of a mobile robot according to another embodiment of the present invention
  • FIG. 8 is a flow chart showing a process of detecting an object through a labeling process according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a coordinate system for collecting laser scan data from a mobile robot according to an embodiment of the present invention
  • FIG. 11 is a diagram illustrating a process of extracting reliable data from labeled laser scan data according to an embodiment of the present invention.
  • FIG. 12 is a diagram for explaining feature variables from laser scan data according to an embodiment of the present invention
  • FIG. 14 is a diagram illustrating a frame matching technique according to an embodiment of the present invention.
  • steps S301 to S313 correspond to step S209 in FIG. 6, that is, an object recognition step.
  • steps S315 to S329 correspond to step S211 in FIG. 6, that is, an object tracking step.
  • the object tracking unit 119 acquires the laser scan data at the time point t1 from the laser scanner 103 (S301).
  • the object tracking unit 119 filters (S303) the acquired laser scan data (S301), and performs a binarization operation on the filtered laser scan data (S305).
  • the object tracking unit 119 removes noise by applying a morphology operation to the laser scan data that has undergone the binarization operation (S307).
  • the object tracking unit 119 performs labeling processing on the laser scan data from which noise has been removed (S309).
  • the object tracking unit 119 compares a plurality of labeled objects with the skeleton image data, and selects a matching object as a tracking object (S311).
  • the distance and area of the labeled object may be calculated, and an object matching the skeleton shape on a plane of the same dimension may be detected.
  • the object tracking unit 119 deletes the remaining objects, other than the tracking object selected in step S311, from among the plurality of labeled objects (S313).
  • the object tracking unit 119 acquires laser scan data at a time point t2 (S315).
  • t1 and t2 denote a scan time unit, and may be, for example, a second unit.
  • the object tracking unit 119 performs filtering (S317) and binarization (S319) on the laser scan data at time point t2, and applies morphology calculation and labeling processing to the filtered and binarized laser scan data (S321, S323).
  • the object tracking unit 119 calculates the ratio of the area where the objects labeled at the time t1 and the objects labeled at the time t2 intersect (S325). In addition, the object having the largest ratio of the calculated intersection area is recognized as a tracking object (S327).
  • the object tracking unit 119 corrects the position of the tracked object calculated by the laser scanning method using a Kalman filter (S329).
  • the position of the tracking object is calculated as a relative position from the mobile robot 100 by using the initial position of the mobile robot 100 and distance information from the mobile robot 100 to the tracking object.
  • the distance information is generally calculated through a time when the beam irradiated by the laser scanner 103 arrives after being reflected from the object, but is not limited thereto, and various methods may be used.
  • the object tracking unit 119 performs primary filtering of only data having a height designated as a scan area from the laser scan data collected from the laser scanner 103 (S401).
  • the object tracking unit 119 may define a detailed position of a person as a laser scan area for identifying a tracked object in order to distinguish each person.
  • the first area is the legs, the part that has the least interference from other body parts and is suitable for extracting features. However, if there are many people close together, it may be difficult to track the target with only the legs. To compensate for this, the hips and back are additionally analyzed to extract a tracking object corresponding to a person.
  • FIG. 9(a) shows the main height to each part for identification of a tracked object, that is, a leg, a hip, and a back.
  • 9B is laser scan data collected from the laser scanner 103.
  • (C) of FIG. 9 shows scan data obtained by filtering out all data except for the data corresponding to the scan area among the collected laser scan data. That is, compared to FIG. 9B, in FIG. 9C, all scan data except for the scan data of the knee height, the scan data of the pelvis, and the scan data of the back height are deleted. Therefore, unnecessary scan data is excluded from the operation.
  • the object tracking unit 119 compares the laser scan data with the heights (h1, h2, h3) of each scan area, and stores the data of the corresponding height if the error limit value and threshold are not exceeded.
  • the object tracking unit 119 refers to data of the same height among data classified into three heights (legs, hips, etc.) as group data.
  • such laser scan data may be represented by a coordinate system (x, y) with the mobile robot 100 as an origin.
  • the object tracking unit 119 processes the labeling of group data collected from the laser scanner 103 and filtered into the scan area to group data having similar characteristics to distinguish objects having different characteristics.
  • the criterion for similar characteristics is whether the form of the laser scan data (point cluster) corresponds to a human body part.
  • people can be classified and grouped according to the clustered form.
  • the object tracking unit 119 defines variables that can be considered for labeling feature candidates from reliable laser scan data, as shown in Table 7. These variables are shown in Figure 12.
  • the object tracking unit 119 divides the group data at reference points based on the shortest distance D between two adjacent points at each designated height (S403).
  • D denotes the distance between two adjacent consecutive points; when D exceeds the distance limit (D_leg, D_hip, D_back) for the two points at the corresponding height, the data is divided, i.e., the points are detected as belonging to different objects.
  • the limit values (D_leg, D_hip, D_back) are set to the maximum distance by which adjacent parts of a person's body can be separated.
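The splitting in S403 can be sketched as follows; the numeric limits used for D_leg, D_hip, and D_back are placeholders, not values taken from the text.

```python
import numpy as np

# Placeholder split limits per height (meters); the real D_leg, D_hip and D_back
# values are not stated in this passage.
D_LIMIT = {"leg": 0.25, "hip": 0.45, "back": 0.45}

def split_groups(points, limit):
    """points: (N, 2) array of one height band, ordered by scan angle.
    Start a new data group whenever the distance D between two consecutive
    points exceeds the limit, i.e. the points are treated as different objects."""
    groups, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        if np.linalg.norm(curr - prev) > limit:
            groups.append(np.array(current))
            current = []
        current.append(curr)
    groups.append(np.array(current))
    return groups

# Example: split the leg-height band produced by the primary filtering.
# leg_groups = split_groups(groups["leg"], D_LIMIT["leg"])
```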
  • reliable data is then extracted from each divided data group; that is, secondary filtering is performed, in which unnecessary data is removed by applying a filtering criterion within the divided data group (S405). This is described below.
  • the object tracking unit 119 derives the values of the variables to be considered as features for object identification.
  • feature analysis and labeling begin by extracting a reliable number of data points from the sensor data at the center of the labeled laser scan data.
  • the object tracking unit 119 selects the number of reliable data points differently for each body part. For the legs and back of the tracking target, unnecessary data is present at the ends of the group data, so the object tracking unit 119 determines the reliable data from the center of the data remaining after a fixed number of points has been removed from both ends of the group data. For the hips, on the other hand, arm data is often extracted together with hip data; when the arms are classified into the same group, the amount of unnecessary data varies with the movement of the arms, so the object tracking unit 119 subtracts a maximum number of unnecessary data points from the number of group data points and selects the number of reliable data points based on the remaining count.
  • Equation 5 defines the data analysis for the leg and back heights.
  • N_trust is the number of selected reliable data points,
  • N_group is the total number of group data points, and
  • N_outlier is the number of data points excluded from the group data.
  • Equation 5 indicates that the reliable data is selected by subtracting a fixed number of unnecessary data points from the entire group data.
  • with Equation 5, the analysis can therefore be based on reliable data, and because the number of reliable data points varies from person to person, each person's characteristics are reflected.
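The published equation itself is reproduced only as an image; from the variable definitions above, Equation 5 presumably has the form

N_{trust} = N_{group} - N_{outlier}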
  • Equation 6 defines the data analysis for the height of a person's hips.
  • N_trust is the number of selected reliable data points,
  • N_group is the total number of group data points, and
  • outlier_max is the maximum number of data points that can be excluded.
  • Equation 6 indicates that a fixed maximum number of excluded data points is used, so that consistent features can be derived even in situations where unnecessary data is generated.
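As with Equation 5, the published image of Equation 6 is not reproduced here; from the variable definitions above it presumably has the form

N_{trust} = N_{group} - outlier_{max}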
  • the object tracking unit 119 continuously tracks an object detected through a frame-to-frame matching technique. Examples of frame matching codes of the object tracking unit 119 are shown in Table 8.
  • the object tracking unit 119 calculates the overlapping area where the previous frame P10 and the current frame P20 intersect. Here, delta_x(a) and delta_y(b) are measured from the coordinates of points 1, 2, 3, and 4, and the intersection area is calculated as the product of delta_x(a) and delta_y(b). Formulated, this is Equation 7.
  • the object whose intersection over union with the object detected in the previous frame is largest is selected as the object to be tracked; this is expressed as Equation 8.
  • the intersection over union in Equation 8 is obtained by dividing the intersection area by the value obtained by subtracting the intersection area from the sum of the two object areas.
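The frame matching of Equations 7 and 8 can be sketched as follows, assuming each detected object is summarized by an axis-aligned box (x1, y1, x2, y2); this is an illustration, not the code published as Table 8.

```python
def intersection_area(a, b):
    """a, b: axis-aligned boxes (x1, y1, x2, y2) around an object in the previous
    and current frames; returns delta_x * delta_y as in Equation 7."""
    delta_x = min(a[2], b[2]) - max(a[0], b[0])
    delta_y = min(a[3], b[3]) - max(a[1], b[1])
    return max(delta_x, 0.0) * max(delta_y, 0.0)

def pick_tracking_object(prev_box, candidate_boxes):
    """Select the candidate whose intersection over union with the object
    detected in the previous frame is largest (Equation 8)."""
    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    def iou(a, b):
        inter = intersection_area(a, b)
        return inter / (area(a) + area(b) - inter)

    return max(candidate_boxes, key=lambda box: iou(prev_box, box))
```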
  • the embodiments of the present invention described above are not implemented only through an apparatus and a method, but may be implemented through a program that realizes a function corresponding to the configuration of the embodiment of the present invention or a recording medium in which the program is recorded.

Abstract

A mobile robot and a method for controlling same are provided. The mobile robot comprises: a laser scanner installed in a predetermined position on the mobile robot so as to output laser scan data on the basis of a scan unit, the laser scan data being produced through a scan within a viewing angle; a memory configured to store a program for performing object recognition and object tracking by using at least one image production unit and the laser scanner; and at least one processor configured to execute the program. The program comprises instructions for: converting an image produced by the at least one image production unit into a skeleton, thereby producing a skeleton image, and storing same; detecting an object (tracking target) on the basis of the laser scan data; driving the at least one image production unit, if noise or interference occurs or if object tracking fails, so as to produce a skeleton image from an image captured at the current timepoint; and re-detecting an object on the basis of laser scan data that is output at the current timepoint if the skeleton image at the current timepoint is identical to a prestored skeleton image.

Description

Mobile robot and control method thereof
The present invention relates to a mobile robot and a control method thereof.
As automation of the logistics market expands, object-tracking driving technology, which recognizes an object and automatically follows it, is increasingly being applied to the manual carts and trolleys used at logistics sites. Object-tracking driving technology is used in various fields, a representative example being the guide service robot, which follows a customer to guide the customer to a location or to carry the baggage of the customer or of another object. However, such technology relies on expensive sensors or operates only in constrained environments, which makes widespread commercialization difficult.
Typically, several expensive sensors, such as LiDAR sensors, are used for object tracking, and map information and a positioning infrastructure are used for localization, which increases cost and makes maintenance difficult.
In addition, short-range wireless tracking technology, as in the case of personal smart suitcases, follows a user through one-to-one mapping with a wearable device worn by the user, which imposes many restrictions when multiple objects and customers are involved.
Furthermore, conventional object tracking using a depth camera often loses the object because of the depth camera's low resolution and narrow recognition area.
An object of the present invention is to provide a mobile robot with efficient object recognition and tracking that uses a laser scanner and an image capturing unit, without the aid of a positioning infrastructure, and a control method thereof.
According to one feature of the present invention, a mobile robot includes at least one image capturing unit installed at a predetermined position of the mobile robot, a laser scanner installed at a predetermined position of the mobile robot, a memory storing a program for performing object recognition and object tracking using the at least one image capturing unit and the laser scanner, and at least one processor executing the program. The program includes instructions for recognizing an object to be tracked based on an image generated by the at least one image capturing unit, detecting and tracking the object through the laser scan data, and, when object tracking fails, driving the at least one image capturing unit to compare an image captured at the current time with a previously stored image and re-detecting the object based on the laser scan data output at the current time.
The program may include instructions for intersecting a plurality of object areas detected from laser scan data collected at a first time point with a plurality of object areas detected from laser scan data collected at a second, later time point, and recognizing the object with the largest intersection-area ratio as the tracking object.
The program may include instructions for removing noise by applying a morphology operation to the two-dimensionally arranged laser scan data.
The program may include instructions for labeling the noise-removed laser scan data, detecting at least one area having the same label as an individual object, and comparing each detected area with the image to select the matching object as the tracking object.
The program may include instructions for detecting an object based on laser sensor data collected in a predetermined scan area, and the predetermined scan area may have a range corresponding to a height set for each body part of a person, measured from the ground.
The program may include instructions for dividing the laser sensor data grouped per predetermined scan area into data groups based on the shortest distance between the points constituting the grouped data, and calculating feature variables per scan area within each divided data group.
The feature variables may include the total width of the group data containing the points, the distances between two adjacent points, the width of the group, and the average of those distances.
According to another feature of the present invention, a control method of a mobile robot includes collecting laser scan data, converting the laser scan data expressed in a polar coordinate system into a plurality of points having position values in a Cartesian coordinate system, comparing the converted point-cloud laser scan data with image data captured by a camera and binarizing it using the pixel values of the image corresponding to each point, labeling the laser scan data to detect a plurality of objects, comparing the detected objects with the image data to select the matching object as the tracking object, and driving toward the tracking object.
The collecting of the laser scan data may further include removing noise by applying a morphology mask, and the morphology mask may be set in advance to a size that can cover a person's stride.
The driving may further include recognizing, as the tracking object, the object whose intersection ratio is largest when the plurality of objects detected from the laser scan data at a first scan time are compared with the plurality of objects detected from the laser scan data at a second, later scan time.
The recognizing of the object with the largest intersection ratio as the tracking object may be selectively activated according to a preset condition.
The collecting of the laser scan data may further include filtering, from the laser scan data, scan data corresponding to heights set for individual body parts of a person.
The detecting may include applying the binarized laser scan data as a labeling target, dividing and grouping the data based on the distance between points, and performing labeling to classify the grouped points as the same object.
According to yet another feature of the present invention, an operating method of a mobile robot includes performing authentication for object tracking in cooperation with a remote server; if the authentication succeeds, capturing the area in front of the robot to generate an image and skeletonizing the generated image to generate a skeleton image; comparing the generated skeleton image with skeleton images of first users registered in advance in a user database to determine whether the generated skeleton image belongs to a first user; if it is determined to be a first user, performing a task requested by the first user while tracking the first user with a laser scanner, and storing the work history; and if it is determined not to be a first user, requesting and receiving second-user information and destination information, and guiding the way to the received destination while tracking the second user with the laser scanner.
The first-user tracking or second-user tracking using the laser scanner may be performed by converting laser scan data expressed in a polar coordinate system into a plurality of points having position values in a Cartesian coordinate system, comparing the converted point-cloud data with image data captured by a camera and binarizing it using the pixel values of the image corresponding to each point, labeling the binarized laser scan data to detect a plurality of objects, and comparing the detected objects with the image data to select the matching object as the tracking object.
The storing may include, if the person is determined to be the first user, extracting and registering feature point information from the captured image; performing the task requested by the first user and storing the work history; tracking the first user with the laser scanner; if tracking of the first user fails, extracting feature point information from an image obtained by capturing the area in front of the robot at the current time, comparing the extracted feature point information with the registered feature point information, and determining the object at the matching point to be the first user; and moving to the determined first user's location, presenting the previous work history, asking whether tracking should continue, continuing the tracking if the user accepts, and ending the object tracking and returning to the start position if the user declines.
After the guiding of the way, the method may further include, if tracking of the second user fails, skeletonizing an image generated by capturing the area in front of the robot to generate a skeleton image at the current time, comparing the previously generated skeleton image with the current skeleton image to search for a matching object, and tracking the found object.
According to the embodiments, a relatively inexpensive laser scanner and image capturing unit are used instead of a LiDAR sensor costing from several million to tens of millions of won, and the object recognition/tracking algorithm based on the image capturing unit is combined with the object recognition/tracking algorithm based on the laser scanner to improve the object tracking rate. That is, the recognition range and processing speed of the image capturing unit are supplemented by the laser scanner.
In addition, tracking failures caused by the camera's low resolution and narrow angle of view are compensated for by using a laser scanner whose angle of view covers most of the area in front of the robot.
In addition, continuous tracking performance is improved by efficiently processing the data collected by the laser scanner.
In addition, because object tracking is achieved in software alone, without the aid of an indoor/outdoor positioning infrastructure, price competitiveness and maintenance efficiency are improved when the system is deployed in the field.
FIG. 1 is a block diagram showing the configuration of a mobile robot according to an embodiment of the present invention.
FIG. 2 is a view illustrating the angles of view of the mobile robot of FIG. 1.
FIG. 3 is an example of scan data converted into two dimensions according to an embodiment of the present invention.
FIG. 4 is an exemplary diagram of area setting of a mobile robot according to an embodiment of the present invention.
FIG. 5 is a flowchart showing the operation of a mobile robot according to an embodiment of the present invention.
FIG. 6 is a flowchart showing an object recognition/tracking operation of a mobile robot according to an embodiment of the present invention.
FIG. 7 is a flowchart showing an object recognition/tracking operation of a mobile robot according to another embodiment of the present invention.
FIG. 8 is a flowchart showing a process of detecting an object through a labeling process according to an embodiment of the present invention.
FIG. 9 is a view illustrating laser scan areas according to an embodiment of the present invention.
FIG. 10 is a view illustrating a coordinate system for collecting laser scan data with the mobile robot as the origin, according to an embodiment of the present invention.
FIG. 11 is a view illustrating a process of extracting reliable data from labeled laser scan data according to an embodiment of the present invention.
FIG. 12 is a view illustrating feature variables of laser scan data according to an embodiment of the present invention.
FIG. 13 is a view illustrating a process of dividing objects based on feature variables according to an embodiment of the present invention.
FIG. 14 is a view illustrating a frame matching technique according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention pertains may easily practice them. The present invention may, however, be implemented in various different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted for clarity, and like reference numerals denote like parts throughout the specification.
Throughout the specification, when a part is said to "include" a certain component, this means that it may further include other components, not that it excludes them, unless explicitly stated otherwise.
In addition, terms such as "...unit", "...er", and "...module" described in the specification denote units that process at least one function or operation, and may be implemented in hardware, software, or a combination of hardware and software.
The devices described in the present invention are composed of hardware including at least one processor, a memory device, and a communication device, and a program executed in combination with the hardware is stored in a designated place. The hardware has the configuration and capability to carry out the methods of the present invention. The program includes instructions implementing the operating methods of the present invention described with reference to the drawings, and carries out the present invention in combination with hardware such as the processor and the memory device.
In this specification, "transmission" or "provision" may include not only direct transmission or provision but also indirect transmission or provision through another device or over a bypass path.
Expressions written in the singular in this specification may be interpreted as singular or plural unless an explicit expression such as "one" or "single" is used.
In this specification, the mobile robot is equipped with various detection sensors and an autonomous driving control algorithm and is configured to travel in a desired direction while avoiding obstacles. Since the mobile robot requires neither indoor infrastructure nor additional equipment such as a wearable device, it can be applied not only to logistics but also to many other fields such as hotel room service and shopping carts.
A mobile robot and a control method thereof according to embodiments of the present invention will now be described with reference to the drawings.
FIG. 1 is a block diagram showing the configuration of a mobile robot according to an embodiment of the present invention, FIG. 2 is a view illustrating the angles of view of the mobile robot of FIG. 1, FIG. 3 is an example of scan data converted into two dimensions according to an embodiment of the present invention, and FIG. 4 is an exemplary diagram of area setting of a mobile robot according to an embodiment of the present invention.
Referring to FIG. 1, the mobile robot 100 includes an image capturing unit 101, a laser scanner 103, a traveling device 105, an input device 107, an output device 109, a communication device 111, a storage device 113, a memory 115, and a control unit 117.
The image capturing unit 101 is attached to the outside of the mobile robot 100 and generates an image by photographing the surroundings of the mobile robot 100. One image capturing unit 101 may be installed, or a plurality of image capturing units may be installed depending on the embodiment.
According to an embodiment, the image capturing unit 101 may be a depth camera that combines an RGB camera and a depth sensor.
The laser scanner 103 may be installed so that its field of view overlaps that of the image capturing unit 101. The laser scanner 103 is attached to the outside of the mobile robot 100 and senses the surroundings of the laser scanner 103 using a rapidly rotating laser measurement beam.
While mounted on the mobile robot 100, the laser scanner 103 obtains scan data by irradiating a laser in the scanning direction and scanning the surfaces of objects positioned in that direction as the mobile robot 100 moves.
Referring to FIG. 2, the image capturing unit 101 is installed on the front of the mobile robot 100 and photographs the area ahead of the mobile robot's current position. One image capturing unit 101 is described here, but a plurality may be installed according to the embodiment.
The image capturing unit 101 may be installed at the top of the mobile robot 100 in order to recognize the entire skeleton of the object.
A plurality of laser scanners 103 may be installed with fields of view overlapping that of the image capturing unit 101. For example, two laser scanners 103 may be installed on the front of the mobile robot 100 and one on the rear.
The angle of view of the image capturing unit 101 is approximately 120 degrees, whereas the laser scanner 103 has a wider angle of view, which may be approximately 360 degrees.
Accordingly, even if the object being tracked leaves the angle of view of the image capturing unit 101, object tracking can be maintained through the laser scanner 103.
Referring again to FIG. 1, the traveling device 105 is a device that makes the mobile robot 100 travel, and may include traveling means such as a plurality of wheels and motors and a drive source for providing driving force to the traveling means. Since such traveling devices are well known, a detailed description is omitted here.
The input device 107 is a means for receiving user commands and may include devices such as a keyboard, a touch pad, an optical joystick, and a mouse.
The output device 109 may produce visual or audible output according to a user command transferred from the input device 107. The output device 109 may include output devices such as a display device and a speaker. The output device 109 may output a user interface (UI) screen, and may output voice, sound, and the like.
The input device 107 and the output device 109 may be implemented as a touch screen. That is, a touch keypad, touch buttons, and the like may be activated on the screen to receive user commands and output various information.
The communication device 111 is connected to a remote server (not shown) through a communication network (not shown) to transmit and receive data.
The storage device 113 stores data necessary for the operation of the mobile robot 100 and data transmitted and received through the communication device 111.
The memory 115 stores programs and data necessary for the operation of the mobile robot 100. Such a program is composed of instructions for performing the operation and control of the mobile robot 100.
The control unit 117 comprises at least one processor and executes the programs stored in the memory 115.
The control unit 117 operates on a Robot Operating System (ROS) basis and, in detail, includes an object tracking unit 119, a driving control unit 121, a user interface (UI) control unit 123, a remote control unit 125, and a task management unit 127.
The object tracking unit 119 recognizes and tracks an object in cooperation with the image capturing unit 101 and the laser scanner 103. During object recognition and tracking, the object tracking unit 119 requests the driving control unit 121 to detect and avoid obstacles, reduce speed, or make an emergency stop. The object tracking unit 119 generates a map of the workplace for navigation of the mobile robot 100 and handles the navigation function; here, the workplace refers to the environment in which the mobile robot 100 travels. The object tracking unit 119 may operate in either a tracking mode or a driving mode.
The object tracking unit 119 receives raw data from the image capturing unit 101 and the laser scanner 103 and processes it so that it can be used for object recognition and tracking.
The object tracking unit 119 extracts a depth map and skeleton data from the image generated by the image capturing unit 101.
The object tracking unit 119 recognizes an object, tracks it continuously, and drives the driving control unit 121 so that the robot follows only the designated object. Here, an object is a moving target; in the present invention, the object means a person to be tracked. According to an embodiment, when the mobile robot is used in a hotel, the objects may be employees and guests.
The object tracking unit 119 identifies the object to be tracked based on the image received from the image capturing unit 101 and tracks the identified object using the scan data received from the laser scanner 103. By combining the image capturing unit 101 and the laser scanner 103 for object recognition and tracking in this way, the object tracking unit 119 can improve the object tracking rate.
The object tracking unit 119 recognizes the object by determining the position of the object's skeleton using the image received from the image capturing unit 101.
The object tracking unit 119 generates a skeleton image by skeletonizing the image received from the image capturing unit 101. Skeletonization is a method of representing a person by the positions of relatively non-deforming body parts and the connection information between those parts, and various known algorithms may be used.
The object tracking unit 119 assumes that the skeleton image corresponds to one person. If a plurality of skeletons is detected when the initial skeleton image is generated, the object tracking unit 119 may treat this as an error and request re-recognition. It may also receive, through the input device 107, an input from the user selecting one of the plurality of skeletons; the selected skeleton is then determined to be the tracking object.
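A minimal sketch of the selection behavior just described is given below; extract_skeletons and ask_user_to_choose are hypothetical helpers standing in for the skeletonization algorithm and the input device 107, and only the selection logic follows the text.

```python
def select_tracking_skeleton(image, extract_skeletons, ask_user_to_choose):
    """extract_skeletons(image) -> list of skeletons and ask_user_to_choose(...)
    are hypothetical helpers; only the selection behavior follows the description."""
    skeletons = extract_skeletons(image)
    if not skeletons:
        raise RuntimeError("no skeleton detected; request re-recognition")
    if len(skeletons) == 1:
        return skeletons[0]
    # More than one skeleton at initialization is treated as an error,
    # and the user picks which one becomes the tracking object.
    return ask_user_to_choose(skeletons)
```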
The object tracking unit 119 collects laser scan data scanned every second by the laser scanner 103, and performs primary filtering that extracts, from the collected data, only the laser scan data in preset areas. The preset areas may be set to heights corresponding to a person's legs, hips, and back.
The object tracking unit 119 performs secondary filtering on the primarily filtered laser scan data according to a filtering criterion: based on the point cloud data located at the center of the primarily filtered data, only a predetermined number of points is kept per area. The predetermined number is set differently for the leg height, the hip height, and the back height.
The object tracking unit 119 binarizes the secondarily filtered laser scan data, and detects a plurality of objects by applying a morphology mask and labeling to the binarized laser scan data.
The object tracking unit 119 compares the plurality of object areas with the skeleton image and selects the object identical to the skeleton image (shape) as the tracking object. Since the skeleton image can be regarded as a kind of shape data, the object tracking unit 119 selects the object area that matches this skeleton image as the tracking object.
After the object is recognized through the above process, the object tracking unit 119 compares the previous object area with the next object area and recognizes the object with the largest intersection ratio as the tracking object. Here, "previous" and "next" refer to the collection times of the laser scan data.
The object tracking unit 119 tracks the object by converting the data scanned by the laser scanner into objects through pre- and post-processing.
The scan data that the object tracking unit 119 obtains from the laser scanner 103 is expressed in a polar coordinate system consisting of a radius and an angle.
The object tracking unit 119 converts the scan data in the polar coordinate system into a rectangular (Cartesian) coordinate system composed of X and Y. The converted scan data forms a large number of measurement points, i.e., a point cloud, each point having X and Y position coordinates.
An example of code for converting scan data in polar-coordinate form into scan data in rectangular-coordinate form may be as shown in Table 1.
(Table 1 is provided as an image in the original publication.)
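Since Table 1 is published only as an image, the following is a minimal Python sketch of the polar-to-Cartesian conversion it describes, assuming scan fields similar to a ROS LaserScan message (ranges, angle_min, angle_increment), which is an assumption rather than something stated in the text.

```python
import numpy as np

def to_point_cloud(ranges, angle_min, angle_increment):
    """Convert one laser scan, given as radii plus a start angle and increment,
    into (x, y) points in a Cartesian frame centered on the robot."""
    r = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(r))
    valid = np.isfinite(r)                         # drop failed returns
    x = r[valid] * np.cos(angles[valid])
    y = r[valid] * np.sin(angles[valid])
    return np.stack([x, y], axis=1)
```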
The object tracking unit 119 generates scan data in units of at least 1 centimeter (cm) so that the mobile robot 100 can track the object smoothly. Here, the scan data generation unit refers to the travel distance of the mobile robot 100; that is, scan data may be generated every time the mobile robot 100 moves by 1 centimeter. In addition, the scan distance for initial object recognition may be kept within 2 meters.
The object tracking unit 119 converts the scan data of the polar coordinate system into two-dimensional data in the Cartesian coordinate system; the converted two-dimensional data is illustrated in FIG. 3.
Referring to FIG. 3, the two-dimensional data has values of 0 or 1. That is, the object tracking unit 119 binarizes the scan data of the polar coordinate system. To do this, the object tracking unit 119 maps the measurement points, each having X and Y position coordinates, onto the image acquired from the image capturing unit 101, and represents each point as 0 or 1 based on the value of the corresponding pixel in that image. The object tracking unit 119 applies an RGB function to the laser scan data: an area composed of R=255, G=255, B=255 is assigned the value 1 in the corresponding two-dimensional array, and areas with other RGB values are set to 0.
For example, a reference pixel value (e.g., 255) may be set; if the pixel value corresponding to a point is smaller than the reference pixel value, the point may be marked as 0, and otherwise as 1.
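A sketch of this binarization rule follows. The mapping from a scan point to an image pixel depends on a scanner-to-camera calibration that the text does not give, so project_to_image is a hypothetical helper.

```python
import numpy as np

def binarize_points(points, image, project_to_image):
    """project_to_image(x, y) -> (u, v) pixel coordinates is a hypothetical helper.
    A point becomes 1 when its pixel is pure white (255, 255, 255) in an
    RGB-ordered image, and 0 otherwise, following the rule described above."""
    values = []
    for x, y in points:
        u, v = project_to_image(x, y)
        r, g, b = image[v, u][:3]
        values.append(1 if (r, g, b) == (255, 255, 255) else 0)
    return np.array(values, dtype=np.uint8)
```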
Each measurement point having X and Y position coordinates and a value of 0 or 1 is defined as a pixel. A morphology operation is performed on the scan data of FIG. 3, in which these points are arranged in a two-dimensional coordinate system, to remove noise and compensate for missing values. In general, morphology operations are widely used in image processing. The dilation operation, one of the morphology operations, merges two nearby objects and fills holes inside an object, while the erosion operation removes noise and protrusions at object boundaries.
According to an embodiment, a 21×21 morphology mask may be used to cover the stride of the object. This mask can cover a typical human stride of 40 centimeters. Of course, the morphology mask is not limited to this, and morphology masks of various sizes may be used.
The morphology operation of the object tracking unit 119 may use a code example such as the one shown in Table 2.
(Table 2 is provided as an image in the original publication.)
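Since Table 2 is published only as an image, the following sketch shows one way to realize the described dilation-then-erosion (a closing operation) with a 21×21 mask, assuming OpenCV is available; it is an illustration, not the published code.

```python
import numpy as np
import cv2

# 21x21 mask sized to cover a typical 40 cm stride at the grid resolution used here.
KERNEL = np.ones((21, 21), np.uint8)

def denoise(grid):
    """grid: 2-D uint8 array of 0/1 built from the binarized scan data.
    Dilation merges nearby returns of the same person and fills holes,
    and the following erosion removes boundary noise."""
    dilated = cv2.dilate(grid, KERNEL, iterations=1)
    return cv2.erode(dilated, KERNEL, iterations=1)
```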
To speed up initial recognition, the object tracking unit 119 uses the scan data obtained from the laser scanner 103 to label objects, distinguishing the object to be tracked from non-tracked objects. With the tracking object as the focus, the other objects are removed from the list of objects of interest. A code example for removing the remaining objects other than the tracking object may be as shown in Table 3.
(Table 3 is provided as an image in the original publication.)
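Table 3 is likewise published only as an image; the behavior it describes reduces to keeping only the tracking object in the object-of-interest list, for example:

```python
def keep_only_tracking_object(objects, tracking_label):
    """objects: {label: object_info} for the current scan. Once the tracking
    object is chosen, every other labeled object is removed from the list of
    objects of interest."""
    return {label: info for label, info in objects.items() if label == tracking_label}
```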
The labeling technique is widely used, particularly as a preprocessing step for object recognition. The object tracking unit 119 may define a set of adjacent pixels with a pixel value of 1 in the scan data of FIG. 3 as an object. One object consists of one or more adjacent pixels. Multiple objects may exist in one frame of scan data, and labeling is the task of assigning a unique number to all pixels belonging to the same object.
According to an embodiment, the object tracking unit 119 may perform labeling using a 4-neighbor connectivity mask, so pixels located diagonally to each other receive different labels. Two adjacent neighboring pixels among the neighbors of a pixel at an arbitrary position (X, Y) can be classified into the following four cases.
(Table 4 is provided as an image in the original publication.)
The object tracking unit 119 propagates labels and builds an equivalence table in the first scan, and in the second scan assigns a unique label to the pixels of each object by referring to the labels in the equivalence table.
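A minimal sketch of the described two-pass, 4-neighbor-connectivity labeling with an equivalence table is shown below; it is an illustration of the technique, not code taken from the publication.

```python
import numpy as np

def two_pass_label(grid):
    """4-neighbor connectivity, two-pass labeling with an equivalence table.
    grid: 2-D array of 0/1 scan pixels; returns an array of object labels."""
    h, w = grid.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                                    # equivalence table

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # First scan: propagate labels from the up/left neighbors, record equivalences.
    for y in range(h):
        for x in range(w):
            if grid[y, x] == 0:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                labels[y, x] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif up and left:
                labels[y, x] = min(up, left)
                union(up, left)
            else:
                labels[y, x] = up or left

    # Second scan: give every pixel of an object its representative label.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```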
When labeling is completed, the object tracking unit 119 recognizes each object composed of pixels having the same label, matches the recognized objects against the skeleton image, and selects the tracking object from among them.
The object tracking unit 119 may sometimes fail to recognize the tracking object. To compensate for this, the object tracking unit 119 uses a frame-to-frame matching algorithm to separate the tracked object from other objects. To make matching between frames efficient, the object tracking unit 119 assumes that a previously detected object does not suddenly move a long distance or disappear. If the part of the scan data in which a person is detected is identified as a region of space and the person's stride is taken into account, the person will still be present within a certain region of space in the next frame, so the person can be recognized through spatial region matching. Because scan data is generated frame by frame every second, the overlapping region is large. Through this spatial region matching technique, the object tracking unit 119 can quickly correct for a temporarily lost tracking object.
The object tracking unit 119 tracks the object with the largest ratio of intersecting area between the objects detected in the previous frame and the objects detected in the current frame.
When object recognition succeeds, the object tracking unit 119 tracks the recognized object through the laser scanner 103 and cross-compares the laser scan data acquired in real time on a frame-by-frame basis. The regions in which the tracking object is recognized in the previous frame and the current frame are cross-compared, and the object with the largest intersecting region is tracked. Another name for this spatial region matching technique is the best-matching method. The best-matching algorithm is applied under the assumption that the detected object does not suddenly move a long distance or disappear. In this way, the path along which the object has moved is predicted and followed without repeated re-detection of the object; a Kalman filter algorithm may additionally be used.
The object tracking unit 119 may deactivate the spatial region matching technique under certain conditions during tracking. For example, when the mobile robot 100 is not moving, such as when the robot and the person being tracked are waiting in a space such as an elevator, other workers or customers may be detected around the camera and the laser sensor of the mobile robot 100. Changing the tracking object to another object in such a situation would cause a service problem, so this situation is added to the deactivation conditions and the spatial region matching technique is deactivated.
When the object tracking unit 119 re-recognizes an object whose tracking has failed, it assumes that the change in distance from the area of the lost object is minimal. Expressed as a formula, this is Equation 1.
[Equation 1]
object = trackingUser, if |previous.coordinate − current.coordinate| < threshold; otherwise object = otherUser
According to Equation 1, if the change between the object's previous coordinate (previous.coordinate) and current coordinate (current.coordinate) is smaller than a threshold, the object is recognized as the tracking object (trackingUser); otherwise it is recognized as another object (otherUser). This represents the spatial matching algorithm.
In addition, if the current coordinates of an object lie within the matching area, the object tracking unit 119 recognizes this object as the tracking object; expressed as a formula, this is Equation 2.
[Equation 2]
user(s) = { object | currentUserPosition(x, y, z) < matchingArea }
According to Equation 2, the tracking object user(s) corresponds to the case where the current coordinates currentUserPosition(x, y, z) of the object are smaller than the intersection (matching) area.
The object tracking unit 119 recognizes the region in which a person is present based on the laser scan data and may use the image when the person must be specifically identified. In the regions recognized as people, i.e., objects, the object tracking unit 119 classifies every object other than the tracking object as a non-tracked object, like an obstacle. Once the tracking object is selected, the information of the non-tracked objects is deleted so that objects other than the selected one are judged not to be tracking targets.
Object recognition and tracking using the laser scanner 103 benefit from a fast tracking speed and a wide angle of view, but it is difficult to achieve a high level of object identification and the object is sometimes lost. In this case, the object tracking unit 119 can keep tracking continuously while minimizing the re-recognition process by matching the object recognized by the laser scanner 103 with the skeleton image.
The object tracking unit 119 can acquire three-dimensional information about the image of the target scene in real time using distance sensors.
The state of an object recognized by the object tracking unit 119 is defined as one of "New", "Visible", "Tracked", "NotVisible", and "Lost". The "New" state means an object has been newly detected and marks the start of object recognition. The "Visible" state means the object is detected within the angle of view. The "Tracked" state means the object is being tracked and skeleton information is available. The "NotVisible" state means the object cannot be detected and is not visible within the angle of view. The "Lost" state means the object has been completely lost.
Based on these object states, the object tracking unit 119 updates the current object tracking state while tracking the object.
The object tracking unit 119 may set the state to "NotVisible" when the object leaves the angle of view of the image capturing unit 101.
또한, 객체 추적부(117)는 객체 추적에 실패했을 때 타 객체를 재인식하고 객체 추적을 실패하는 경우도 발생할 수 있다. 객체 추적부(119)는 객체가 화각 내에 존재함에도 불구하고, 객체의 자세나 상황에 따라 놓치는 경우가 발생할 수 있다. 이로 인해 객체 추적을 실패한 후, 약 10초 이상 화각 내에 객체가 보이지 않을 경우, 객체를 잃어버린 것으로 판단하고 "Lost" 상태로 처리할 수 있다.In addition, when the object tracking unit 117 fails to track the object, it may re-recognize other objects and fail to track the object. Even though the object tracking unit 119 exists in the angle of view, the object tracking unit 119 may miss the object according to the posture or situation of the object. For this reason, if the object is not visible within about 10 seconds or more after the object tracking fails, it is determined that the object has been lost and can be processed in a "Lost" state.
객체 추적부(119)는 추적에 실패한 객체를 재인식 했을 경우 객체의 ID를 이전과 같게 할당 받는 경우, "Tracked" 상태로 처리할 수 있다.When the object tracking unit 119 re-recognizes an object that has failed to be tracked, the object ID is assigned as before, and may be processed in a "Tracked" state.
객체 추적부(119)는 객체를 새로운 ID로 할당 받을 수 있는 상황이 발생하면, "New" 상태로 처리할 수 있다.When a situation in which an object can be assigned a new ID occurs, the object tracking unit 119 may process the object in a "New" state.
객체 추적부(119)는 객체가 화각 내에서 검출되면, "Visible" 상태로 처리할 수 있다.When an object is detected within the angle of view, the object tracking unit 119 may process the object in a "Visible" state.
또한, 객체 추적부(119)는 놓쳤던 객체의 공간 상의 좌표값(X, Y, Z)을 기준으로 기 설정된 반경, 예를들어, 1m 이내의 새로운 객체를 주 객체로 인식하여 추적을 진행한다. 만약, 객체 인식이 일정 시간동안 이루어지지 않으면, "Lost" 상태로 처리하고, 주행 제어부(121)로 비상 정지를 요청한다. 또한, UI 제어부(123)로 비상 정지 알람을 요청할 수 있다.In addition, the object tracking unit 119 recognizes a new object within a preset radius, for example, within 1m, based on the coordinate values (X, Y, Z) in the space of the missed object as the main object and proceeds with the tracking. If object recognition is not performed for a certain period of time, it is processed as a “Lost” state, and an emergency stop is requested to the driving control unit 121. In addition, an emergency stop alarm may be requested from the UI controller 123.
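The state handling described above can be summarized in code. The sketch below is a simplified, hypothetical illustration rather than the implementation of the present invention; the five state names, the roughly 10-second timeout, and the 1 m re-acquisition radius come from the description, while the class, the method names, and the use of a wall-clock timer are assumptions.

```python
import math
import time
from enum import Enum

class TrackState(Enum):
    NEW = "New"
    VISIBLE = "Visible"      # detection in view, before a tracking decision (not emitted by this sketch)
    TRACKED = "Tracked"
    NOT_VISIBLE = "NotVisible"
    LOST = "Lost"

class TrackStateMachine:
    """Hypothetical sketch of the object-state update described above."""

    def __init__(self, lost_timeout_s=10.0, reacquire_radius_m=1.0):
        self.state = TrackState.NEW
        self.lost_timeout_s = lost_timeout_s
        self.reacquire_radius_m = reacquire_radius_m
        self.last_seen = time.monotonic()
        self.last_xyz = None  # (X, Y, Z) of the tracked object when last seen

    def update(self, detection_xyz, same_id):
        """detection_xyz: (X, Y, Z) of a detection in view, or None if nothing is visible.
        same_id: True if the detection was re-identified with the previous track ID."""
        now = time.monotonic()
        if detection_xyz is None:
            # Object left the field of view or was missed despite being in view.
            if now - self.last_seen > self.lost_timeout_s:
                self.state = TrackState.LOST  # an emergency stop / alarm would be requested here
            else:
                self.state = TrackState.NOT_VISIBLE
            return self.state

        # Something is visible; decide whether it is the tracked object.
        if same_id or self._within_reacquire_radius(detection_xyz):
            self.state = TrackState.TRACKED  # skeleton information available
        else:
            self.state = TrackState.NEW      # a new ID would be assigned
        self.last_seen = now
        self.last_xyz = detection_xyz
        return self.state

    def _within_reacquire_radius(self, xyz):
        if self.last_xyz is None:
            return False
        dx, dy, dz = (a - b for a, b in zip(xyz, self.last_xyz))
        return math.sqrt(dx * dx + dy * dy + dz * dz) <= self.reacquire_radius_m
```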
Depending on the service provided by the mobile robot 100, various alarms may be issued at this point. For example, if the object tracking unit 119 requests an emergency stop because it lost a hotel employee it was tracking, it outputs a stop alarm through the UI control unit 123 so that the employee can take action for re-recognition of the object. For example, a speaker alarm may be output so that the employee moves back within the object tracking radius.
If the object tracking unit 119 loses a hotel customer it was tracking, it instructs the remote control unit 125 to request an alarm from a server (not shown) located at a remote site and asks the driving control unit 121 to return to a predesignated position. When the customer has set a destination, the object tracking unit 119 does not treat the absence of the object within a certain radius around the destination as a departure; once the destination is reached, it processes the trip as a normal arrival and then asks the driving control unit 121 to return to the predesignated position.
To keep a constant distance while following a worker, the object tracking unit 119 performs dynamic distance calculation and speed control. For this purpose, three distance states, "Far", "Near", and "InZone", are defined according to how far the object is from the mobile robot 100.
[Table (Figure PCTKR2020008496-appb-T000005): definitions of the distance states "Far", "Near", and "InZone" in terms of Max(d) and Min(d)]
According to an embodiment, the Max(d) and Min(d) values in the table above may be set to 1.1 m and 1.0 m, respectively. Since the laser scanner 103 scans in real time every second, the object tracking unit 119 classifies the distance between the mobile robot 100 and the tracked object each second as one of "Far", "Near", and "InZone". Expressed as an equation, this is Equation 3 below.
[Equation 3]

$$\mathrm{state}(d_{user}) = \begin{cases}\text{Far}, & d_{user} > Max(d)\\ \text{Near}, & d_{user} < Min(d)\\ \text{InZone}, & Min(d) \le d_{user} \le Max(d)\end{cases}$$
According to Equation 3, "Far" is the case where the user distance (the distance between the object and the mobile robot) is greater than Max(d), "Near" is the case where the user distance is less than Min(d), and "InZone" is the case where the user distance lies between Min(d) and Max(d).
The object tracking unit 119 determines the user-distance state and provides it to the driving control unit 121, in which driving control information is predefined for each state. On this basis, the driving control unit 121 increases the driving speed of the mobile robot 100 when the state is "Far", maintains the driving speed when the state is "InZone", and reduces the driving speed or performs an emergency stop when the state is "Near".
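As a concrete illustration of Equation 3 and of the speed handling above, the following sketch classifies the user distance and maps each state to a drive action. The 1.0 m and 1.1 m thresholds come from the embodiment described above; the function names, speed step, and speed limits are assumptions.

```python
MIN_D = 1.0  # Min(d) in metres, per the embodiment above
MAX_D = 1.1  # Max(d) in metres

def classify_user_distance(user_distance_m, min_d=MIN_D, max_d=MAX_D):
    """Equation 3: classify the robot-to-object distance once per scan."""
    if user_distance_m > max_d:
        return "Far"
    if user_distance_m < min_d:
        return "Near"
    return "InZone"

def speed_command(state, current_speed_mps, step=0.1, min_speed=0.0, max_speed=1.6):
    """Hypothetical mapping of the distance state to a drive speed (m/s)."""
    if state == "Far":      # falling behind: speed up
        return min(current_speed_mps + step, max_speed)
    if state == "InZone":   # keep pace
        return current_speed_mps
    # "Near": slow down; a real controller may instead trigger an emergency stop
    return max(current_speed_mps - step, min_speed)
```

For example, classify_user_distance(1.3) returns "Far", so the commanded speed is raised until the object is back inside the zone.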
Referring to FIG. 4, the surroundings of the mobile robot 100 are divided into an obstacle zone and a tracking zone. When an object is detected in the obstacle zone, the object tracking unit 119 requests an emergency stop from the driving control unit 121.
According to an embodiment, the obstacle zone may be defined by a = 0.8 m and b = 0.3 m to 0.4 m, and the tracking zone by c = 4.2 m and d = 1.1 m. The obstacle zone and the tracking zone may be changed according to the embodiment and may be set differently depending on the tracking environment.
Depending on the performance of the laser scanner 103, objects can be tracked up to about 10 meters away; in the present invention, however, the object tracking unit 119 dynamically limits the scan area so that objects farther away than the object selected as the tracking target are not scanned. Limiting the scan area in this way minimizes unnecessary scanning and computation and improves the accuracy of the scan data. A code example for limiting the scan area is shown in Table 6.
[Table 6 (Figure PCTKR2020008496-appb-T000006): code example for dynamically limiting the scan area]
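Since Table 6 is reproduced only as an image, the following is a hypothetical sketch of the idea it describes: laser returns farther away than the currently tracked object (plus a small margin) are discarded before any further processing. The function name, margin, and maximum range are assumptions, not values from the original code.

```python
def limit_scan_range(ranges, target_distance_m, margin_m=0.5, max_range_m=10.0):
    """Keep only laser returns no farther away than the tracked object.

    ranges: list of per-beam distances in metres from one scan.
    Returns the same list with out-of-range beams replaced by None so that
    later stages can skip them.
    """
    cutoff = min(target_distance_m + margin_m, max_range_m)
    return [r if (r is not None and r <= cutoff) else None for r in ranges]
```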
The object tracking unit 119 may build a map and recognize its own position through SLAM (simultaneous localization and mapping).
The object tracking unit 119 determines whether the position of the mobile robot 100 measured with the distance sensor matches the position information of the mobile robot 100 on the map.
The object tracking unit 119 determines the initial position of the mobile robot 100. This position expresses how far the mobile robot 100 is from the charging station (not shown) at which it charges its battery, and is determined from the wheel revolutions (collected by a sensor called an encoder) and the rotation angle collected by the gyro sensor. That is, the initial position information of the mobile robot 100 consists of (x, y, radian), where radian denotes the rotation angle.
If the initial position of the mobile robot 100 is inaccurate, the coordinate information of the tracked object is also inaccurate, so the initial position of the mobile robot 100 must be registered.
Once the initial position of the mobile robot 100 has been registered, the object tracking unit 119 obtains, through the laser scanner 103, distance information indicating how far the object is from the mobile robot 100, and uses this distance information together with the initial position to calculate the position of the object as a position relative to the mobile robot 100.
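A minimal sketch of that relative-position calculation is shown below, assuming the scanner reports each detection as a range and a bearing relative to the robot heading and that the matched initial pose is the (x, y, radian) tuple described above.

```python
import math

def object_world_position(robot_pose, range_m, bearing_rad):
    """Convert a range/bearing measurement into map coordinates.

    robot_pose: (x, y, radian) of the robot in the map frame, e.g. the
                matched initial pose derived from encoder and gyro data.
    range_m, bearing_rad: distance and angle of the object relative to the robot.
    """
    x, y, heading = robot_pose
    obj_x = x + range_m * math.cos(heading + bearing_rad)
    obj_y = y + range_m * math.sin(heading + bearing_rad)
    return obj_x, obj_y
```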
Referring again to FIG. 1, the driving control unit 121 acts as a robot adapter and performs linear/angular velocity calculation, motor RPM (revolutions per minute) calculation, speed control, encoder travel-distance calculation, robot state-value management, and the like.
The driving control unit 121 includes a controller and interfaces for driving and controlling the hardware of the mobile robot 100. It calculates the linear velocity according to the distance to the tracked object output by the object recognition unit 119 and accelerates or decelerates accordingly. It also calculates the angular velocity according to the moving direction of the object, converts these values into RPM (revolutions per minute) for the differential motors, and controls the traveling device 105 on that basis.
The driving control unit 121 transmits the rotation direction and the left and right motor RPM values, converted from the linear and angular velocities, to the traveling device 105 in real time, so that the mobile robot 100 moves safely and smoothly to the target position. For safety, it also transmits state values of the mobile robot 100, such as whether the front or rear bumper has collided, and the left and right motor rotation encoder values to the traveling device 105; these are used for feedback control and, by calculating the travel distance and rotation direction of the mobile robot 100, as base information for map building and for tracing the travel path of the mobile robot 100.
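The conversion from the commanded linear and angular velocity to left and right motor RPM for a differential-drive base can be sketched as follows; the wheel radius, track width, and gear ratio are placeholders, not values from the present invention.

```python
import math

def diff_drive_rpm(v_mps, w_radps, wheel_radius_m=0.075, track_width_m=0.4, gear_ratio=1.0):
    """Return (left_rpm, right_rpm) for a differential-drive base.

    v_mps: linear velocity of the robot body (m/s).
    w_radps: angular velocity of the robot body (rad/s), positive = counter-clockwise.
    """
    v_left = v_mps - w_radps * track_width_m / 2.0    # wheel linear speeds
    v_right = v_mps + w_radps * track_width_m / 2.0
    to_rpm = 60.0 * gear_ratio / (2.0 * math.pi * wheel_radius_m)
    return v_left * to_rpm, v_right * to_rpm
```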
Destination information recognized through the screen output by the UI control unit 123 or through voice recognition is transferred to the driving control unit 121. Based on the result of calculating its own position and the object's position on the map, the driving control unit 121 outputs driving commands such as the rotation direction, linear velocity, and angular velocity to the traveling device 105.
The driving control unit 121 may divide the drive speed control into levels L1 to L7 (up to the speed of a fast human walk, e.g., 6 km/h).
The UI control unit 123 outputs the content required for work using the mobile robot 100 to the output device 109, and may display data received from the input device 107 on the screen. For example, the UI control unit 123 may provide a UI, such as a logistics list, item inspection, or transport route, through which a worker can improve work efficiency using the mobile robot.
According to an embodiment, the UI control unit 123 may provide UIs such as barcode recognition, item information, order information, and route search.
The remote control unit 125 is connected to the remote server 200 through the communication device 111 and cooperates with the remote server 200 to report the status of the mobile robot 100 and to carry out control operations. According to an embodiment, the remote control unit 125 provides the object recognition states (the five states) determined by the object recognition unit 119 to the remote server 200, so that the object recognition state of the mobile robot 100 can be queried from a remote site. The remote control unit 125 may receive a control command from the remote server 200 and control the mobile robot 100 to operate on the basis of that command.
The task management unit 127 is responsible for efficient operation of the mobile robot 100 through real-time monitoring, task management, operation-status management, and linkage with automated information systems.
The task management unit 127 may manage tracking authority. Tracking authority is used when the robot is operated not by an unspecified number of general users but by specific users, such as hotel employees, either while working in tracking mode or when a job ends and the robot is handed over to another employee. In this case, the robot is configured so that no one other than the newly designated tracking target can use it.
The operation of the mobile robot 100 will now be described.
FIG. 5 is a flowchart showing the operation of a mobile robot according to an embodiment of the present invention, illustrating the operation according to a service scenario at a specific site where the mobile robot is installed. The specific site may be, for example, a hotel, but is not limited thereto.
Referring to FIG. 5, a skeleton image DB of first users (e.g., employees; hereinafter collectively referred to as "employees" for convenience of description) is built in the storage device 113 of the mobile robot 100 or in the remote server 200 (S101).
The control unit 117 of the mobile robot 100 performs authentication before starting object recognition/tracking (S103). For example, the control unit 117 may perform authentication by requesting the input of authentication information stored in the storage device 113 (e.g., biometric information such as a fingerprint or face, or a password). Alternatively, the control unit 117 may perform authentication by requesting the input of authentication information stored in the remote server 200.
If authentication succeeds, the image capturing unit 101 of the mobile robot 100 captures an image of the area in front of the robot (S105).
The object tracking unit 119 of the mobile robot 100 skeletonizes the image generated in S105 to generate a skeleton image (S107).
The object tracking unit 119 of the mobile robot 100 compares the skeleton image generated in S107 with the skeleton images registered in the employee DB (S109) and determines whether the skeleton image represents an employee (S111).
If the person is determined to be an employee, the object tracking unit 119 of the mobile robot 100 extracts feature point information from the image generated in S105 and registers it (S113). Here, the feature point information is feature information that identifies a specific user; for example, it may include the color of the employee's uniform and marks such as the hotel label or name tag on the uniform. The feature point information may also be the skeleton image generated in S107.
The task management unit 127 of the mobile robot 100 performs the work requested by the employee and stores the work history (S115). The work may include building a map for driving or carrying luggage while following a person.
The object tracking unit 119 of the mobile robot 100 tracks the employee recognized in S109 using the laser scanner 103 (S117). Employee tracking using the laser scanner 103 corresponds to the operation of the object tracking unit 119 described above.
The object tracking unit 119 of the mobile robot 100 determines whether the tracked employee has been lost, that is, whether object tracking has failed (S119).
If object tracking has not failed, the process repeats from S117.
If object tracking is determined to have failed, the object tracking unit 119 extracts feature points from the image captured by the image capturing unit 101 at the current time, compares them with the feature points registered in S113, and searches for the matching object, that is, the employee (S121). In other words, the camera is used to search for a person with the same feature point information within a certain surrounding radius. Alternatively, a skeleton image may be generated and an object matching the skeleton image registered in S113 may be searched for.
The object tracking unit 119 requests the driving control unit 121 to move the mobile robot 100 to the position where the matching feature points were found, and the task management unit 127 presents the work history stored in S115 to the employee through the UI control unit 123 and asks whether the employee wishes to continue being followed (S123). That is, the previous work history is announced by voice or on the screen and the intention to continue tracking is confirmed.
If the UI control unit 123 receives a response accepting the tracking request, the object tracking unit 119 resumes from S115.
If the tracking request is declined, the object tracking unit 119 asks the driving control unit 121 to end object tracking and move the robot to the start position (S127). Alternatively, an employee may be called.
If the person is determined in S111 not to be an employee, that is, if the person is determined to be a second user (e.g., a customer; hereinafter collectively referred to as a "customer" in contrast to "employee" for convenience of description), the task management unit 127 asks the customer for additional guest information and destination information through the UI control unit 123 (S129). The additional guest information may be, for example, a room number.
The object tracking unit 119 extracts feature point information from the image generated in S105 and temporarily registers it (S131). Here, the feature point information may be the skeleton image generated in S107.
The object tracking unit 119 then generates navigation information to the destination based on the response received by the UI control unit 123 and starts route guidance (S133).
The object tracking unit 119 also tracks the customer identified in S109, that is, the object, using the laser scanner 103 (S135). Customer tracking using the laser scanner 103 corresponds to the operation of the object tracking unit 119 described above.
The object tracking unit 119 determines whether the customer strays while the robot travels to the destination (S137); if the customer does not stray, the process repeats from S135.
If the customer does stray, the image capturing unit 101 captures the area in front of the robot, the resulting image is skeletonized, and an object matching the previously generated skeleton image is searched for (S139).
Whether the search succeeds is then determined (S141); if it succeeds, the area in which the skeleton is detected is recognized as the object, that is, the customer, and the process repeats from S135.
If the search fails, the object tracking unit 119 calls an employee through the UI control unit 123 or requests the driving control unit 121 to end object tracking and move the robot to the start position (S143).
When noise or interference occurs while the target is being tracked with the laser signal, the object tracking unit 119 generates skeleton data from the image signal and re-recognizes the target.
For example, tracking may fail when many people are recognized during tracking. When boarding an elevator or in a hotel lobby, many people may continuously enter the laser scan range, and in such cases it may be difficult to identify the object by laser scanning alone. Tracking may also fail when the laser scan data are distorted by obstacles or noise, for example by a highly reflective wall or glass or by an obstacle that appears suddenly.
When object tracking by laser scanning fails in this way, skeleton data are generated and the registered skeleton data are used to re-recognize the object, as described above.
If re-recognition succeeds, object recognition and tracking with the laser scanner resume and continue to the destination. This is described with reference to FIG. 6.
FIG. 6 is a flowchart showing the object recognition/tracking operation of a mobile robot according to an embodiment of the present invention.
Referring to FIG. 6, the image capturing unit 101 captures the area in front of the mobile robot 100 and generates an image (S201). The object tracking unit 119 skeletonizes the image generated in S201 to generate a skeleton image (S203).
The object tracking unit 119 determines whether the skeleton image generated in S203 is a registered image (S205), that is, whether it has been registered as a tracking target. If it has not been registered as a tracking target, the skeleton image generated in S203 is registered as a tracking candidate object (S207), and S205 is performed again.
If the skeleton image is determined in S205 to be registered, the object tracking unit 119 recognizes the object with the laser scanner 103 (S209) and tracks the object (S211).
While the object is being tracked, it is determined whether noise or interference occurs (S213). If no noise or interference occurs, the process returns to S211.
If noise or interference does occur, the image captured at the current time (S215) is skeletonized (S217), and the resulting skeleton image is compared with the skeleton image generated in S203 or registered in S207 to determine whether it represents the object being tracked (S219).
If it is determined to be the tracked object, the process resumes from S209. If it is not the tracked object, object tracking ends and the robot moves to the start position (S221).
The process by which the object tracking unit 119 combines the image capturing unit 101 and the laser scanner 103 to recognize and track an object is described below.
FIG. 7 is a flowchart showing an object recognition/tracking operation of a mobile robot according to another embodiment of the present invention; FIG. 8 is a flowchart showing a process of detecting an object through a labeling process according to an embodiment of the present invention; FIG. 9 is a diagram illustrating a laser scan area according to an embodiment of the present invention; FIG. 10 is a diagram illustrating a coordinate system, with the mobile robot as the origin, in which laser scan data are collected according to an embodiment of the present invention; FIG. 11 is a diagram illustrating a process of extracting reliable data from labeled laser scan data according to an embodiment of the present invention; FIG. 12 is a diagram illustrating feature variables derived from laser scan data according to an embodiment of the present invention; FIG. 13 is a diagram illustrating a process of segmenting objects based on the feature variables according to an embodiment of the present invention; and FIG. 14 is a diagram illustrating a frame matching technique according to an embodiment of the present invention.
Referring first to FIG. 7, steps S301 to S313 correspond to S209 in FIG. 6, that is, the object recognition step, and steps S315 to S329 correspond to S211 in FIG. 6, that is, the object tracking step.
The object tracking unit 119 acquires the laser scan data of time point t1 from the laser scanner 103 (S301).
The object tracking unit 119 filters the laser scan data acquired in S301 (S303) and binarizes the filtered laser scan data (S305).
The object tracking unit 119 removes noise by applying a morphology operation to the binarized laser scan data (S307).
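One way to realize the binarization and morphology steps (S305, S307) is to rasterize the scan points into a two-dimensional occupancy grid and apply a morphological opening. This is only an illustrative sketch; the grid resolution, kernel size, and use of OpenCV are assumptions, not part of the present invention.

```python
import numpy as np
import cv2

def scan_to_binary_grid(points_xy, cell_m=0.05, grid_m=10.0):
    """Rasterize (x, y) scan points around the robot into a binary grid."""
    size = int(grid_m / cell_m)
    grid = np.zeros((size, size), dtype=np.uint8)
    half = grid_m / 2.0
    for x, y in points_xy:
        col = int((x + half) / cell_m)
        row = int((y + half) / cell_m)
        if 0 <= row < size and 0 <= col < size:
            grid[row, col] = 255
    return grid

def denoise(grid, kernel_px=3):
    """Morphological opening drops isolated returns before labeling."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_px, kernel_px))
    return cv2.morphologyEx(grid, cv2.MORPH_OPEN, kernel)
```

The labeling step (S309) can then be performed on the cleaned grid, for example with cv2.connectedComponents.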
The object tracking unit 119 labels the noise-free laser scan data (S309). It then compares the labeled objects with the skeleton image data and selects the matching object as the tracking target (S311). As one example, the distance and area of each labeled object may be calculated, and the object whose shape matches the skeleton on a plane of the same dimension may be detected.
Among the labeled objects, the object tracking unit 119 deletes all objects other than the tracking object selected in S311 (S313).
The object tracking unit 119 then acquires the laser scan data of time point t2 (S315). Here, t1 and t2 denote scan time units, for example seconds.
The object tracking unit 119 filters (S317) and binarizes (S319) the laser scan data of time point t2, and applies the morphology operation and labeling to the filtered and binarized laser scan data (S321, S323).
The object tracking unit 119 calculates the ratio of the area in which the objects labeled at time t1 and the objects labeled at time t2 overlap (S325), and recognizes the object with the largest overlap ratio as the tracking object (S327).
The object tracking unit 119 corrects the position of the tracked object calculated by the laser scanning method with a Kalman filter (S329). As described above, the position of the tracked object is calculated as a position relative to the mobile robot 100 using the initial position of the mobile robot 100 and the distance information from the mobile robot 100 to the tracked object. The distance information is generally obtained from the time it takes the beam emitted by the laser scanner 103 to be reflected from the object and return, but it is not limited to this and various methods may be used.
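The correction step (S329) is commonly realized with a constant-velocity Kalman filter over the (x, y) position of the tracked object. The sketch below assumes that model together with example noise values; it is not the implementation of the present invention.

```python
import numpy as np

class ConstantVelocityKalman:
    """Smooth the laser-derived (x, y) position of the tracked object."""

    def __init__(self, dt=1.0, process_var=0.1, meas_var=0.05):
        self.x = np.zeros(4)                      # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = dt; self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = 1.0; self.H[1, 1] = 1.0
        self.Q = process_var * np.eye(4)          # process noise (assumed)
        self.R = meas_var * np.eye(2)             # measurement noise (assumed)

    def update(self, meas_xy):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the laser measurement
        z = np.asarray(meas_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                         # smoothed (x, y)
```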
The filtering processes (S303, S317) of the object tracking unit 119 are shown in detail in FIG. 8.
Referring to FIG. 8, the object tracking unit 119 first filters, from the laser scan data collected by the laser scanner 103, only the data at the heights designated as the scan areas (S401).
To distinguish individual people, the object tracking unit 119 may define specific parts of the human body as the laser scan areas for identifying the tracking object. The first area is the legs, the part with the least interference from other body parts and therefore well suited to feature extraction. However, when many people stand close together, tracking the target from the legs alone may be difficult. To compensate, the hips and back are additionally analyzed to extract the tracking object corresponding to a person.
FIG. 9(a) shows the main heights up to each part used for identifying the tracking object, that is, the legs, hips, and back. FIG. 9(b) shows laser scan data collected by the laser scanner 103. FIG. 9(c) shows the scan data after all data except those in the scan areas have been filtered out. That is, compared with FIG. 9(b), in FIG. 9(c) all scan data except those at knee height, pelvis height, and back height have been deleted, so that unnecessary scan data are excluded from the computation.
The object tracking unit 119 compares the laser scan data with the heights h1, h2, and h3 of the scan areas and stores each point as data of the corresponding height if the difference does not exceed the error threshold Wthreshold. The data of the same height, classified into the three heights (legs, hips, back), are referred to as group data.
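The first-stage filtering (S401) can be sketched as follows: each three-dimensional scan point is compared against the three target heights and kept only when the height difference is within the threshold. The dictionary keys mirror the leg, hip, and back areas, while the height values and the tolerance are assumptions rather than values from the description.

```python
# Heights (in metres) of the three scan areas; h1, h2, h3 and W_threshold
# follow the text, but the concrete numbers below are assumed.
SCAN_HEIGHTS = {"leg": 0.45, "hip": 0.90, "back": 1.20}
W_THRESHOLD = 0.05

def filter_by_scan_area(points_xyz, heights=SCAN_HEIGHTS, tol=W_THRESHOLD):
    """Return {'leg': [...], 'hip': [...], 'back': [...]} group data,
    dropping every point whose height matches none of the scan areas."""
    groups = {name: [] for name in heights}
    for x, y, z in points_xyz:
        for name, h in heights.items():
            if abs(z - h) <= tol:
                groups[name].append((x, y))
                break
    return groups
```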
Referring to FIG. 10, such laser scan data can be expressed in a coordinate system (x, y) with the mobile robot 100 as the origin.
The object tracking unit 119 labels the group data, collected by the laser scanner 103 and filtered to the scan areas, by grouping data with similar characteristics in order to distinguish objects with different characteristics. The similarity criterion allows the human body part to be judged from the shape of the laser scan data (the point clusters), and individual people can be distinguished and grouped according to the clustered shapes.
From the reliable laser scan data, the object tracking unit 119 defines the variables that can be considered for labeling the feature candidates as in Table 7. These variables are shown in FIG. 12.
[Table 7 (Figure PCTKR2020008496-appb-T000007): variables considered for labeling the feature candidates, including D, the distance between two adjacent points; d, the total width of the group data; h, the width of the group; and p, the average of the adjacent-point distances]
In Table 7, an object can be identified by the width d, the breadth h, and the value p of a group, so the feature variables are d, h, and p. Their derivation is given by Equation 4.
[Equation 4 (Figure PCTKR2020008496-appb-I000004): derivation of the feature variables d, h, and p]
Referring again to FIG. 8, the object tracking unit 119 splits the group data using the shortest distance (Δd, D) between two adjacent points at the designated height as the reference (S403).
Referring to FIG. 13, D denotes the distance between two adjacent points in a sequence of consecutive points; when it exceeds the distance limit for the corresponding height (Dleg, Dhip, Dback), the data are split, that is, divided so that they are detected as different objects.
In other words, when the distance D between two adjacent points exceeds the limit, the points are recognized as belonging to different groups. The limits (Dleg, Dhip, Dback) are set to the maximum distance by which adjacent parts of a human body can be separated. Reliable data are then extracted within each of the resulting data groups (S405); that is, secondary filtering is performed in which a filtering criterion is applied within each split data group to remove unnecessary data (S405). This is described below.
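A sketch of that segmentation step is shown below: the points of one height group are walked in scan order and a new segment is started whenever two adjacent points are farther apart than the limit for that body part (Dleg, Dhip, Dback). The limit values used here are placeholders.

```python
import math

D_LIMITS = {"leg": 0.25, "hip": 0.45, "back": 0.45}   # metres, assumed values

def split_group(points_xy, body_part, limits=D_LIMITS):
    """Split one height group into per-person segments by adjacent-point distance."""
    limit = limits[body_part]
    segments, current = [], []
    prev = None
    for p in points_xy:                     # points assumed ordered by scan angle
        if prev is not None:
            d = math.hypot(p[0] - prev[0], p[1] - prev[1])
            if d > limit:                   # gap larger than one body can span
                segments.append(current)
                current = []
        current.append(p)
        prev = p
    if current:
        segments.append(current)
    return segments
```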
FIG. 11 shows the process of extracting reliable data from the group data of each height. The triangles represent unnecessary data, and the sensor datum at the center is A. Using the laser scan data extracted in this way, the object tracking unit 119 derives the variable values considered as features for object identification.
Feature analysis and labeling start by extracting the required number of reliable data points, beginning from the sensor datum at the center of the labeled laser scan data.
When selecting the number of reliable data points, the object tracking unit 119 applies a different rule to each body part. For the legs and back of the tracking target, unnecessary data are present within the group data; the object tracking unit 119 therefore removes a fixed number of points from both ends of the group data and then determines the number of reliable points, counted from the center, within the remaining data. For the hips, on the other hand, arm and hip data are often extracted together. When the arms fall into the same group, the amount of unnecessary data grows with arm movement, so the object tracking unit 119 subtracts the maximum number of unnecessary data points from the group size and selects the number of reliable points based on the remaining count.
The two ends of the group data lie in the horizontal direction. In FIG. 11 the number of unnecessary data points is six, and because the hip scan data and the arms are scanned together the number of unnecessary points can increase, so the number of points to be removed in advance can be set larger. Equation 5 defines the data analysis for the leg and back heights.
[Equation 5]

$$N_{trust} = N_{group} - N_{outlier}$$
Here, Ntrust is the number of selected reliable data points, Ngroup is the total number of points in the group data, and Noutlier is the number of points excluded from the group data. That is, Equation 5 selects the reliable data by subtracting the number of consistently occurring unnecessary points from the total group data.
Using Equation 5, analysis can be based on reliable data because the consistently occurring unnecessary points are subtracted from the total group data, and because the number of reliable points is variable, the characteristics of each person can be reflected.
Equation 6 defines the data analysis for the hip height of a person.
[Equation 6]

$$N_{trust} = N_{group} - outlier_{max}$$
Here, Ntrust is the number of selected reliable data points, Ngroup is the total number of points in the group data, and outliermax is the maximum number of points that can be excluded.
That is, Equation 6 fixedly uses a maximum count for the reliable data; in this case, consistent features can be derived even in situations where unnecessary data are generated.
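Equations 5 and 6 amount to the following small helper, which keeps the centre-most points of one labeled group for feature analysis; the fixed outlier counts are example values, not values from the description.

```python
def trusted_points(group, body_part, n_outlier=6, outlier_max=10):
    """Return the centre-most 'trusted' points of one labeled group.

    Legs/back (Equation 5): N_trust = N_group - N_outlier.
    Hips      (Equation 6): N_trust = N_group - outlier_max.
    """
    n_group = len(group)
    if body_part in ("leg", "back"):
        n_trust = max(n_group - n_outlier, 0)
    else:  # "hip": arms may be mixed in, so a fixed maximum is removed
        n_trust = max(n_group - outlier_max, 0)
    start = (n_group - n_trust) // 2        # take the points around the centre
    return group[start:start + n_trust]
```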
Referring again to FIG. 7, steps S325 and S327 of the object tracking unit 119 are described with reference to FIG. 14.
Referring to FIG. 14, the object tracking unit 119 continuously tracks the detected object through a frame-to-frame matching technique. A frame matching code example of the object tracking unit 119 is shown in Table 8.
[Table 8 (Figure PCTKR2020008496-appb-T000008): frame matching code example of the object tracking unit 119]
The object tracking unit 119 calculates the overlapping area in which the previous frame P10 and the current frame P20 intersect. Here, delta_x (a) and delta_y (b) are measured from the coordinates of ①, ②, ③, and ④, and the overlapping area is calculated as the product of delta_x (a) and delta_y (b). Expressed as an equation, this is Equation 7.
The coordinates of ①, ②, ③, and ④ are obtained through well-known mathematical methods and are assumed to be provided in advance.
[Equation 7]

$$A_{overlap} = \Delta x \cdot \Delta y = a \cdot b$$
Among the currently detected objects, the object whose intersection over union with the object detected in the previous frame is the largest is selected as the object to track; expressed as an equation, this is Equation 8.
[Equation 8]

$$IoU = \frac{A_{overlap}}{A_{P10} + A_{P20} - A_{overlap}}$$
The "intersection over union" of Equation 8 is obtained by dividing the overlapping area by the sum of the two object areas minus the overlapping area.
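Putting Equations 7 and 8 together, the frame-to-frame matching reduces to picking the current detection with the highest intersection over union against the previous frame's tracked region. The sketch below assumes axis-aligned (x1, y1, x2, y2) rectangles; it is an illustration, not the code of Table 8.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    delta_x = min(ax2, bx2) - max(ax1, bx1)          # Equation 7: a
    delta_y = min(ay2, by2) - max(ay1, by1)          # Equation 7: b
    if delta_x <= 0 or delta_y <= 0:
        return 0.0
    overlap = delta_x * delta_y
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return overlap / (area_a + area_b - overlap)     # Equation 8

def match_tracked_object(prev_box, current_boxes):
    """Select the current detection that best overlaps the previous frame's box."""
    scores = [iou(prev_box, b) for b in current_boxes]
    best = max(range(len(scores)), key=scores.__getitem__, default=None)
    return best if best is not None and scores[best] > 0.0 else None
```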
The embodiments of the present invention described above are not implemented only through an apparatus and a method; they may also be implemented through a program that realizes the functions corresponding to the configuration of the embodiments, or through a recording medium on which the program is recorded.
Although the embodiments of the present invention have been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.

Claims (17)

1. A mobile robot, comprising:
at least one image capturing unit installed at a predetermined position of the mobile robot;
a laser scanner installed at a predetermined position of the mobile robot;
a memory storing a program for performing object recognition and object tracking using the at least one image capturing unit and the laser scanner; and
at least one processor executing the program,
wherein the program
includes instructions for recognizing an object to be tracked based on an image generated by the at least one image capturing unit, detecting and tracking the object to be tracked through laser scan data, and, when object tracking fails, driving the at least one image capturing unit to compare an image captured at the current time with a pre-stored image and re-detecting the object based on the laser scan data output at the current time.
2. The mobile robot of claim 1, wherein
the program
includes an instruction for intersecting a plurality of object areas detected from laser scan data collected at a first time point with a plurality of object areas detected from laser scan data collected at a second time point later than the first time point, and recognizing the object with the largest intersection-area ratio as the tracking object.
3. The mobile robot of claim 2, wherein
the program
includes an instruction for removing noise by applying a morphology operation to the two-dimensionally arranged laser scan data.
4. The mobile robot of claim 3, wherein
the program
includes instructions for labeling the noise-removed laser scan data and detecting at least one area having the same label as an individual object, and
for comparing each area detected as an object with the image and selecting the matching object as the tracking object.
5. The mobile robot of claim 1, wherein
the program
includes an instruction for detecting an object based on laser sensor data collected in a predetermined scan area,
wherein the predetermined scan area
has a range corresponding to a height set for each body part of a person from the ground.
6. The mobile robot of claim 5, wherein
the program
includes instructions for, in the laser sensor data grouped by the predetermined scan areas, splitting the data groups based on the shortest distance between the point clusters constituting the grouped laser sensor data, and calculating feature variables for each scan area within the split data groups.
7. The mobile robot of claim 6, wherein
the feature variables
include the total width of the group data containing the point clusters, the distances between two adjacent point clusters, the width of the group, and the average of the distances.
8. A method of controlling a mobile robot, the method comprising:
collecting laser scan data;
converting the laser scan data, expressed in a polar coordinate system, into a plurality of points having position values in a Cartesian coordinate system;
comparing the laser scan data converted into point form with image data captured by a camera, and binarizing the laser scan data using the pixel values of the image corresponding to each point;
labeling the laser scan data to detect a plurality of objects;
comparing the plurality of detected objects with the image data and selecting a matching object as a tracking object; and
driving toward the tracking object.
9. The control method of claim 8, wherein
the collecting of the laser scan data further comprises removing noise by applying a morphology mask,
wherein the morphology mask is preset to a size that can cover a person's stride.
10. The control method of claim 8, wherein
the driving further comprises
comparing a plurality of objects detected from the laser scan data of a first scan time point with a plurality of objects detected from the laser scan data of a second scan time point later than the first scan time point, and recognizing the object with the largest overlapping-area ratio as the tracking object.
11. The control method of claim 10, wherein
the recognizing of the object with the largest overlapping-area ratio as the tracking object is selectively performed according to a preset condition.
12. The control method of claim 8, wherein
the collecting of the laser scan data further comprises
filtering, from the laser scan data, the scan data corresponding to the heights set for the body parts of a person.
13. The control method of claim 8, wherein
the detecting comprises:
applying the binarized laser scan data as a labeling target;
splitting and grouping the data applied as the labeling target based on the distance between the points; and
performing labeling to classify the grouped points as the same object.
14. A method of operating a mobile robot, the method comprising:
performing authentication for object tracking in cooperation with a remote server;
when the authentication succeeds, capturing an image of the area ahead and skeletonizing the generated image to generate a skeleton image;
comparing skeleton images of first users registered in advance in a user database with the generated skeleton image to determine whether the generated skeleton image represents a first user;
when the person is determined to be a first user, performing work requested by the first user while tracking the first user with a laser scanner, and storing the work history; and
when the person is determined not to be a first user, requesting and receiving input of second-user information and destination information, and performing route guidance to the input destination while tracking the second user with the laser scanner.
  15. 제14항에서,In clause 14,
    상기 레이저 스캐너를 이용한 제1 사용자 추적 또는 제2 사용자 추적은,The first user tracking or the second user tracking using the laser scanner,
    극좌표계로 구성된 레이저 스캔 데이터를 직교 좌표계의 위치 값을 가지는 복수개의 점군 형태로 변환하고, Convert laser scan data composed of polar coordinates into a plurality of point groups having position values in a Cartesian coordinate system,
    점군 형태로 변환된 레이저 스캔 데이터를 카메라로 촬영한 영상 데이터와 비교하여 점군 별로 해당하는 영상의 픽셀값을 이용하여 이진화하며, The laser scan data converted in the form of a point cloud is compared with the image data captured by the camera and binarized using the pixel values of the image corresponding to each point group,
    이진화된 레이저 스캔 데이터를 레이블링하여 검출한 복수개의 객체를 상기 영상 데이터와 비교하여 일치하는 객체를 추적 객체로 선별하는 방식으로 이루어지는, 동작 방법.An operation method comprising a method of comparing a plurality of objects detected by labeling the binarized laser scan data with the image data to select matching objects as tracking objects.
  16. The method of claim 15, wherein the storing comprises:
    when the first user is identified, extracting feature-point information from the captured image and registering it;
    performing the task requested by the first user and storing the task history;
    tracking the first user with the laser scanner;
    when tracking of the first user fails, extracting feature-point information from an image newly captured of the area in front of the robot, comparing the extracted feature-point information with the registered feature-point information, and identifying the object at the matching location as the first user; and
    moving to the identified location of the first user, presenting the previous task history, asking whether the user wishes to be followed, continuing the tracking if the user accepts, and ending object tracking and returning to the start position if the user declines.
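One way to picture the recovery step of claim 16 (re-identifying the first user by comparing newly extracted feature points with the registered ones) is a descriptor-matching sketch. ORB features and the match-count threshold are choices made here only for illustration; the publication does not name a specific feature type.

```python
# Sketch with assumed detector and threshold: register descriptors when the
# user is first identified, then compare the current frame's descriptors
# against them to decide whether the same user is seen again.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_features(image_gray: np.ndarray):
    """Extract and keep the descriptors of the user at first identification."""
    _, descriptors = orb.detectAndCompute(image_gray, None)
    return descriptors

def looks_like_registered_user(current_gray: np.ndarray,
                               registered_desc, min_matches: int = 25) -> bool:
    """Compare current-frame descriptors with the registered ones."""
    _, desc = orb.detectAndCompute(current_gray, None)
    if desc is None or registered_desc is None:
        return False
    matches = bf.match(registered_desc, desc)
    return len(matches) >= min_matches

# Usage: comparing a frame with itself is expected to report a match.
frame0 = np.random.randint(0, 255, (240, 320), np.uint8)
reg = register_features(frame0)
print(looks_like_registered_user(frame0, reg))
```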
  17. The method of claim 15, further comprising:
    after the providing of the route guidance, when tracking of the second user fails, skeletonizing an image captured of the area in front of the robot to generate a skeleton image of the current time; and
    comparing the previously generated skeleton image with the skeleton image of the current time to search for a matching object, and tracking the found object.
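The skeleton comparison of claim 17 can be illustrated by reducing each skeleton to 2-D joint coordinates and comparing two normalised joint sets. The joint representation and the 0.1 tolerance are assumptions for the sketch, not details from the publication.

```python
# Sketch with an assumed skeleton format: a skeleton is an array of 2-D joint
# positions; two skeletons match when, after removing position and scale,
# the mean distance between corresponding joints is small.
import numpy as np

def normalize_skeleton(joints: np.ndarray) -> np.ndarray:
    """joints: (J, 2) pixel coordinates -> centred, unit-scale coordinates."""
    centred = joints - joints.mean(axis=0)
    scale = np.linalg.norm(centred, axis=1).max()
    return centred / max(scale, 1e-6)

def skeletons_match(a: np.ndarray, b: np.ndarray, tol: float = 0.1) -> bool:
    """True when the mean distance between corresponding joints is below tol."""
    return float(np.mean(np.linalg.norm(normalize_skeleton(a) -
                                        normalize_skeleton(b), axis=1))) < tol

previous = np.array([[100, 50], [100, 90], [80, 120], [120, 120]], float)
current = previous * 1.2 + 15           # same pose, shifted and scaled
print(skeletons_match(previous, current))   # True: the pose is preserved
```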
PCT/KR2020/008496 2019-08-27 2020-06-30 Mobile robot and method for controlling same WO2021040214A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190105211A KR102369062B1 (en) 2019-08-27 2019-08-27 Moving robot and method for control of the moving robot
KR10-2019-0105211 2019-08-27

Publications (1)

Publication Number Publication Date
WO2021040214A1 true WO2021040214A1 (en) 2021-03-04

Family

ID=74684506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008496 WO2021040214A1 (en) 2019-08-27 2020-06-30 Mobile robot and method for controlling same

Country Status (2)

Country Link
KR (1) KR102369062B1 (en)
WO (1) WO2021040214A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102546156B1 (en) * 2021-12-21 2023-06-21 주식회사 트위니 Autonomous logistics transport robot
KR102474868B1 (en) * 2022-02-25 2022-12-06 프로그라운드 주식회사 Real-time exercise support system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4413957B2 (en) * 2007-08-24 2010-02-10 株式会社東芝 Moving object detection device and autonomous moving object
KR101188584B1 (en) * 2007-08-28 2012-10-05 주식회사 만도 Apparatus for Discriminating Forward Objects of Vehicle by Using Camera And Laser Scanner
KR101486308B1 (en) * 2013-08-20 2015-02-04 인하대학교 산학협력단 Tracking moving objects for mobile robots control devices, methods, and its robot
KR20180080498A (en) * 2017-01-04 2018-07-12 엘지전자 주식회사 Robot for airport and method thereof
KR20180083569A (en) * 2017-01-13 2018-07-23 주식회사 웨이브엠 Transpotation robot and method of operating transpotation robot based on internet of things

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422712A (en) * 2022-03-29 2022-04-29 深圳市海清视讯科技有限公司 Sphere detection tracking method and device, electronic equipment and storage medium
CN114422712B (en) * 2022-03-29 2022-06-24 深圳市海清视讯科技有限公司 Sphere detection tracking method and device, electronic equipment and storage medium
CN117021117A (en) * 2023-10-08 2023-11-10 电子科技大学 Mobile robot man-machine interaction and positioning method based on mixed reality
CN117021117B (en) * 2023-10-08 2023-12-15 电子科技大学 Mobile robot man-machine interaction and positioning method based on mixed reality

Also Published As

Publication number Publication date
KR102369062B1 (en) 2022-02-28
KR20210025327A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
WO2021040214A1 (en) Mobile robot and method for controlling same
WO2018135870A1 (en) Mobile robot system and control method thereof
WO2015183005A1 (en) Mobile device, robot cleaner, and method for controlling the same
WO2018038488A1 (en) Mobile robot and control method therefor
WO2021006677A2 (en) Mobile robot using artificial intelligence and controlling method thereof
WO2020241930A1 (en) Method for estimating location using multi-sensor and robot for implementing same
WO2015194867A1 (en) Device for recognizing position of mobile robot by using direct tracking, and method therefor
WO2015194866A1 (en) Device and method for recognizing location of mobile robot by means of edge-based readjustment
WO2018230852A1 (en) Method for identifying moving object in three-dimensional space and robot for implementing same
WO2020139064A1 (en) Cleaning robot and method of performing task thereof
WO2018070687A1 (en) Airport robot and airport robot system comprising same
WO2018186583A1 (en) Method for identifying obstacle on driving ground and robot for implementing same
WO2020241934A1 (en) Method for estimating position by synchronizing multi-sensor, and robot for implementing same
AU2018216517B9 (en) Cleaner
WO2017188800A1 (en) Mobile robot and control method therefor
WO2018143620A2 (en) Robot cleaner and method of controlling the same
WO2018117616A1 (en) Mobile robot
WO2020230931A1 (en) Robot generating map on basis of multi-sensor and artificial intelligence, configuring correlation between nodes and running by means of map, and method for generating map
WO2020141900A1 (en) Mobile robot and driving method thereof
WO2020046038A1 (en) Robot and control method therefor
WO2016048077A1 (en) Cleaning robot and method for controlling cleaning robot
WO2019199112A1 (en) Autonomous work system and method, and computer-readable recording medium
WO2020027515A1 (en) Mobile robot for configuring attribute block
WO2017188708A2 (en) Mobile robot, system for multiple mobile robots, and map learning method of mobile robot
WO2020256370A1 (en) Moving robot and method of controlling the same

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 20856241; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 20856241; Country of ref document: EP; Kind code of ref document: A1