WO2019216673A1 - Object guidance system and method for unmanned moving body - Google Patents

Object guidance system and method for unmanned moving body

Info

Publication number
WO2019216673A1
Authority
WO
WIPO (PCT)
Prior art keywords
marker
image
distance
unmanned moving
moving object
Prior art date
Application number
PCT/KR2019/005581
Other languages
French (fr)
Korean (ko)
Inventor
김경욱
유근태
Original Assignee
주식회사 아이피엘
Priority date
Filing date
Publication date
Application filed by 주식회사 아이피엘
Publication of WO2019216673A1 publication Critical patent/WO2019216673A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0225 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Definitions

  • The present invention relates to an object guidance system, and more particularly to an object guidance system and method installed in a device such as a mobile robot to guide the device to a destination by controlling its direction of movement, and to a docking system for an unmanned moving object using the same.
  • In general, an unmanned moving object such as a drone or a robot runs on a built-in battery, so the battery must be recharged after the object has been driven for a certain time.
  • In particular, an unmanned moving object such as a robot cleaner works over a fixed working area without user intervention, so when its battery discharges below a certain level it must locate the docking station and return to it automatically to recharge.
  • To this end, a docking station that receives commercial AC power through terminal contact to charge the battery is installed on one side of the unmanned moving object's workspace.
  • The docking station continuously transmits a signal from an infrared (IR) transmitter to announce its position; when the battery reaches a low-battery state, the unmanned moving object receives this signal through its infrared receiver, moves to the docking station, and docks with it to charge the battery.
  • A conventional docking system measures the position of and distance to the docking station from the angle of incidence or intensity of the infrared signal transmitted by the docking station's infrared transmitter and received by the unmanned moving object's infrared receiver, and returns to the station accordingly.
  • However, because this prior art controls the moving direction and speed of the unmanned moving object from the incident angle of the received infrared signal or from signal overlap, and in particular controls the moving speed from the intensity of the infrared signal detected by the infrared receiver, fine control of speed and direction according to the position of and distance to the docking station, the target point, is difficult.
  • The present invention has been devised to solve the above problems. One object of the present invention is to provide an object guidance system and method for an unmanned moving object that allow an unmanned moving object, such as a robot cleaner, to move to and dock at a target point, such as a docking station, automatically.
  • Another object of the present invention is to provide an object guidance system and method for an unmanned moving object that apply a marker recognition technique to calculate the optimal moving direction and speed of the unmanned moving object using a minimum of sensors.
  • According to an embodiment of the present invention, an object guidance system for an unmanned moving object is mounted on the unmanned moving object and a docking station, and includes an induction apparatus including a marker that is identified when a light source is lit, and a detection apparatus that detects the marker and calculates the distance and direction from the current position of the unmanned moving object to the induction apparatus, generating first and second position information of different precisions.
  • The detection apparatus may request movement of the unmanned moving object along the movement route included in the first or second position information.
  • The induction apparatus may include a marker unit, which includes at least one LED light source and the marker arranged on the light-emitting surface of the LED light source with a rectangular border, and an infrared receiver, which receives the LED control signal transmitted from the detection apparatus and outputs a drive signal to the LED light source according to that signal.
  • The detection apparatus may include an infrared transmitter that transmits an LED control signal to the induction apparatus; an image sensor that acquires a detection image of the induction apparatus; a marker extractor that, when the marker is detected in the detection image, generates the first position information by calculating the distance and angle to the induction apparatus based on the detected marker; a target detector that, when the marker is not detected in the detection image, generates the second position information by calculating the distance and angle to the induction apparatus based on the induced light emission signal produced by the induced light emission driving of the induction apparatus; and a controller that requests movement of the unmanned moving object according to the first or second position information.
  • The marker extractor may obtain the three-dimensional coordinates of the marker through a homography operation using the four vertex coordinates contained in the marker image and the preset actual marker size, and may calculate the distance and angle corresponding to those coordinates.
  • The target detector may transmit, through the infrared transmitter, an LED control signal requesting the induced light emission driving, and may generate the second position information by calculating the distance and angle to a target area set on the basis of a plurality of images captured continuously by the image sensor over a predetermined period.
  • When the image sensor acquires a plurality of first images of the surroundings during the output period of the induced light emission signal, the target detector may apply a difference-image processing technique to detect one or more change areas in the first images and the number of changes in each change area, and may perform a registration procedure that registers as target areas the change areas whose change counts exceed a threshold.
  • The induced light emission signal may be a signal that makes the LED light source turn on and off repeatedly with a period of one second or less.
  • When there is one target area, the target detector calculates the distance and angle to the induction apparatus from the size and center coordinates of the target area. When there are two or more target areas, the target detector assigns a weight to each target area and, when the image sensor acquires a plurality of second images of the surroundings during the non-output period of the induced light emission signal, applies the difference-image processing technique to detect one or more change areas in the second images and deletes any target area that overlaps a detected change area. The change-area detection process using the first and second images is repeated up to a designated number of times until a single target area remains, and the weight of each target area that is detected repeatedly is increased, so that the one target area with the highest weight is set.
  • The angle and distance to the induction apparatus may satisfy Equations 1 and 2 (given as images in the original filing); in Equation 2, tl is the diagonal length of the target area in the image, Real_M is the maximum detectable distance, Real_m is the minimum detectable distance, Img_M is the image diagonal length at the maximum detectable distance, and Img_m is the image diagonal length at the minimum detectable distance.
  • According to another aspect of the present invention, an object guidance method performed by the object guidance system for an unmanned moving object includes the steps of: (a) turning on the light source; (b) executing the marker extractor to detect the marker; (c) when the marker is detected, calculating the distance and angle between the detection apparatus and the marker and outputting search completion; (d) when the marker is not detected, executing the target detector to detect the induction apparatus; (e) when the induction apparatus is detected, calculating the distance and angle between the detection apparatus and the induction apparatus, requesting the unmanned moving object to move toward the induction apparatus, and returning to step (a); (f) when the induction apparatus is not detected, turning off the light source and outputting search failure; and (g) requesting the unmanned moving object to change its search direction and returning to step (a).
  • Step (b) may include: (b1) acquiring a marker image from the image sensor; and (b2) extracting the marker. Step (c) may include: (c1) when no marker is detected, outputting search failure and proceeding to step (d); (c2) when a marker is detected, obtaining the four vertex coordinates from the detected marker; (c3) obtaining the three-dimensional coordinates of the marker through a homography operation using the obtained vertex coordinates and the preset actual marker size; (c4) calculating the distance and angle to the marker; and (c5) outputting the distance and angle.
  • Step (d) may include: (d1) turning on the light source; (d2) acquiring first images from the image sensor for a predetermined period and searching for change areas in the first images to set target areas; (d3) making the light source blink; (d4) acquiring second images from the image sensor for a predetermined period, searching the first images for change areas according to the second images to set one or more target areas, and deleting target areas in which changes occur continuously; (d5) assigning a weight to each target area; (d6) when no target area exists after steps (d2) and (d4), outputting target search failure and proceeding to step (f); (d7) when exactly one target area exists after steps (d2) and (d4), calculating the distance and angle to the induction apparatus, outputting them, and proceeding to step (e); (d8) in step (d7), when there are two or more target areas and the designated number of repetitions has not been exceeded, weighting the current target areas and returning to step (d1); and (d9) in step (d7), when there are two or more target areas and the designated number has been exceeded, selecting the one target area with the highest weight among the current target areas and proceeding to step (d7).
  • According to an embodiment of the present invention, a detection apparatus that detects the position of the marker and an induction apparatus that displays the marker, mounted on the unmanned moving object and the docking station, allow the unmanned moving object, when far away, to approach within a certain distance of the docking station using approximate first position information and, when it comes near, to be seated on the docking station using precisely generated second position information.
  • In addition, a low-cost image sensor that cannot identify a marker on a distant subject generates approximate first position information until the object reaches close range, and precise second position information thereafter, so that even a low-resolution, low-cost image sensor can guide the object precisely to the docking station.
  • FIG. 1 is a view showing an application example of the object guidance system for an unmanned moving object according to an embodiment of the present invention.
  • FIG. 2 is a view showing the structure of an object guidance system according to an embodiment of the present invention.
  • FIGS. 3 to 5 are diagrams illustrating an object guidance method according to an embodiment of the present invention.
  • The various techniques described herein may be implemented in hardware or software or, where appropriate, in a combination of both.
  • Terms such as "unit", "module", "device", and "system" used herein likewise refer to computer-related entities: hardware, a combination of hardware and software, software, or software in execution.
  • In addition, an application program executed in a user terminal may be organized into "units" and may be recorded in a single physical memory, or distributed across two or more memories or recording media, in a readable, writable, and erasable form.
  • FIG. 1 is a view showing an application example of the object guidance system for an unmanned moving object according to an embodiment of the present invention.
  • Referring to FIG. 1, the object guidance system for an unmanned moving object may be applied to an unmanned moving object 10, such as a drone or a robot, and a docking station 20 at which the unmanned moving object 10 docks.
  • The unmanned moving object 10 may be a mobile robot or the like that moves on the ground, and a motherboard carrying circuits such as a controller and a power supply unit may be built into it.
  • A moving unit 11 including a motor may be provided on the lower portion of the unmanned moving object 10, and a display 12 for showing a driving screen may be provided on its front surface.
  • The docking station 20 is a device at which the unmanned moving object docks to perform charging and the like, and it may be fixedly installed in one region.
  • The front side of the docking station 20 may be provided with an output terminal 21 for connecting to the charging terminal installed on the unmanned moving object 10.
  • The unmanned moving object 10 and the docking station 20 to which the object guidance system according to an embodiment of the present invention is applied are equipped with means for detecting each other and inducing movement.
  • On the front or rear surface of the unmanned moving object 10, the image sensor 220, which photographs the surroundings to generate images, and the infrared transmitter 210, which transmits to the docking station 20 the LED control signal for driving the marker unit 320, may be exposed to the outside.
  • On the docking station 20, an infrared receiver 310 for receiving the LED control signal transmitted from the unmanned moving object 10 and the marker unit 320, which includes the marker serving as the identification means for guiding the unmanned moving object 10, may be exposed.
  • When docking is induced, the infrared transmitter 210 of the unmanned moving object 10 transmits the LED control signal, and the infrared receiver 310 of the docking station 20 receives it and lights the marker unit 320.
  • When the image sensor 220 of the unmanned moving object 10 then photographs the surroundings, an image of the marker unit 320 is acquired, and the distance and angle to the docking station 20 are calculated from that image to obtain the movement path.
  • The unmanned moving object 10 attempts to approach and dock at the docking station 20 automatically by controlling the moving unit 11 according to the obtained movement path.
  • In particular, the object guidance system may adaptively select the position-coordinate calculation algorithm to execute according to the current separation distance between the unmanned moving object 10 and the docking station 20.
  • The object guidance system of the present invention applies separate algorithms within 10 m, the distance at which the shape of the marker unit 320 can be recognized accurately given the performance of the image sensor, and at 10 m or more, where recognition performance drops significantly, generating first or second position information accordingly and controlling the movement of the unmanned moving object 10.
  • Within that distance, position information can be generated by precisely calculating the distance and angle between the unmanned moving object 10 and the docking station 20 based on the marker shape, and the unmanned moving object 10 is moved accordingly.
  • At long range, where it is difficult to calculate the exact distance and angle because the marker of the marker unit 320 cannot be resolved, the distance and angle are calculated roughly toward the area estimated to be the marker, position information is generated from them, the unmanned moving object 10 is moved to within short range of the docking station 20, and the marker unit 320 is then searched for again so that the unmanned moving object 10 can be moved precisely.
  • That is, by being mounted on an unmanned moving object, such as a mobile robot, and a docking station, and by applying different algorithms according to their separation, the object guidance system can move the unmanned moving object precisely to a nearby destination as well as from far away, even though it uses an inexpensive image sensor whose recognition accuracy extends only to about 10 m.
  • FIG. 2 is a view showing the structure of an object guidance system according to an embodiment of the present invention.
  • Referring to FIG. 2, the object guidance system 100 may include a detection apparatus 200, which detects the marker 340 and calculates the distance and direction from the current position of the unmanned moving object to the induction apparatus 300 to generate first and second position information of different precisions, and an induction apparatus 300, which includes the marker 340 displayed when the light source 330 is lit; the detection apparatus 200 may request movement of the unmanned moving object along the movement path included in the first or second position information according to the distance.
  • The detection apparatus 200 and the induction apparatus 300 may each be implemented as an independent circuit member and mounted on the unmanned moving object (10 of FIG. 1) and the docking station (20 of FIG. 1), respectively.
  • The detection apparatus 200 is mounted on the unmanned moving object, controls the docking station, that is, the induction apparatus, to display or hide the marker 340 serving as the reference for determining the moving direction, and determines the movement path based on the marker 340.
  • The detection apparatus 200 may include an infrared transmitter 210 that transmits the LED control signal to the induction apparatus 300; an image sensor 220 that acquires a detection image of the induction apparatus 300; a marker extractor 230 that, when the marker 340 is detected in the detection image, generates the first position information by calculating the distance and angle to the induction apparatus 300 based on the detected marker 340; a target detector 240 that, when the marker 340 is not detected in the detection image, generates the second position information by calculating the distance and angle to the induction apparatus 300 based on the induced light emission signal produced by the induced light emission driving of the induction apparatus 300; and a controller 250 that requests movement of the unmanned moving object according to the first or second position information.
  • The infrared transmitter 210 may be exposed on the surface of the unmanned moving object and, under the control of the controller 250, may transmit an infrared waveform signal to the infrared receiver 310 of the induction apparatus 300 mounted in the docking station.
  • That is, the detection apparatus 200 transmits to the induction apparatus 300 the LED control signal that controls the lighting and blinking of the LED light source 330 provided in the induction apparatus 300.
  • The image sensor 220 is a camera member that is exposed on the surface of the unmanned moving object and photographs the surroundings to generate images.
  • When photographing in the direction of the induction apparatus 300, the image sensor 220 may capture the marker 340 lit by the LED light source 330 and generate a detection image including the marker 340.
  • A low-resolution, low-cost image sensing member may be used as the image sensor 220; however, the sensor should be able to render the shape of an object relatively clearly at separations of up to at least 10 m.
  • The aforementioned separation distance may vary according to conditions such as the performance and size of the image sensor 220 and the size of the marker 340.
  • The marker extractor 230 and the target detector 240 may calculate the position information of the unmanned moving object by analyzing the images captured by the image sensor 220, according to the distance between the unmanned moving object and the docking station.
  • First, the marker extractor 230 may determine whether the marker 340 appears in the detection image generated by the image sensor 220. In the present invention, if the detection apparatus 200 comes within 10 m of the induction apparatus 300, an area corresponding to the marker will necessarily exist in the image, even when noise is taken into account.
  • Accordingly, when the marker extractor 230 extracts the marker 340 from the detection image, the detection apparatus can be regarded as being at short range, and the distance and angle to the docking station can be calculated through image analysis of the marker 340.
  • In detail, the marker extractor 230 may obtain the three-dimensional coordinates of the marker through a homography operation using the four vertex coordinates of the marker appearing in the detection image and the preset actual size of the marker 340, and may calculate the distance and angle corresponding to those coordinates.
  • Here, the marker 340 is assumed to be installed facing the same direction as the front of the docking station and its output terminal (21 of FIG. 1).
  • The marker 340 may be a figure in which a specific mark is surrounded by a white border, and after a binarization process its edge may be recognized as a rectangular closed curve.
  • The marker extractor 230 may recognize the marker 340 by binarizing the detection image with a built-in binarizer and detecting the area corresponding to the figure.
  • Since the marker has a quadrangular shape with four vertices, the marker extractor 230 extracts their coordinates on the two-dimensional plane and projects them into three dimensions through a homography operation, thereby extracting the three-dimensional coordinates of the marker, as sketched below.
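For illustration, the binarization, quadrilateral detection, and homography-based pose recovery described above could be sketched with OpenCV as follows. The Otsu threshold, the assumed marker edge length, the vertex ordering, and the camera intrinsics are all assumptions of this sketch, not values given in the patent.

```python
import cv2
import numpy as np

MARKER_SIZE_M = 0.10  # assumed preset actual marker size (edge length in meters)

def find_marker_quad(gray):
    """Binarize the detection image and return the four vertex coordinates of a
    rectangular closed contour, or None when no marker-like quadrangle is found."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            return approx.reshape(4, 2).astype(np.float32)
    return None

def marker_distance_and_angle(quad, camera_matrix, dist_coeffs):
    """Solve the 2D-to-3D projection for the four vertices (a PnP solve standing in
    for the homography operation) and return (distance, horizontal angle)."""
    half = MARKER_SIZE_M / 2.0
    object_pts = np.array([[-half, -half, 0], [half, -half, 0],
                           [half, half, 0], [-half, half, 0]], dtype=np.float32)
    # vertex order of `quad` is assumed to match `object_pts`
    ok, rvec, tvec = cv2.solvePnP(object_pts, quad, camera_matrix, dist_coeffs)
    if not ok:
        return None
    distance = float(np.linalg.norm(tvec))                         # range to the marker
    angle = float(np.degrees(np.arctan2(tvec[0][0], tvec[2][0])))  # bearing off the optical axis
    return distance, angle
```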
  • The three-dimensional coordinates of the marker reflect the distance from the detection apparatus 200 to the actual marker 340 of the induction apparatus 300 and the orientation of the actual marker 340 as seen from the detection apparatus 200, so the direction and location of the detection apparatus 200 can be specified in three-dimensional coordinates and the first position information can be generated.
  • Accordingly, the marker extractor 230 calculates the distance and angle to the induction apparatus 300 to obtain the movement path.
  • Next, the target detector 240 may be driven when the marker extractor 230 fails to extract the marker 340 because the detection apparatus 200 is more than 10 m away from the induction apparatus 300.
  • In this case, the controller 250 of the present invention can control the infrared transmitter 210 to transmit an LED control signal that drives the LED light source 330 in a specific lighting pattern.
  • According to the LED control signal, the LED light source 330 is driven with light having a specific lighting pattern, hereinafter referred to as the 'induced light emission signal'.
  • As an example, the induced light emission signal may be a lighting pattern in which the LED light source 330 repeatedly turns on and off at one-second intervals.
  • Although the shape of the marker 340 does not appear clearly in the detection images captured by the image sensor 220 while the induced light emission signal is output, the images at least include an area that repeatedly appears and disappears, and this can be used to calculate the distance and angle to the induction apparatus.
  • The target detector 240 may photograph the surroundings for a predetermined period, preferably 5 seconds, through the image sensor 220 and extract from the detection images the area in which the induced light emission signal is detected.
  • If no such area is found, the controller 250 may control the moving unit (11 of FIG. 1) to change the direction of the unmanned moving object by a certain angle and search again.
  • In detail, the target detector 240 applies a difference-image processing technique to the plurality of first images to detect the areas where changes occurred, hereinafter referred to as 'change areas', together with the number of changes in each change area, and performs a registration procedure that registers as 'target areas' the change areas whose change counts exceed a threshold, as in the sketch below.
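A minimal sketch of this registration procedure follows, assuming grayscale frames captured while the induced light emission signal is output; the difference threshold and minimum change count are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def register_target_areas(frames, diff_thresh=40, min_changes=3):
    """Difference-image pass over consecutive frames: count per-pixel changes and
    register as target areas the regions whose change count exceeds the threshold."""
    change_counts = np.zeros(frames[0].shape, dtype=np.int32)
    for prev, curr in zip(frames, frames[1:]):
        diff = cv2.absdiff(curr, prev)                         # difference image
        _, mask = cv2.threshold(diff, diff_thresh, 1, cv2.THRESH_BINARY)
        change_counts += mask.astype(np.int32)
    candidate = (change_counts > min_changes).astype(np.uint8) * 255
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]             # (x, y, w, h) per target area
```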
  • When one target area is registered, the target detector 240 may generate the second position information by calculating the distance and angle to the induction apparatus from the size and center coordinates of the target area.
  • The movement path is then obtained from the distance and angle to the induction apparatus 300 calculated by the target detector 240.
  • When there are two or more target areas, the target detector 240 assigns a weight to each target area and acquires the second images captured by the image sensor 220 during the non-output period of the induced light emission signal, then applies the difference-image processing technique to detect one or more change areas in the second images.
  • The target detector 240 deletes any target area that overlaps a detected change area, and repeats the change-area detection process using the first and second images up to a designated number of times until a single target area remains.
  • Each time a surviving target area is detected again, its weight is increased, and finally the one target area with the highest weight is used to calculate the distance and angle and obtain the movement path.
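The iterative filtering might then look like the sketch below; `register_target_areas` is the helper above, while `capture_frames` and the round count are assumed stand-ins for the capture and configuration described in the text.

```python
def select_target_area(capture_frames, max_rounds=5):
    """Repeat on/off difference passes until one target area remains, raising the
    weight of areas that keep reappearing and dropping those active while the LED is dark."""
    weighted = []  # entries of [bounding_box, weight]
    for _ in range(max_rounds):
        on_targets = register_target_areas(capture_frames(led_on=True))   # output period
        off_noise = register_target_areas(capture_frames(led_on=False))   # non-output period
        survivors = [t for t in on_targets
                     if not any(overlaps(t, n) for n in off_noise)]
        for t in survivors:
            for entry in weighted:
                if overlaps(entry[0], t):
                    entry[1] += 1          # re-detected area gains weight
                    break
            else:
                weighted.append([t, 1])
        if len(survivors) == 1:
            return survivors[0]
    return max(weighted, key=lambda e: e[1])[0] if weighted else None

def overlaps(a, b):
    """Axis-aligned intersection test for (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```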
  • The angle A between the detection apparatus 200 and the induction apparatus 300 calculated by the target detector 240 may satisfy Equation 1 (given as an image in the original filing).
  • In Equation 1, x is the center abscissa of the target area in the image, wh is half the horizontal size of the image containing the target area, and a is the angle of view of the image sensor.
  • The distance L between the detection apparatus 200 and the induction apparatus 300 calculated by the target detector 240 may satisfy Equation 2 (given as an image in the original filing).
  • In Equation 2, tl is the diagonal length of the target area in the image, Real_M is the maximum detectable distance, Real_m is the minimum detectable distance, Img_M is the image diagonal length at the maximum detectable distance, and Img_m is the image diagonal length at the minimum detectable distance.
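The equation images themselves are not reproduced in this text, so the sketch below implements one plausible reading consistent with the variable definitions: the bearing as a linear mapping of the target area's horizontal offset across the field of view, and the distance as a linear interpolation between the calibration pairs (Img_M, Real_M) and (Img_m, Real_m). Both forms are assumptions, not the patent's exact formulas.

```python
def angle_to_target(x, wh, a):
    """Equation 1 (assumed form): horizontal bearing of the target area.
    x: center abscissa of the target area, wh: half the image width,
    a: horizontal angle of view of the image sensor, in degrees."""
    return (x - wh) / wh * (a / 2.0)

def distance_to_target(tl, real_max, real_min, img_max, img_min):
    """Equation 2 (assumed form): interpolate the range from the diagonal length tl,
    between the diagonals observed at the maximum and minimum detectable distances."""
    t = (tl - img_max) / (img_min - img_max)   # 0 at the farthest point, 1 at the nearest
    return real_max + t * (real_min - real_max)
```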
  • Meanwhile, the controller 250 may control the overall operation of the detection apparatus 200 and of the unmanned moving object on which it is mounted.
  • In particular, the controller 250 controls the infrared transmitter 210 and the image sensor 220 to manage the transmission of the LED control signal and the generation of detection images, generates the first or second position information by analyzing the detection images through the marker extractor 230 or the target detector 240, and thereby obtains the movement path of the unmanned moving object and controls the moving unit (11 of FIG. 1); in short, it can compute the movement path and control movement through the algorithm appropriate to the distance between the unmanned moving object and the docking station.
  • The induction apparatus 300 is mounted on the docking station and, under the control of the detection apparatus 200, displays or hides the marker 340, which serves as the moving-direction reference, so that the unmanned moving object can determine its movement based on the marker 340 and approach the station.
  • In detail, the induction apparatus 300 may include an infrared receiver 310, which receives the LED control signal transmitted from the detection apparatus 200 and outputs a drive signal to the LED light source 330 according to that signal, and a marker unit 320, which includes at least one LED light source 330 and the marker 340 arranged on the light-emitting surface of the LED light source 330 with a rectangular border.
  • The infrared receiver 310 may receive the LED control signal transmitted from the detection apparatus 200 and, in response, control the lighting and blinking of the LED light source 330 of the marker unit 320.
  • The marker unit 320 outputs the induced light emission signal in the form of an optical signal, and it may include the LED light source 330, which emits the light, and the marker 340, on which a specific figure is formed.
  • The LED light source 330 may consist of one or more LED lamps and may emit light of a predetermined brightness toward the marker 340 attached to its front surface.
  • The marker 340 may consist of a substrate formed of a transparent resin material and a figure formed on one surface of the substrate.
  • The figure may have a shape in which a specific mark is surrounded by a white border, and its edge may be displayed as a rectangle forming a closed curve around the specific shape.
  • The marker 340 may be displayed to the outside as the light incident on its rear surface from the LED light source 330 passes through the transparent region.
  • With this structure, the induction apparatus 300 displays the marker 340 under the control of the detection apparatus 200, so that the detection apparatus 200 calculates its current position information based on the marker 340 and uses it as a reference for approaching the induction apparatus 300.
  • Hereinafter, in the description of the object guidance method, the subject performing each step is the above-described detection apparatus or induction apparatus and their components, even where not separately stated.
  • First, the controller of the detection apparatus transmits the LED control signal through the infrared transmitter so that the light source is lit (S100).
  • The infrared receiver of the induction apparatus receives the LED control signal and outputs the induced light emission signal through the marker unit, and the image sensor of the detection apparatus captures it to generate a detection image.
  • Next, the controller executes the marker extractor (S200) and attempts to detect the marker (S300).
  • When the marker is detected in step S300, the precise distance and angle between the detection apparatus and the marker are calculated (S400), and search completion is output to the controller (S800).
  • When the marker is not detected, the controller executes the target detector (S500) and attempts to detect the induction apparatus (S600). At this point, attempting to detect the induction apparatus substantially corresponds to attempting to detect the marker.
  • When the induction apparatus is detected in step S600, the target detector calculates the approximate distance and angle between the detection apparatus and the induction apparatus (S700), the controller requests the moving unit to move the unmanned moving object toward the induction apparatus (S1100), and the procedure resumes from step S100.
  • When the induction apparatus is not detected, the controller turns off the LED light source (S900) and receives a search failure from the target detector (S1000).
  • The controller then requests the moving unit to change the direction of the unmanned moving object (S1200), and the procedure proceeds again from step S100.
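Taken together, the S100 to S1200 loop can be summarized in the following sketch; every object and method name here is an assumed stand-in for the components described above, not an API defined by the patent.

```python
def guidance_loop(detector, mover):
    """Top-level loop over steps S100-S1200; runs until a precise marker fix is obtained."""
    while True:
        detector.send_led_control(on=True)              # S100: request marker lighting
        marker = detector.extract_marker()              # S200-S300: marker extractor
        if marker is not None:
            return detector.marker_pose(marker)         # S400/S800: precise fix, search complete
        target = detector.detect_target_area()          # S500-S600: target detector
        if target is not None:
            distance, angle = detector.target_pose(target)   # S700: rough fix
            mover.move_toward(angle, distance)          # S1100: close the gap, then retry
            continue
        detector.send_led_control(on=False)             # S900: turn the light source off
        mover.rotate_search_direction()                 # S1200: change heading and rescan
```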
  • Here, the steps of calculating the precise distance and angle can be subdivided into the following procedure.
  • The step of executing the marker extractor (S200) can be broken down into acquiring a marker image from the image sensor (S210) and extracting the marker from the detection image (S220).
  • If no marker is extracted, a search failure is output (S310).
  • If a marker is extracted, the marker extractor obtains the four coordinates corresponding to the vertices of the detected marker.
  • The two-dimensional marker is then projected into three-dimensional coordinates through a homography operation (S330).
  • The marker extractor calculates the distance and angle to the induction apparatus based on the three-dimensional coordinates of the marker (S340) and outputs them (S350). Thereafter, the process proceeds to step S800 described above.
  • The step S500 of executing the target detector may be divided into the following steps S510 to S550.
  • First, the controller transmits the LED control signal through the infrared transmitter and controls the lighting so that the induced light emission signal is output (S510).
  • The image sensor of the detection apparatus captures the induced light emission signal to generate detection images, that is, the first images, and the target detector searches the detection images for change areas for a predetermined period (S520).
  • Step S540 is a noise-removal step.
  • An area whose change count does not exceed the threshold is judged to be simple noise rather than an area corresponding to the marker, and it is removed from the set target areas.
  • Since the first images show the marker blinking at a one-second period over 5 seconds, while the second images correspond to a period in which the marker is not displayed, an area that changes fewer than three times is evidently noise.
  • When two or more target areas remain, weights are assigned to them (S550).
  • Thereafter, the target detector determines whether a target area exists (S610); if none exists, it outputs a search failure to the controller and the procedure resumes from step S100. Otherwise, the target detector determines whether exactly one target area is currently set (S620), and if so, it proceeds with the rough distance and angle calculation step S700 and the distance and angle output step S710.
  • If it is determined in step S620 that two or more target areas exist, it is determined whether the preset number of repetitions has been exceeded (S630). When it has, the target area with the maximum weight among those assigned to the plurality of target areas is selected (S640), and the procedure continues to steps S700 and S710 based on it.
  • When the number has not been exceeded, step S650 repeats the search procedure so that, of the two or more set target areas, all but one, which are noise, are removed through the noise-removal steps described above and only the remaining one is set as the target area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An object guidance system is disclosed. More specifically, the present invention relates to an object guidance system and method provided to a device such as a mobile robot to guide the device to a destination by controlling its moving direction, and to an unmanned moving body docking system using the same. According to one embodiment of the present invention, by means of a detection device that detects the location of a marker, mounted on an unmanned moving body, and a guidance device that displays the marker, mounted on a docking station, the unmanned moving body, when far away, is made to approach within a predetermined distance of the docking station using approximate first location information, and, when it comes near, precise second location information is generated so that the unmanned moving body may be seated in the docking station.

Description

Object guidance system and method for unmanned moving body
The present invention relates to an object guidance system, and more particularly to an object guidance system and method installed in a device such as a mobile robot to guide the device to a destination by controlling its direction of movement, and to a docking system for an unmanned moving object using the same.

In general, an unmanned moving object such as a drone or a robot runs on a built-in battery, so the battery must be recharged after the object has been driven for a certain time.

In particular, an unmanned moving object such as a robot cleaner works over a fixed working area without user intervention, so when its battery discharges below a certain level it must locate the docking station and return to it automatically to recharge.

To this end, a docking station that receives commercial AC power through terminal contact to charge the battery is installed on one side of the unmanned moving object's workspace. The docking station continuously transmits a signal from an infrared (IR) transmitter to announce its position; when the battery reaches a low-battery state, the unmanned moving object receives this signal through its infrared (IR) receiver, moves to the docking station, and docks with it according to the docking procedure to charge the battery.

A conventional docking system measures the position of and distance to the docking station from the angle of incidence or intensity of the infrared signal transmitted by the docking station's infrared transmitter and received by the unmanned moving object's infrared receiver, and returns to the station accordingly.

However, because this prior art controls the moving direction and speed of the unmanned moving object from the incident angle of the received infrared signal or from signal overlap, and in particular controls the moving speed from the intensity of the infrared signal detected by the infrared receiver, fine control of speed and direction according to the position of and distance to the docking station, the target point, is difficult.

In addition, the prior art requires expensive computation means for calculating the moving direction and speed of the unmanned moving object, as well as a plurality of sensors including infrared transceivers, which raises the manufacturing cost of the unmanned moving object and the docking station.
The present invention has been devised to solve the above problems. One object of the present invention is to provide an object guidance system and method for an unmanned moving object that allow an unmanned moving object, such as a robot cleaner, to move to and dock at a target point, such as a docking station, automatically.

Another object of the present invention is to provide an object guidance system and method for an unmanned moving object that apply a marker recognition technique to calculate the optimal moving direction and speed of the unmanned moving object using a minimum of sensors.
To address these problems, an object guidance system for an unmanned moving object according to an embodiment of the present invention is mounted on the unmanned moving object and a docking station, and includes an induction apparatus including a marker identified when a light source is lit, and a detection apparatus that detects the marker and calculates the distance and direction from the current position of the unmanned moving object to the induction apparatus, generating first and second position information of different precisions; the detection apparatus may request movement of the unmanned moving object along the movement route included in the first or second position information.

The induction apparatus may include a marker unit, which includes at least one LED light source and the marker arranged on the light-emitting surface of the LED light source with a rectangular border, and an infrared receiver, which receives the LED control signal transmitted from the detection apparatus and outputs a drive signal to the LED light source according to that signal.

The detection apparatus may include an infrared transmitter that transmits an LED control signal to the induction apparatus; an image sensor that acquires a detection image of the induction apparatus; a marker extractor that, when the marker is detected in the detection image, generates the first position information by calculating the distance and angle to the induction apparatus based on the detected marker; a target detector that, when the marker is not detected in the detection image, generates the second position information by calculating the distance and angle to the induction apparatus based on the induced light emission signal produced by the induced light emission driving of the induction apparatus; and a controller that requests movement of the unmanned moving object according to the first or second position information.

The marker extractor may obtain the three-dimensional coordinates of the marker through a homography operation using the four vertex coordinates contained in the marker image and the preset actual marker size, and may calculate the distance and angle corresponding to those coordinates.

The target detector may transmit, through the infrared transmitter, an LED control signal requesting the induced light emission driving, and may generate the second position information by calculating the distance and angle to a target area set on the basis of a plurality of images captured continuously by the image sensor over a predetermined period.

When the image sensor acquires a plurality of first images of the surroundings during the output period of the induced light emission signal, the target detector may apply a difference-image processing technique to detect one or more change areas in the first images and the number of changes in each change area, and may register as target areas the change areas whose change counts exceed a threshold.

The induced light emission signal may be a signal that makes the LED light source turn on and off repeatedly with a period of one second or less.

When there is one target area, the target detector calculates the distance and angle to the induction apparatus from the size and center coordinates of the target area; when there are two or more, it assigns a weight to each target area, detects change areas in second images acquired during the non-output period of the induced light emission signal by the difference-image processing technique, deletes any target area overlapping a detected change area, and repeats the change-area detection process using the first and second images up to a designated number of times until one target area remains, increasing the weight of repeatedly detected target areas so that the one with the highest weight is set.
The angle A to the induction apparatus may be calculated by Equation 1 (reproduced as image PCTKR2019005581-appb-I000001 in the original filing), where x is the center abscissa of the target area in the image, wh is half the horizontal size of the image containing the target area, and a is the angle of view of the image sensor.

The distance L to the induction apparatus may be calculated by Equation 2 (reproduced as image PCTKR2019005581-appb-I000002 in the original filing), where tl is the diagonal length of the target area in the image, Real_M is the maximum detectable distance, Real_m is the minimum detectable distance, Img_M is the image diagonal length at the maximum detectable distance, and Img_m is the image diagonal length at the minimum detectable distance.
In addition, to address the above problems, an object guidance method performed by the object guidance system for an unmanned moving object according to another aspect of the present invention includes the steps of: (a) turning on the light source; (b) executing the marker extractor to detect the marker; (c) when the marker is detected, calculating the distance and angle between the detection apparatus and the marker and outputting search completion; (d) when the marker is not detected, executing the target detector to detect the induction apparatus; (e) when the induction apparatus is detected, calculating the distance and angle between the detection apparatus and the induction apparatus, requesting the unmanned moving object to move toward the induction apparatus, and returning to step (a); (f) when the induction apparatus is not detected, turning off the light source and outputting search failure; and (g) requesting the unmanned moving object to change its search direction and returning to step (a).

Step (b) may include: (b1) acquiring a marker image from the image sensor; and (b2) extracting the marker. Step (c) may include: (c1) when no marker is detected, outputting search failure and proceeding to step (d); (c2) when a marker is detected, obtaining the four vertex coordinates from the detected marker; (c3) obtaining the three-dimensional coordinates of the marker through a homography operation using the obtained vertex coordinates and the preset actual marker size; (c4) calculating the distance and angle to the marker; and (c5) outputting the distance and angle.

Step (d) may include: (d1) turning on the light source; (d2) acquiring first images from the image sensor for a predetermined period and searching for change areas in the first images to set target areas; (d3) making the light source blink; (d4) acquiring second images from the image sensor for a predetermined period, searching the first images for change areas according to the second images to set one or more target areas, and deleting target areas in which changes occur continuously; (d5) assigning a weight to each target area; (d6) when no target area exists after steps (d2) and (d4), outputting target search failure and proceeding to step (f); (d7) when exactly one target area exists after steps (d2) and (d4), calculating the distance and angle to the induction apparatus, outputting them, and proceeding to step (e); (d8) in step (d7), when there are two or more target areas and the designated number of repetitions has not been exceeded, weighting the current target areas and returning to step (d1); and (d9) in step (d7), when there are two or more target areas and the designated number has been exceeded, selecting the one target area with the highest weight among the current target areas and proceeding to step (d7).
According to an embodiment of the present invention, a detection apparatus that detects the position of the marker and an induction apparatus that displays the marker, mounted on the unmanned moving object and the docking station, allow the unmanned moving object, when far away, to approach within a certain distance of the docking station using approximate first position information and, when it comes near, to be seated on the docking station using precisely generated second position information.

Also, according to an embodiment of the present invention, a low-cost image sensor that cannot identify a marker on a distant subject generates approximate first position information until the object reaches close range, and precise second position information thereafter, so that even a low-resolution, low-cost image sensor can guide the object precisely to the docking station.
FIG. 1 shows an application example of an object guidance system for an unmanned moving body according to an embodiment of the present invention.

FIG. 2 shows the structure of an object guidance system according to an embodiment of the present invention.

FIGS. 3 to 5 illustrate an object guidance method according to an embodiment of the present invention.
Before the description, it should be noted that throughout this specification, when a part is said to "comprise" or "include" a component, this does not exclude other components but means that other components may further be included, unless specifically stated otherwise.

The term "embodiment" herein means serving as an illustration, instance, or example, but the subject matter of the invention is not limited by such examples. Other similar terms such as "comprising", "including", and "having" are also used; when used in the claims, they are used inclusively, in a manner similar to the term "comprising", as open transition words that do not exclude any additional or other components.

Also, terms such as "...unit", "...module", "...device", and "...system" used throughout this specification denote a unit that processes an operation combining one or more functions, and may be implemented in hardware, software, or a combination of hardware and software.

The various techniques described herein may be implemented with hardware or software or, where appropriate, with a combination of both. As used herein, terms such as "unit", "module", "device", and "system" may likewise be treated as equivalent to a computer-related entity: hardware, a combination of hardware and software, software, or software in execution. In addition, an application program executed on a user terminal in the present invention may be organized in "units" and may be recorded in one physical memory in a readable, writable, and erasable form, or distributed across two or more memories or recording media.
Hereinafter, an object guidance system and method for an unmanned moving body according to embodiments of the present invention will be described with reference to the drawings.

FIG. 1 shows an application example of an object guidance system for an unmanned moving body according to an embodiment of the present invention.

Referring to FIG. 1, the object guidance system for an unmanned moving body according to an embodiment of the present invention can be applied to an unmanned moving body 10, such as a drone or a robot, and to a docking station 20 with which the unmanned moving body 10 docks.
The unmanned moving body 10 may in particular be a mobile robot or the like that travels on the ground, and a motherboard carrying circuits such as a control unit and a power supply unit may be built into it. A moving unit 11 including a motor may be provided at the bottom of the unmanned moving body 10, and a display 12 showing an operating screen may be provided on its front.

The docking station 20 is a device with which the unmanned moving body docks in order to recharge and the like, and it may be fixedly installed in a given area. An output terminal 21 that connects to a charging terminal installed on the unmanned moving body 10 may be provided on the front of the docking station 20.
In particular, the unmanned moving body 10 and the docking station 20 to which the object guidance system according to an embodiment of the present invention is applied are equipped with means for detecting each other's position and guiding movement. Of these, an image sensor 220 that photographs the surroundings to generate images, and an infrared transmitter 210 that sends an LED control signal to the docking station 20 for driving the marker unit 320, may be exposed on the front or rear of the unmanned moving body 10.

The docking station 20 may likewise expose an infrared receiver 310 that receives the LED control signal transmitted from the unmanned moving body 10, and a marker unit 320 including a marker, the identification means for guiding the unmanned moving body 10.

With this structure, when docking is initiated, the infrared transmitter 210 of the unmanned moving body 10 transmits the LED control signal, and the infrared receiver 310 of the docking station 20 receives it and lights the marker unit 320.

Also, as the image sensor 220 of the unmanned moving body 10 photographs the surroundings, it acquires an image of the marker unit 320 and calculates the distance and angle to the docking station 20 based on that image, thereby obtaining a movement path. The unmanned moving body 10 controls the moving unit 11 according to the obtained movement path to automatically approach and attempt to dock with the docking station 20.
Here, the object guidance system according to an embodiment of the present invention can adaptively and selectively execute a position-coordinate calculation algorithm according to the current separation distance between the unmanned moving body 10 and the docking station 20.

Specifically, given the performance of the image sensor, the object guidance system of the present invention applies separate algorithms within 10 m, the distance at which the shape of the marker unit 320 can be recognized accurately, and beyond 10 m, where recognition performance drops markedly, generating the first or second position information accordingly and controlling the movement of the unmanned moving body 10.

That is, at close range the marker shape of the marker unit 320 can be recognized relatively accurately, so the distance and angle between the unmanned moving body 10 and the docking station 20 are calculated precisely based on the marker shape to generate position information and move the unmanned moving body 10.

At long range, the direction corresponding to the marker of the marker unit 320 can be determined, but calculating an exact distance and angle is difficult. Accordingly, the distance and angle toward the region presumed to be the marker are calculated roughly to generate position information, the unmanned moving body 10 is moved to within close range of the docking station 20, and the search for the marker unit 320 is performed again to move the unmanned moving body 10 precisely.

Selectively executing the position-coordinate calculation algorithm adaptively according to the current separation distance between the unmanned moving body 10 and the docking station 20 is described in detail later.

As described above, the object guidance system according to an embodiment of the present invention is mounted on an unmanned moving body, such as a mobile robot, and on a docking station, and applies a different algorithm depending on the distance between them. Despite using an inexpensive image sensor whose recognition accuracy holds only within 10 m, it can move the unmanned moving body precisely to a destination at long range as well as at close range.
Hereinafter, the structure of the object guidance system according to an embodiment of the present invention is described in detail with reference to the drawings.

FIG. 2 shows the structure of an object guidance system according to an embodiment of the present invention.
Referring to FIG. 2, the object guidance system 100 according to an embodiment of the present invention may include a detection device 200, which detects the marker 340 and calculates the distance and direction from the current position of the unmanned moving body to the guidance device 300 to generate first and second position information of different precision, and a guidance device 300, which includes a marker 340 displayed when a light source 330 is lit. The detection device 200 may request movement of the unmanned moving body according to the movement path included in the first or second position information, which correspond to different distances.

Here, the detection device 200 and the guidance device 300 may each be implemented as an independent circuit member and mounted on the unmanned moving body (10 in FIG. 1) and the docking station (20 in FIG. 1), respectively.

Specifically, the detection device 200 is mounted on the unmanned moving body and controls the docking station, that is, the guidance device, so that the marker 340 serving as the reference for determining the direction of movement is displayed or hidden; with the marker 340 as reference, it judges the current position of the unmanned moving body relative to the docking station and determines the movement path.

To implement these functions, the detection device 200 may include: an infrared transmitter 210 that sends an LED control signal to the guidance device 300; an image sensor 220 that acquires a detection image of the guidance device 300; a marker extraction unit 230 that, when a marker 340 is detected in the detection image, calculates the distance and angle to the guidance device 300 based on the detected marker 340 and generates the first position information; a target detection unit 240 that, when no marker 340 is detected in the detection image, calculates the distance and angle to the guidance device 300 based on the guided-emission signal detected from the guided-emission driving of the guidance device 300 and generates the second position information; and a control unit 250 that requests movement of the unmanned moving body according to the first or second position information.
The infrared transmitter 210 may be exposed on the surface of the unmanned moving body and, under the control of the control unit 250, may emit an infrared-waveform signal to the surroundings for transmission to the infrared receiver 310 of the guidance device 300 mounted on the docking station.

Since the signal sent through the infrared transmitter 210 is an LED control signal that controls the lighting and blinking of the LED light source 330 provided in the guidance device 300, the detection device 200 outputs the LED control signal through the infrared transmitter 210 whenever it needs to detect the position of the guidance device 300.

The image sensor 220 is a camera member that is exposed on the surface of the unmanned moving body and photographs the surroundings to generate images.

In particular, when photographing in the direction of the guidance device 300, the image sensor 220 can sense the marker 340 as the LED light source 330 lights up, and can generate a detection image containing the marker 340.

A low-resolution, low-cost image sensing member may be used as the image sensor 220, provided that the sensor can render the shape of a subject relatively clearly at separation distances of at least up to 10 m.

Here, the aforementioned separation distance may vary according to conditions such as the performance and size of the image sensor 220 and the size of the marker 340.
The marker extraction unit 230 and the target detection unit 240 may analyze the imaging results of the image sensor 220 according to the distance between the unmanned moving body and the docking station to calculate the position information of the unmanned moving body.

First, the marker extraction unit 230 may determine whether the marker 340 appears in the detection image generated by the image sensor 220. In the present invention, if the detection device 200 has come within 10 m of the guidance device 300, a region corresponding to the marker necessarily exists even allowing for noise.

Accordingly, when the marker extraction unit 230 extracts the marker 340 from the detection image, it can regard the guidance device 300 as being located at close range, and can calculate the distance and angle to the docking station through image analysis of the marker 340.

To this end, the marker extraction unit 230 obtains the three-dimensional coordinates of the marker through a homography operation using the four vertex coordinates contained in the marker appearing in the detection image and the preset size of the actual marker 340, and calculates the distance and angle corresponding to those three-dimensional coordinates.

Here, it is presumed that the marker 340 is installed facing the same direction as the front of the docking station and its output terminal (21 in FIG. 1). The marker 340 may be a figure in which a specific mark is surrounded by a white border, and through a binarization process the edge of the marker can be recognized as a rectangular closed curve. The marker extraction unit 230 can recognize the marker 340 by binarizing the detection image with a built-in binarizer and detecting the region corresponding to the figure.

Also, since the marker is a rectangle with four vertices, the marker extraction unit 230 can extract its coordinates on the two-dimensional plane and project them into three dimensions through the homography operation, thereby extracting three-dimensional coordinates for the marker.

Here, since the three-dimensional coordinates of the marker correspond to the distance from the detection device 200 to the actual marker 340 of the guidance device 300 and to the view of the actual marker 340 as seen from the detection device 200, the direction and position of the detection device 200 can be specified in the proportional three-dimensional coordinate space, and the first position information can be generated. Using this, the marker extraction unit 230 calculates the distance and angle to the guidance device 300 and obtains the movement path. A sketch of this computation follows.
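As an illustration of this near-range step, the sketch below recovers range and bearing from the four marker corner pixels. The marker side length and camera intrinsics are assumptions, and OpenCV's planar pose solver stands in for the homography operation the text describes; it is a sketch under those assumptions, not the patent's implementation.

```python
# Minimal sketch of the near-range pose step, assuming a 10 cm square
# marker and illustrative camera intrinsics (not the patent's values).
import cv2
import numpy as np

MARKER_SIZE = 0.10  # assumed real marker side length, meters

K = np.array([[600.0,   0.0, 320.0],   # assumed focal lengths and center
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)                      # assume negligible lens distortion

def marker_distance_angle(corners_2d):
    """corners_2d: 4x2 pixel coordinates of the marker vertices, ordered
    top-left, top-right, bottom-right, bottom-left."""
    s = MARKER_SIZE / 2.0
    # Marker corners in the marker's own plane (z = 0).
    obj_pts = np.array([[-s,  s, 0.0], [ s,  s, 0.0],
                        [ s, -s, 0.0], [-s, -s, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj_pts,
                                  np.asarray(corners_2d, dtype=np.float64),
                                  K, DIST)
    if not ok:
        return None                     # treated as a search failure
    t = tvec.ravel()
    distance = float(np.linalg.norm(t))                 # range, meters
    angle = float(np.degrees(np.arctan2(t[0], t[2])))   # horizontal bearing
    return distance, angle
```

With the pose in hand, the distance and bearing map directly to the movement path described in the text.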
The target detection unit 240 may run when the marker 340 is not extracted by the marker extraction unit 230, which occurs when the detection device 200 is more than 10 m away from the guidance device 300.

In this case, even if a region corresponding to the marker 340 exists in the detection image from the image sensor 220, it is difficult to distinguish from noise. To overcome this limitation, the control unit 250 of the present invention can direct the infrared transmitter 210 to send an LED control signal for driving the LED light source 330 in a specific lighting pattern. Under this LED control signal, the LED light source 330 is driven to emit light with the specific lighting pattern, hereinafter referred to as the 'guided-emission signal'.

Preferably, the guided-emission signal may be a lighting pattern in which the LED light source 330 repeatedly turns on and off with a period of one second.

Under this guided-emission signal, the detection image sensed by the image sensor 220 does not clearly show the shape of the marker 340, but it does contain at least a region that repeatedly appears in and disappears from the detection image, and this can be used to calculate the approximate distance and angle of the guidance device.
Specifically, the target detection unit 240 can photograph the surroundings through the image sensor 220 for a certain period, preferably 5 seconds, and extract the region of the detection image in which the guided-emission signal is detected.

If no region presumed to carry the guided-emission signal is extracted by the target detection unit 240, the control unit 250 can control the moving unit (11 in FIG. 1) to turn the unmanned moving body through a certain angle.

When the image sensor 220 acquires a plurality of detection images of the surroundings during the output period of the guided-emission signal (hereinafter, the 'first images'), a difference-image processing technique is applied to detect one or more regions of change in the first images (hereinafter, 'change regions') together with the number of changes per change region, and a registration procedure is performed that registers as 'target regions' the one or more change regions whose change count exceeds a threshold.

If there is exactly one registered target region, the target detection unit 240 can calculate the distance and angle to the guidance device from the size and center coordinates of the target region, respectively, to generate the approximate second position information. The target detection unit 240 uses this to calculate the distance and angle to the guidance device 300 and obtain the movement path.

If there are two or more registered target regions, the target detection unit 240 assigns a weight to each target region and, upon acquiring the detection images photographed by the image sensor 220 during the non-output period of the guided-emission signal (hereinafter, the 'second images'), applies the difference-image processing technique to detect one or more change regions in the second images.

The target detection unit 240 then deletes any target region that overlaps a detected change region, and repeats the change-region detection process using the first and second images, up to a designated number of times, until only one target region remains.

As this detection process repeats, a surviving target region that is detected again has its weight increased; when the repetitions end, the single target region with the highest weight is used to calculate the distance and angle to the guidance device 300 and obtain the movement path. A sketch of this blink-based search appears below.
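As an illustration of this registration-and-pruning procedure, the sketch below builds per-pixel change counts from consecutive difference images and drops regions that keep changing while the light is off. The threshold values, frame handling, and function names are illustrative assumptions, not the patent's code.

```python
# Hypothetical sketch of the far-range, blink-based target search
# (illustrative thresholds; not the patent's code).
import cv2
import numpy as np

CHANGE_THRESH = 40   # assumed per-pixel difference threshold
MIN_BLINKS = 3       # a 1 s blink over a 5 s window changes state >= 3 times

def find_target_regions(frames):
    """frames: grayscale frames captured over the observation window.
    Returns bounding boxes of regions whose change count exceeds MIN_BLINKS."""
    counts = np.zeros(frames[0].shape, dtype=np.int32)
    for prev, cur in zip(frames, frames[1:]):
        diff = cv2.absdiff(cur, prev)                      # difference image
        _, mask = cv2.threshold(diff, CHANGE_THRESH, 1, cv2.THRESH_BINARY)
        counts += mask.astype(np.int32)
    blink_mask = (counts >= MIN_BLINKS).astype(np.uint8) * 255
    contours, _ = cv2.findContours(blink_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

def prune_with_off_frames(targets, frames_off):
    """Drop targets that still change while the LED is off: such regions
    are ambient noise (people, screens), not the blinking marker."""
    noise = find_target_regions(frames_off)
    def overlaps(a, b):
        ax, ay, aw, ah = a; bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
    return [t for t in targets if not any(overlaps(t, n) for n in noise)]
```

In the weighted variant the text describes, a surviving box would accumulate weight on each repetition instead of being accepted immediately.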
The angle A between the detection device 200 and the guidance device 300 calculated by the target detection unit 240 may satisfy Equation 1 below.
[Equation 1: published as image PCTKR2019005581-appb-M000001; not reproduced in the text]
In Equation 1, 'x' is the center abscissa of the image of the target region, 'wh' is 1/2 of the horizontal size of the image of the target region, and 'a' is the angle of view of the image sensor.

Also, the distance L between the detection device 200 and the guidance device 300 calculated by the target detection unit 240 may satisfy Equation 2 below.
[Equation 2: published as image PCTKR2019005581-appb-M000002; not reproduced in the text]
In Equation 2, 'tl' is the diagonal length of the image of the target region, 'Real_M' is the maximum detectable distance, 'Real_m' is the minimum detectable distance, 'Img_M' is the diagonal length of the image at the maximum detectable distance, and 'Img_m' is the diagonal length of the image at the minimum detectable distance.
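The published forms of Equations 1 and 2 survive only as images. Under the stated variable definitions, one plausible reading is a pinhole-style bearing for Equation 1 and a linear interpolation between the detectable extremes for Equation 2; the sketch below encodes that reading as an assumption, not as the patent's verified formulas.

```python
# Hypothetical reconstruction of Equations 1 and 2 from their variable
# definitions alone; the published equations are images, so treat these
# exact forms as assumptions.
def bearing_deg(x: float, wh: float, a: float) -> float:
    """Equation 1 (assumed form): horizontal offset of the target's center
    from the image center, scaled to the sensor's angle of view a."""
    return (x - wh) / wh * (a / 2.0)

def range_m(tl: float, real_M: float, real_m: float,
            img_M: float, img_m: float) -> float:
    """Equation 2 (assumed form): linear interpolation of the target's
    diagonal length tl between the diagonals observed at the maximum
    (img_M) and minimum (img_m) detectable distances."""
    return real_M + (tl - img_M) * (real_m - real_M) / (img_m - img_M)
```

The interpolation is consistent with the endpoint definitions: at tl = Img_M it returns Real_M, and at tl = Img_m it returns Real_m.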
The control unit 250 can control the overall operation of the detection device 200 and of the unmanned moving body on which the detection device 200 is mounted.

In particular, the control unit 250 controls the infrared transmitter 210 and the image sensor 220 to govern transmission of the LED control signal and generation of the detection image, performs detection-image analysis through the marker extraction unit 230 or the target detection unit 240 according to the detection image to generate the first or second position information, and thereby obtains the movement path of the unmanned moving body and controls the moving unit (11 in FIG. 1), so that the movement path is calculated, and movement is controlled, through the algorithm appropriate to the distance between the unmanned moving body and the docking station.
Hereinafter, the structure of the guidance device 300 of the object guidance system of the present invention, the counterpart to the detection device 200, is described.

The guidance device 300 is mounted on the docking station and, under the control of the detection device 200, displays or hides the marker 340 that serves as the movement-direction reference, so that the unmanned moving body can determine and follow a movement path with the marker 340 as reference.

To implement these functions, the guidance device 300 may include an infrared receiver 310 that receives the LED control signal sent from the detection device 200 and outputs a drive signal to the LED light source 330 according to the LED control signal, and a marker unit 320 including one or more LED light sources 330 and a marker 340, of a shape with a rectangular border, disposed on the light-emitting face of the LED light source 330.

The infrared receiver 310 can receive the LED control signal transmitted from the detection device 200 and control the lighting and blinking of the LED light source 330 of the marker unit 320 accordingly.

The marker unit 320 outputs the guided-emission signal in the form of an optical signal and may include the LED light source 330, which emits light, and the marker 340, on which a specific figure is formed.

The LED light source 330 may consist of one or more LED lamps and may emit light of a given luminance toward the marker 340 attached to its front.

The marker 340 may consist of a substrate of transparent resin material and a figure formed on one face of the substrate. The figure may take the form of a specific mark surrounded by a white border, and its edge may be displayed as a rectangle forming a closed curve of a specific shape. The marker 340 can be displayed externally as light incident from behind, via the LED light source 330, passes through its transparent region.

With the structure described above, the guidance device 300 displays the marker 340 under the control of the detection device 200, so that the detection device 200 calculates its current position information with the marker 340 as reference and is thereby given a reference for approaching the guidance device 300.
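On the guidance-device side the logic reduces to a small command interpreter: receive the LED control signal and drive the light source steadily on, off, or blinking. The sketch below models this behavior abstractly; the command codes and the pin-like interface are assumptions, since the patent does not specify a wire format.

```python
# Hypothetical sketch of the guidance-device behavior (command codes and
# the pin interface are illustrative assumptions, not the patent's spec).
import time

CMD_ON, CMD_OFF, CMD_BLINK = 0x01, 0x02, 0x03  # assumed command codes

class MarkerUnit:
    def __init__(self, led_pin):
        self.led = led_pin  # object exposing set(bool), e.g. a GPIO wrapper

    def handle(self, cmd: int, duration_s: float = 5.0):
        if cmd == CMD_ON:
            self.led.set(True)           # steady marker for near-range pose
        elif cmd == CMD_OFF:
            self.led.set(False)
        elif cmd == CMD_BLINK:
            # 1 s period guided-emission signal for the far-range search
            end = time.monotonic() + duration_s
            state = True
            while time.monotonic() < end:
                self.led.set(state)
                state = not state
                time.sleep(0.5)          # half-period of the 1 s blink
```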
Hereinafter, an object guidance method performed by the object guidance system according to an embodiment of the present invention is described with reference to the drawings.

FIGS. 3 to 5 illustrate an object guidance method according to an embodiment of the present invention. In the following description, unless otherwise noted, each step is performed by the detection device or guidance device described above and their components.
First, referring to FIG. 3, in the object guidance method of the present invention, when an unmanned moving body carrying the detection device attempts to dock with a docking station carrying the guidance device, the control unit of the detection device transmits the LED control signal through the infrared transmitter, thereby controlling the light source to turn on (S100).

Accordingly, the infrared receiver of the guidance device receives the LED control signal and outputs the guided-emission signal through the marker unit, which the image sensor of the detection device photographs to generate the detection image. When the detection image has been generated, the control unit runs the marker extraction unit (S200) and attempts to detect the marker (S300).

If the marker is detected in step S300, the precise distance and angle between the detection device and the marker are calculated (S400), and a search-complete notification is output to the control unit (S800).

If the marker is not detected in step S300, the control unit runs the target detection unit (S500) and attempts to detect the guidance device (S600). Here, attempting to detect the guidance device in effect corresponds to attempting to detect the marker.

Next, if the guidance device is detected in step S600, the target detection unit calculates the approximate distance and angle between the detection device and the guidance device (S700), the control unit requests the moving unit to move the unmanned moving body toward the guidance device (S1100), and the procedure resumes from step S100.

If the guidance device is not detected in step S600, the control unit turns off the LED light source (S900) and receives a search-failure output from the target detection unit (S1000). The control unit then requests the moving unit to change the current heading of the unmanned moving body (S1200), and the procedure resumes from step S100.
Meanwhile, in the steps described above, when the unmanned moving body is located close to the docking station, the step in which the marker extraction unit of the present invention calculates the precise distance and angle can be subdivided into the following procedure.

Specifically, referring to FIG. 4, the step of running the marker extraction unit (S200) can be subdivided into the marker extraction unit acquiring the detection image from the image sensor (S210) and extracting the marker within the detection image (S220).

Then, if no marker is detected in the marker detection step (S300), a search failure is output (S310); if a marker is detected, the marker extraction unit obtains the four-point coordinates corresponding to the vertices of the detected marker (S320) and projects the two-dimensional marker into three-dimensional coordinates through a homography operation (S330).

The marker extraction unit then calculates the distance and angle to the guidance device based on the three-dimensional coordinates of the marker (S340) and outputs the distance and angle (S350). The procedure then proceeds to step S800 described above.
Also, referring to FIG. 5, the step of running the target detection unit (S500) can be subdivided into the following steps S510 to S550.

Specifically, when the control unit transmits the LED control signal through the infrared transmitter so that the marker unit is lit to output the guided-emission signal (S510), the image sensor of the detection device receives that guided-emission signal and generates a detection image, namely the first image, and the target detection unit searches the detection image for change regions over a certain period (S520).

Next, the control unit transmits the LED control signal to turn the marker unit off (S530), and the target detection unit accordingly uses the detection image for a certain period, namely the second image, to delete from the change regions of the first image those regions whose change persists against the second image (S540). Step S540 is a noise-removal step: for the two or more target regions set from the first image, if the change count in the corresponding region is three or fewer, that region is judged to be simple noise rather than a region corresponding to the marker and is removed from the set target regions.

That is, the second image covers the period in which the marker is not displayed, whereas the first image shows the marker blinking with a one-second period over five seconds; a true marker region therefore changes at least three times, and any region changing fewer times is clearly noise.

Next, weights are assigned to the two or more target regions (S550).

Then, in the step of attempting to detect the guidance device (S600), the target detection unit determines whether any target region exists (S610) and, if none exists, outputs a search failure to the control unit (S1000). Otherwise, the target detection unit determines whether exactly one target region is currently set (S620); if so, the approximate distance-and-angle calculation step (S700) and the distance-and-angle output step (S710) proceed in sequence.

If it is determined in step S620 that two or more target regions exist, whether the preset designated count has been exceeded is determined (S630); if it has, the target region with the highest weight among the weighted target regions is selected (S640), and steps S700 and S710 proceed on that basis.

If the designated count has not yet been exceeded in step S630, then whenever one of the currently set target regions is re-detected, its weight is increased (S650), and the procedure is performed again from step S510. Step S650 exists because all but one of the two or more set target regions are noise: the repeated search attempts to remove them through the noise-removal steps described above and to settle on just one as the target region.
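Read together, FIGS. 3 to 5 describe a retrying control loop. As a way to see how the pieces compose, the sketch below strings together the hypothetical helpers from the earlier sketches; the robot and station interfaces, detect_marker(), and all constants are illustrative assumptions, not the patent's API.

```python
# Hypothetical top-level docking loop following the FIG. 3 flow (S100-S1200).
# It composes the helpers sketched above; the robot / station_ir objects and
# detect_marker() are assumed stand-ins.
import math

FOV_DEG = 60.0                 # assumed image-sensor angle of view
REAL_M, REAL_m = 30.0, 10.0    # assumed detectable max/min distances (m)
IMG_M, IMG_m = 8.0, 40.0       # assumed target diagonals (px) at those ranges

def docking_loop(robot, station_ir, detect_marker):
    while not robot.docked():
        station_ir.send(CMD_ON)                    # S100: light the marker
        corners = detect_marker(robot.capture())   # S200-S300
        if corners is not None:
            pose = marker_distance_angle(corners)  # S400: precise pose
            if pose is not None:
                robot.move_toward(*pose)           # S1100
                continue
        station_ir.send(CMD_BLINK)                 # S500: far-range search
        frames_on = robot.capture_burst(seconds=5)
        station_ir.send(CMD_OFF)                   # S530 / S900
        frames_off = robot.capture_burst(seconds=5)
        targets = prune_with_off_frames(
            find_target_regions(frames_on), frames_off)
        if targets:                                # S600-S700: rough pose
            x, y, w, h = targets[0]
            half_w = frames_on[0].shape[1] / 2.0
            ang = bearing_deg(x + w / 2.0, half_w, FOV_DEG)
            rng = range_m(math.hypot(w, h), REAL_M, REAL_m, IMG_M, IMG_m)
            robot.move_toward(rng, ang)            # S1100
        else:
            robot.rotate(degrees=30)               # S1200: change heading
```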
Although many details are set forth in the foregoing description, they should be construed as illustrating preferred embodiments rather than limiting the scope of the invention. The invention should therefore be defined not by the described embodiments but by the claims and their equivalents.

Claims (13)

  1. An object guidance system mounted on an unmanned moving body and a docking station, the system comprising:
    a guidance device including a marker identified when a light source is lit; and
    a detection device that detects the marker and calculates a distance and direction to the guidance device from the current position of the unmanned moving body to generate first and second position information of different precision,
    wherein the detection device
    requests movement of the unmanned moving body according to a movement path included in the first or second position information.
  2. The object guidance system of claim 1, wherein the guidance device comprises:
    a marker unit including one or more LED light sources and the marker, the marker being disposed on a light-emitting face of the LED light sources and having a shape with a rectangular border; and
    an infrared receiver that receives an LED control signal transmitted from the detection device and outputs a drive signal to the LED light sources according to the LED control signal.
  3. The object guidance system of claim 1, wherein the detection device comprises:
    an infrared transmitter that transmits an LED control signal to the guidance device;
    an image sensor that acquires a detection image of the guidance device;
    a marker extraction unit that, when the marker is detected in the detection image, calculates the distance and angle to the guidance device based on the detected marker and generates the first position information;
    a target detection unit that, when the marker is not detected in the detection image, calculates the distance and angle to the guidance device based on a guided-emission signal detected from guided-emission driving of the guidance device and generates the second position information; and
    a control unit that requests movement of the unmanned moving body according to the first or second position information.
  4. The object guidance system of claim 3, wherein the marker extraction unit obtains three-dimensional coordinates of the marker through a homography operation using four vertex coordinates contained in the marker image and a preset actual marker size, and calculates the distance and angle corresponding to the three-dimensional coordinates.
  5. The object guidance system of claim 3, wherein the target detection unit causes the infrared transmitter to send the LED control signal requesting the guided-emission driving, and calculates the distance and angle to a target region set on the basis of a plurality of images captured continuously over a certain period through the image sensor, thereby generating the second position information.
  6. The object guidance system of claim 5, wherein, when the image sensor acquires a plurality of first images of the surroundings during an output period of the guided-emission signal, the target detection unit applies a difference-image processing technique to detect one or more change regions in the plurality of first images and a change count for each change region, and performs a registration procedure that registers as target regions the one or more change regions whose change count exceeds a threshold.
  7. The object guidance system of claim 6, wherein the guided-emission signal is a signal that causes the LED light source to repeatedly turn on and off with a period of one second or less.
  8. The object guidance system of claim 7, wherein the target detection unit:
    when there is one target region, calculates the distance and angle to the guidance device corresponding to the size and center coordinates of the target region, respectively;
    when there are two or more target regions, assigns a weight to each target region and, upon the image sensor acquiring a plurality of second images of the surroundings during a non-output period of the guided-emission signal, applies the difference-image processing technique to detect one or more change regions in the second images; and
    deletes any target region overlapping a detected change region, repeats the change-region detection process using the first and second images a designated number of times until one target region remains, increases the weight of any surviving target region that is detected again, and sets the one target region having the highest weight.
  9. The object guidance system of claim 8, wherein the angle A to the guidance device is calculated by the following equation,
    [Equation: published as image PCTKR2019005581-appb-I000003; not reproduced in the text]
    where x is the center abscissa of the image of the target region, wh is 1/2 of the horizontal size of the image of the target region, and a is the angle of view of the image sensor.
  10. The object guidance system of claim 8, wherein the distance L to the guidance device is calculated by the following equation,
    [Equation: published as image PCTKR2019005581-appb-I000004; not reproduced in the text]
    where tl is the diagonal length of the image of the target region, Real_M is the maximum detectable distance, Real_m is the minimum detectable distance, Img_M is the diagonal length of the image at the maximum detectable distance, and Img_m is the diagonal length of the image at the minimum detectable distance.
  11. An object guidance method performed by the object guidance system for an unmanned moving body according to claim 1, the method comprising:
    (a) controlling a light source to turn on;
    (b) running a marker extraction unit to detect a marker;
    (c) if the marker is detected, calculating a distance and angle between a detection device and the marker and outputting a search completion;
    (d) if the marker is not detected, running a target detection unit to detect a guidance device;
    (e) if the guidance device is detected, calculating a distance and angle between the detection device and the guidance device, requesting that the unmanned moving body move toward the guidance device, and proceeding to step (a);
    (f) if the guidance device is not detected, turning off the light source and outputting a search failure; and
    (g) requesting that the unmanned moving body change its search direction, and proceeding to step (a).
  12. The method of claim 11, wherein step (b) comprises:
    (b1) acquiring a marker image from an image sensor; and
    (b2) extracting the marker,
    and step (c) comprises:
    (c1) if the marker is not detected, outputting a search failure and proceeding to step (d);
    (c2) if the marker is detected, acquiring four-point coordinates from the detected marker;
    (c3) obtaining three-dimensional coordinates of the marker through a homography operation using the acquired four-point coordinates and a preset actual marker size;
    (c4) calculating the distance and angle to the marker; and
    (c5) outputting the distance and angle.
  13. The method of claim 11, wherein step (d) comprises:
    (d1) controlling the light source to turn on;
    (d2) acquiring a first image from the image sensor for a certain period and searching for change regions in the first image to set target regions;
    (d3) controlling the light source to blink;
    (d4) acquiring a second image from the image sensor for a certain period, searching for change regions of the first image relative to the second image to set one or more target regions, and deleting any target region in which change occurs continuously;
    (d5) assigning weights to the target regions;
    (d6) in steps (d2) and (d4), if no target region exists, outputting a target search failure and proceeding to step (f);
    (d7) in steps (d2) and (d4), if only one target region exists, calculating the distance and angle to the guidance device, outputting the calculated distance and angle, and proceeding to step (e);
    (d8) in step (d7), if there are two or more target regions and a designated count has not been exceeded, weighting the current target regions and proceeding to step (d1); and
    (d9) in step (d7), if there are two or more target regions and the designated count has been exceeded, selecting the one target region having the highest weight among the current target regions and proceeding to step (d7).
PCT/KR2019/005581 2018-05-11 2019-05-09 Object guidance system and method for unmanned moving body WO2019216673A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0054361 2018-05-11
KR1020180054361A KR20190129551A (en) 2018-05-11 2018-05-11 System and method for guiding object for unmenned moving body

Publications (1)

Publication Number Publication Date
WO2019216673A1 (en) 2019-11-14

Family

ID=68468375




Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP2004151924A (en) * 2002-10-30 2004-05-27 Sony Corp Autonomous mobile robot and control method for the same
KR20070061079A (en) * 2005-12-08 2007-06-13 한국전자통신연구원 Localization system of mobile robot based on camera and landmarks and method there of
KR100811887B1 (en) * 2006-09-28 2008-03-10 한국전자통신연구원 Apparatus and method for providing selectively position information having steps accuracy in autonomous mobile robot
KR20090081236A (en) * 2008-01-23 2009-07-28 삼성전자주식회사 Returning Method of Robot Cleaner System
JP2017505254A (en) * 2014-08-08 2017-02-16 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd UAV battery power backup system and method

Also Published As

Publication number Publication date
KR20190129551A (en) 2019-11-20


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19800061; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19800061; Country of ref document: EP; Kind code of ref document: A1)