KR101747350B1 - Method for recognizing coordinates of object for visual servoing - Google Patents

Method for recognizing coordinates of object for visual servoing

Info

Publication number
KR101747350B1
Authority
KR
South Korea
Prior art keywords
camera
angle
plane
target object
coordinates
Prior art date
Application number
KR1020150143922A
Other languages
Korean (ko)
Other versions
KR20170044346A (en)
Inventor
이장명
황요섭
김윤기
이동혁
윤하늘
하현욱
김덕수
Original Assignee
부산대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 부산대학교 산학협력단 filed Critical 부산대학교 산학협력단
Priority to KR1020150143922A priority Critical patent/KR101747350B1/en
Publication of KR20170044346A publication Critical patent/KR20170044346A/en
Application granted granted Critical
Publication of KR101747350B1 publication Critical patent/KR101747350B1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/021: Optical sensing devices
    • B25J19/023: Optical sensing devices including video camera means
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

There is provided a method of recognizing coordinates of a target object for visual servoing, comprising the steps of: acquiring an image by a predetermined camera installed on at least one arm of a manipulator, at a position spaced a predetermined distance from the target object; extracting a feature point or feature region corresponding to the object from the acquired image; specifying a target object based on the extracted feature point or feature region; setting a predetermined X-Y plane with the position of the camera as the origin; measuring an angle between the camera and the object with respect to the X-axis and Y-axis on the X-Y plane; and estimating the coordinates of the object in the X-Y plane based on the measured angle.
Accordingly, as long as the target object for visual servoing is included in the imaging range of a camera installed on the manipulator, the coordinates of the object can be estimated regardless of the number of cameras and their mounting positions.

Description

METHOD FOR RECOGNIZING COORDINATES OF OBJECT FOR VISUAL SERVOING

The present invention relates to a method of recognizing coordinates of a target object for visual servoing, and more particularly, to a method of acquiring the x and y coordinates of an object located a predetermined distance away, using cameras installed on both arms of a manipulator.

Methods of measuring the distance to an object with a camera include single-camera methods that calculate distance from object size or focus, structured-illumination methods that project a laser onto the object, and triangulation methods using dual cameras. In particular, the field of stereo vision, which obtains relative three-dimensional coordinates between a robot and an object by triangulation with a stereo camera, has advanced steadily, and in recent years the technique of visual servoing has attracted much attention.

In general, visual servoing is divided into position-based visual servoing (PBVS), which estimates the three-dimensional relative position of the target from its position on the two-dimensional image plane and controls the unmanned system so as to minimize that positional error, and image-based visual servoing (IBVS), which computes the control input directly from the difference between image features in the current and reference images.

Among them, PBVS acquires the position information of objects in the camera frame and controls based on it; most stereo vision systems currently in use employ this technique. The PBVS method has the advantage of obtaining accurate three-dimensional coordinates by minimizing the positional difference between the two cameras, but the process is complicated, the viewing angle is limited, and the operating speed is slowed by the real-time calibration operation.

On the other hand, IBVS operates on the difference between a reference image and the observed image of the tracked object, much as a person's two pupils converge on a single point when looking at an object, with the distance inferred from the rotation angle. In contrast to the PBVS method, in which the optical axes of the cameras are kept parallel, the IBVS method does not restrict the viewing angle because each optical axis moves independently.

KR 10-1275823 B1
KR 10-0504215 B1

SUMMARY OF THE INVENTION The object of the present invention is to provide a method of estimating the coordinates of a target object by coordinate transformation using relative polar coordinates when two cameras capture the object at the same point in time, or by an IBVS approach using the coordinate difference when one camera captures it at two different points in time.

The technical objects of the present invention are not limited to those mentioned above; other technical objects not mentioned here will be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, there is provided a method of recognizing coordinates of a target object for visual servoing, the method comprising: acquiring an image by a predetermined camera installed on at least one arm of a manipulator, at a position spaced a predetermined distance from the target object; extracting a feature point or feature region corresponding to the object from the acquired image; specifying a target object based on the extracted feature point or feature region; setting a predetermined X-Y plane having the position of the camera as the origin; measuring an angle between the camera and the object with respect to the X-axis and Y-axis on the X-Y plane; and estimating the coordinates of the object in the X-Y plane based on the measured angle.

According to the present invention, the coordinates of a target object can be estimated simply, using the relative polar coordinate information of each camera contained in images acquired at the same time from two cameras capturing a predetermined target object for visual servoing, or in images acquired at different points in time from a single camera.

According to the present invention, the coordinates of the object can be estimated regardless of the number of cameras and their installation positions, as long as the target object for visual servoing is included in the imaging range of a camera installed on the manipulator.

FIG. 1 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to another embodiment of the present invention;
FIG. 3 is a view showing a state in which at least some of the feature points or feature regions of the object extracted from the images obtained by the first camera and the second camera of FIG. 2 match each other;
FIG. 4 is a view for explaining the distance between the first camera and the second camera of FIG. 2 and the measured first, second, and third angles;
FIG. 5 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to yet another embodiment of the present invention;
FIG. 6 is a view for explaining the displacement of the camera and the first and second angles measured at the first and second positions of FIG. 5.

The foregoing and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. The advantages and features of the present invention, and the manner of achieving them, will become apparent from the embodiments described in detail below with reference to the accompanying drawings. Like reference numerals refer to like elements throughout the specification.

FIG. 1 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to an embodiment of the present invention.

Referring to FIG. 1, a method of recognizing coordinates of a target object for visual servoing according to an embodiment of the present invention includes: acquiring an image by a camera installed on at least one arm of a manipulator (S110); extracting a feature point or feature region corresponding to the object from the acquired image (S120); specifying a target object based on the extracted feature point or feature region (S130); setting an X-Y plane with the position of the camera as the origin (S140); measuring the angle between the camera and the target object with respect to the X-axis and Y-axis on the X-Y plane (S150); and estimating the coordinates of the target object in the X-Y plane based on the measured angle (S160).

Since the coordinate recognition method according to the present invention estimates the coordinate information of the object from relative coordinate information, the camera in step S110 can be installed not only at the end-effector of the manipulator but at any position on the arm.

Here, the feature point or feature region in step S120 may be extracted by Harris corner detection with a predetermined threshold value T_i applied, or based on the contour information of the object detected by an edge-detection algorithm such as a second-derivative (Laplacian) operator. Such detection algorithms are obvious to those skilled in the art and will not be described further.
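As an illustrative sketch only (the patent does not disclose an implementation), the Harris corner response mentioned above can be computed with NumPy as follows; the window size `win` and constant `k` are conventional choices, not values taken from the patent.

```python
import numpy as np

def box_sum(a, win):
    # Sum each win x win neighbourhood via a summed-area table (zero-padded).
    pad = win // 2
    p = np.pad(a, pad)
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = a.shape
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w])

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)         # image gradients (rows, cols)
    Sxx = box_sum(Ix * Ix, win)       # structure-tensor entries,
    Syy = box_sum(Iy * Iy, win)       # summed over the local window
    Sxy = box_sum(Ix * Iy, win)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Corners of a bright square give a positive response, edges a negative
# one, and flat regions zero, so thresholding R at some T keeps only
# corner-like feature points.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
corners = np.argwhere(R > 0.5)  # candidate feature points
```

Thresholding the response map plays the role of the predetermined threshold T_i mentioned in the text.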

FIG. 2 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to another embodiment of the present invention; FIG. 3 is a view showing a state in which at least some of the feature points or feature regions of the object extracted from the images obtained by the first camera and the second camera match each other; and FIG. 4 is a view for explaining the distance between the first camera and the second camera of FIG. 2 and the measured first, second, and third angles.

Hereinafter, a method of recognizing coordinates of a target object for visual servoing according to various embodiments of the present invention will be described in detail with reference to FIGS. 2 to 4.

First, images are acquired by a first camera and a second camera installed on both arms of the manipulator (S210).

Here, the first camera and the second camera may be installed at any position on each arm of the manipulator.

Since the coordinate recognition method according to the present invention estimates the coordinate information of the object from relative coordinate information, the first camera and the second camera in step S210 can each be installed not only at the end-effector of the manipulator but at any position on the respective arm.

Next, feature points or feature regions corresponding to the object are extracted from the images obtained in step S210 (S220).

Next, if at least some of the feature points or feature regions extracted in step S220 match each other, the matching feature is specified as the target object (S230).

For example, referring to FIG. 3, when the first camera is installed on the left arm of the manipulator and the second camera on the right arm, feature points are extracted from the left image acquired by the first camera and from the right image acquired by the second camera, and the feature point (star shape) whose shape matches in both images is specified as the target object for visual servoing.
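A minimal sketch of this matching step, under the assumption that feature points are represented by simple descriptor vectors (the patent does not specify a descriptor): each feature in the left image is paired with its nearest feature in the right image, and only pairs close enough in descriptor space are kept as the matched target object.

```python
import numpy as np

def match_features(desc_left, desc_right, max_dist=0.5):
    """Greedy nearest-neighbour matching of descriptor vectors.

    Returns (i, j) index pairs where left descriptor i and right
    descriptor j lie within max_dist of each other (Euclidean).
    """
    desc_left = np.asarray(desc_left, dtype=float)
    desc_right = np.asarray(desc_right, dtype=float)
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

# The star-shaped feature appears in both images with nearly identical
# (hypothetical) descriptors, so it is the only pair surviving the test.
left = [[0.0, 0.0], [1.0, 1.0]]    # e.g. clutter, star
right = [[1.05, 1.0], [5.0, 5.0]]  # star, unrelated clutter
print(match_features(left, right))  # → [(1, 0)]
```

The descriptor values here are invented for illustration; any descriptor comparable by distance would fit the same scheme.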

Next, an X-Y plane with the position of the first camera as the origin is set (S240).

Next, a first angle between the first camera and the target object and a second angle between the second camera and the target object are measured with respect to the Y-axis on the X-Y plane, and a third angle between the first camera and the second camera is measured with respect to the X-axis (S250).

For example, referring to FIG. 4, when the first camera (left circle) installed on the left arm of the manipulator is located at the origin (0, 0) and the second camera (right circle) at (x, 0), the first angle (θ1) between the first camera (left circle) and the target object (star) and the second angle (θ2) between the target object (star) and the second camera (right circle) are measured with respect to the Y-axis, and the third angle (φ) between the first camera (left circle) and the second camera (right circle) is measured with respect to the X-axis.

Next, the coordinates of the target object in the X-Y plane are estimated based on the distance between the first camera and the second camera and the first, second, and third angles measured in step S250 (S260).

Here, the estimating step may estimate the coordinates of the target object in the X-Y plane based on the distance d between the first camera and the second camera and the measured first angle θ1, second angle θ2, and third angle φ.

For example, as shown in FIG. 4, when one triangle is formed with the positions of the first camera (left circle), the second camera (right circle), and the target object (star) as its vertices, the first side of the triangle is the distance l1 between the first camera (left circle) and the target object (star), the second side is the distance l2 between the second camera (right circle) and the target object (star), and the third side is the distance d between the first camera (left circle) and the second camera (right circle).

At this time, applying the law of sines, which states that 'the values obtained by dividing each side of a triangle by the sine of its opposite angle are constant,' gives Equations (1) and (2) below. Here, θ1 is the first angle, θ2 the second angle, φ the third angle, and d the distance between the first camera and the second camera.

[Equation (1): equation image not reproduced]

[Equation (2): equation image not reproduced]

The above Equations (1) and (2) can be combined into Equation (3) with respect to the first side of the triangle, i.e., the distance l1 between the first camera (left circle) and the target object (star).

[Equation (3): equation image not reproduced]

Using Equation (3), Equation (4) expressing the x-coordinate of the target object (star) and Equation (5) expressing its y-coordinate are obtained as follows. Here, x is the x-coordinate of the target object and y its y-coordinate.

[Equation (4): equation image not reproduced]

[Equation (5): equation image not reproduced]
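Since the equation images are not reproduced here, the following sketch illustrates the law-of-sines computation under the simplifying assumption that both cameras lie on the X-axis (i.e., the third angle φ is zero), with θ1 and θ2 the bearings to the target measured from the Y-axis toward the other camera. It illustrates the geometry only; it is not the patent's exact Equations (3) to (5).

```python
import math

def triangulate_two_cameras(theta1, theta2, d):
    """Estimate the target's (x, y) from two camera bearings.

    Camera 1 sits at the origin and camera 2 at (d, 0).  By the law of
    sines, l1 / sin(90 deg - theta2) = d / sin(theta1 + theta2).
    """
    l1 = d * math.cos(theta2) / math.sin(theta1 + theta2)  # range from camera 1
    # theta1 is measured from the Y-axis, so x uses sin and y uses cos.
    return l1 * math.sin(theta1), l1 * math.cos(theta1)

# Target actually at (1, 2), cameras at (0, 0) and (3, 0):
theta1 = math.atan2(1, 2)        # bearing from camera 1
theta2 = math.atan2(3 - 1, 2)    # bearing from camera 2, toward camera 1
x, y = triangulate_two_cameras(theta1, theta2, 3.0)
print(round(x, 6), round(y, 6))  # → 1.0 2.0
```

Recovering the known target position confirms that the two bearings plus the baseline distance d determine the coordinates, which is the core of steps S250 and S260.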

According to the present invention, the x and y coordinates of the target object can be estimated easily from the relative polar coordinate information of each camera contained in the images acquired at the same time from the two cameras capturing a predetermined target object for visual servoing.

FIG. 5 is a flowchart illustrating a method of recognizing coordinates of a target object for visual servoing according to yet another embodiment of the present invention, and FIG. 6 is a view for explaining the displacement of the camera between the first and second positions of FIG. 5 and the measured first and second angles.

Hereinafter, a method of recognizing coordinates of a target object for visual servoing according to another embodiment of the present invention will be described with reference to FIGS. 5 and 6.

First, an image with the camera installed on one arm of the manipulator at a first position and an image with the camera at a second position are acquired (S310).

Next, feature points or feature regions corresponding to the object are extracted from the images obtained in step S310 (S320).

Next, if at least some of the feature points or feature regions extracted in step S320 match each other, the matching feature is specified as the target object (S330).

Next, an X-Y plane is set such that the first position of the camera is the origin and the second position of the camera is located on the Y axis (S340).

Next, a first angle (θ1) between the camera at the first position and the target object, and a second angle (θ2) between the camera at the second position and the target object, are measured with respect to the Y-axis on the X-Y plane (S350).

For example, referring to FIG. 6, when the first position (lower circle) of the camera is the origin (0, 0), the first angle (θ1) between the camera at the first position (lower circle) and the target object (star) and the second angle (θ2) between the target object (star) and the camera at the second position (upper circle) are measured.

Next, the coordinates of the target object in the X-Y plane are estimated based on the displacement d between the first position and the second position and the first angle θ1 and second angle θ2 measured in step S350 (S360).

For example, as shown in FIG. 6, when a triangle is formed with the first position of the camera (lower circle), the second position of the camera (upper circle), and the target object (star) as its vertices, the first side of the triangle is the distance l1 between the first position (lower circle) and the target object (star), the second side is the distance l2 between the second position (upper circle) and the target object (star), and the third side is the displacement d between the first position (lower circle) and the second position (upper circle) of the camera.

At this time, applying the law of sines as described above gives Equation (6) below. Here, θ1 is the first angle, θ2 the second angle, and d the displacement of the camera between the first and second positions.

[Equation (6): equation image not reproduced]

Equation (6) can be rearranged as Equation (7) with respect to the first side of the triangle, i.e., the distance l1 between the first position (lower circle) of the camera and the target object (star).

[Equation (7): equation image not reproduced]

Using Equation (7), Equation (8) expressing the x-coordinate of the target object (star) and Equation (9) expressing its y-coordinate are obtained as follows. Here, x is the x-coordinate of the target object and y its y-coordinate.

[Equation (8): equation image not reproduced]

[Equation (9): equation image not reproduced]
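Analogously, a sketch for the single-camera case, assuming the first camera position is the origin, the second lies at (0, d) on the Y-axis, θ1 is measured from the Y-axis at the first position, and θ2 is measured at the second position from the baseline back toward the first position. Again this illustrates the geometry rather than reproducing the patent's Equations (7) to (9).

```python
import math

def triangulate_moving_camera(theta1, theta2, d):
    """Estimate the target's (x, y) from one camera at two positions.

    First position at the origin, second at (0, d).  The interior angle
    at the target is 180 deg - theta1 - theta2, so by the law of sines
    l1 / sin(theta2) = d / sin(theta1 + theta2).
    """
    l1 = d * math.sin(theta2) / math.sin(theta1 + theta2)  # range from position 1
    return l1 * math.sin(theta1), l1 * math.cos(theta1)

# Target actually at (2, 1), camera positions (0, 0) and (0, 3):
theta1 = math.atan2(2, 1)        # bearing at the first position
theta2 = math.atan2(2, 3 - 1)    # bearing at the second position
x, y = triangulate_moving_camera(theta1, theta2, 3.0)
print(round(x, 6), round(y, 6))  # → 2.0 1.0
```

The same two-bearing triangulation works whether the baseline comes from two cameras or from one camera displaced by d, which is the point of this embodiment.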

According to the present invention, the x and y coordinates of the target object can be estimated easily using the relative polar coordinate information of the camera contained in images acquired at different points in time by a single camera capturing a predetermined target object for visual servoing.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments.

Claims (7)

1. (deleted)

2. (deleted)

3. Acquiring an image by a predetermined camera installed on at least one arm of a manipulator at a position spaced a predetermined distance from the target object for visual servoing;
Extracting a feature point or a feature region corresponding to the object from the acquired image;
Identifying a target object based on the extracted feature points or feature regions;
Setting a predetermined XY plane having a position of the camera as an origin;
Measuring an angle between the camera and the target object based on X and Y axes on the XY plane; And
Estimating coordinates of the object in the XY plane based on the measured angle,
Wherein, in the acquiring step, the camera includes a first camera and a second camera respectively installed on both arms of the manipulator,
Wherein the specifying step specifies, as the target object, feature points or feature regions at least part of which match between those respectively extracted from the images obtained by the first camera and the second camera,
Wherein the setting step sets a predetermined X-Y plane with the position of the first camera as the origin,
Wherein the measuring step measures a first angle between the first camera and the target object and a second angle between the second camera and the target object with respect to the Y-axis on the X-Y plane, and measures a third angle between the first camera and the second camera with respect to the X-axis,
And wherein the estimating step estimates the coordinates of the object in the X-Y plane based on the distance between the first camera and the second camera and the measured first, second, and third angles. A method of recognizing coordinates of a target object for visual servoing.
4. The method of claim 3,
Wherein, in the estimating step, the x-coordinate and y-coordinate of the target object are calculated according to Equation 1 and Equation 2 below, respectively.
(1)
[equation image not reproduced]

(2)
[equation image not reproduced]

Here, x is the x-coordinate of the object, y is the y-coordinate of the object, θ1 is the first angle, θ2 is the second angle, φ is the third angle, and d is the distance between the first camera and the second camera.
5. (deleted)

6. Acquiring an image by a predetermined camera installed on at least one arm of a manipulator at a position spaced a predetermined distance from the target object for visual servoing;
Extracting a feature point or a feature region corresponding to the object from the acquired image;
Identifying a target object based on the extracted feature points or feature regions;
Setting a predetermined XY plane having a position of the camera as an origin;
Measuring an angle between the camera and the target object based on X and Y axes on the XY plane; And
Estimating coordinates of the object in the XY plane based on the measured angle,
Wherein the acquiring step acquires an image when the camera is at the first position and an image when the camera is at the second position,
Wherein the specifying step specifies, as the target object, a feature point or feature region at least part of which matches between those extracted from the image at the first position and the image at the second position,
Wherein the setting step sets a predetermined X-Y plane such that the first position of the camera is the origin and the second position of the camera lies on the Y-axis,
Wherein the measuring step measures a first angle between the camera at the first position and the target object and a second angle between the camera at the second position and the target object with respect to the Y-axis on the X-Y plane,
And wherein the estimating step estimates the coordinates of the object in the X-Y plane based on the displacement between the first position and the second position and the measured first and second angles. A method of recognizing coordinates of a target object for visual servoing.
7. The method according to claim 6,
Wherein, in the estimating step, the x-coordinate and y-coordinate of the target object are calculated according to Equation (3) and Equation (4) below, respectively.
(3)
[equation image not reproduced]

(4)
[equation image not reproduced]

Here, x is the x-coordinate of the object, y is the y-coordinate of the object, θ1 is the first angle, θ2 is the second angle, and d is the displacement between the first position and the second position.
KR1020150143922A 2015-10-15 2015-10-15 Method for recognizing coordinates of object for visual servoing KR101747350B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150143922A KR101747350B1 (en) 2015-10-15 2015-10-15 Method for recognizing coordinates of object for visual servoing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150143922A KR101747350B1 (en) 2015-10-15 2015-10-15 Method for recognizing coordinates of object for visual servoing

Publications (2)

Publication Number Publication Date
KR20170044346A KR20170044346A (en) 2017-04-25
KR101747350B1 true KR101747350B1 (en) 2017-06-21

Family

ID=58703530

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150143922A KR101747350B1 (en) 2015-10-15 2015-10-15 Method for recognizing coordinates of object for visual servoing

Country Status (1)

Country Link
KR (1) KR101747350B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102102471B1 (en) * 2019-08-19 2020-04-20 유징테크주식회사 System for shape recognition based image processing

Also Published As

Publication number Publication date
KR20170044346A (en) 2017-04-25

Similar Documents

Publication Publication Date Title
US9386302B2 (en) Automatic calibration of extrinsic and intrinsic camera parameters for surround-view camera system
EP3168812B1 (en) System and method for scoring clutter for use in 3d point cloud matching in a vision system
US8265425B2 (en) Rectangular table detection using hybrid RGB and depth camera sensors
US9025009B2 (en) Method and systems for obtaining an improved stereo image of an object
KR101633620B1 (en) Feature registration apparatus for image based localization and method the same
Lins et al. Vision-based measurement for localization of objects in 3-D for robotic applications
CN106960454B (en) Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
KR100920931B1 (en) Method for object pose recognition of robot by using TOF camera
KR100930626B1 (en) Object Posture Recognition Method of Robot with Stereo Camera
JP2014013146A5 (en)
CN110555878B (en) Method and device for determining object space position form, storage medium and robot
KR101850835B1 (en) Method of estimating the location of mobile robot using ray-tracing technique
WO2018222122A1 (en) Methods for perspective correction, computer program products and systems
JP2014170368A (en) Image processing device, method and program and movable body
JP6410231B2 (en) Alignment apparatus, alignment method, and computer program for alignment
US20190313082A1 (en) Apparatus and method for measuring position of stereo camera
KR101747350B1 (en) Method for recognizing coordinates of object for visual servoing
CN106682584B (en) Unmanned aerial vehicle obstacle detection method and device
KR20180061803A (en) Apparatus and method for inpainting occlusion of road surface
KR101837269B1 (en) Coordination guide method and system based on multiple marker
Li et al. Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models
JP2017117038A (en) Road surface estimation device
WO2014192061A1 (en) Image processing device, image processing method, and image processing program
JP2010197186A (en) Object detector
JP7334626B2 (en) Image processing device and image processing method

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right