KR101781515B1 - Camera calibration system and method - Google Patents

Camera calibration system and method

Info

Publication number
KR101781515B1
Authority
KR
South Korea
Prior art keywords
camera
polyhedron
vertex
depth
image
Prior art date
Application number
KR1020150163440A
Other languages
Korean (ko)
Other versions
KR20170059272A (en)
Inventor
김회율
최건우
김성록
Original Assignee
한양대학교 산학협력단 (Hanyang University Industry-University Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한양대학교 산학협력단 (Hanyang University Industry-University Cooperation Foundation)
Priority to KR1020150163440A
Publication of KR20170059272A
Application granted
Publication of KR101781515B1

Classifications

    • H04N13/0271
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • H04N13/0282

Abstract

The present invention relates to a system and method for camera calibration using a polyhedron. The system comprises an RGB-D camera unit including a depth camera for acquiring a depth image of a moving polyhedron and a color camera for acquiring a color image; a first vertex extraction unit for extracting vertex information of the polyhedron from the depth image; a second vertex extraction unit for extracting vertex information of the polyhedron from the color image; and a first processing unit for performing a calibration operation that derives the relationship between the depth camera and the color camera by matching the vertex information extracted by the first vertex extraction unit with that extracted by the second vertex extraction unit. By exploiting the vertices of the polyhedron, each of which is the intersection point of three faces, the calibration of the cameras can be performed.

Description

CAMERA CALIBRATION SYSTEM AND METHOD

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a camera calibration system and method using a polyhedron and, more particularly, to a technique for simultaneously calibrating a camera system composed of RGB-D cameras.

Three-dimensional scanning technology has been studied for a long time, but because scanning the shape of an object required relatively expensive 3D scanner hardware, it was difficult to apply to general commercial use.

However, with the release of Microsoft's Kinect at a lower cost compared to existing devices, the range of applications of 3D scanner devices is greatly expanding.

In order to apply the 3D scanner device to various fields, it is necessary to acquire the color information simultaneously with the acquisition of the three-dimensional shape of the object.

To this end, the Kinect includes both a color camera and a depth camera; such devices are commonly referred to as RGB-D cameras.

Meanwhile, research is being conducted on 360-degree three-dimensional scanning of an object using several RGB-D cameras simultaneously.

Because a single RGB-D camera can capture an object from only one viewpoint, it is difficult to reconstruct the object's overall shape and color information.

Using multiple RGB-D cameras, on the other hand, the entire shape of an object can be restored, which enables applications such as precise motion recognition, 360-degree 3D modeling, virtual fitting, and special effects.

To use an RGB-D camera, camera calibration is required to express the viewpoint of each camera in a single coordinate system or to derive the relationship between a depth image and a color image.

Representative camera calibration techniques currently in use include Zhang's technique, Herrera's technique, and the DCCT (Depth Camera Calibration Toolbox) technique.

FIGS. 1A and 1B relate to the use of Zhang's technique, wherein FIG. 1A is a view in which corner points are found in a color image, and FIG. 1B is a view in which feature points are found in a depth image.

Zhang's technique uses three or more checkerboards for calibration. It first obtains the calibration matrix using feature points, such as the corners of the checkerboard itself, that can be found in both the depth images and the color images.

Next, using the fact that the corner points on a checkerboard lie on the same plane, the normal vector of each plane is obtained, and the calibration matrix is refined using the relationships among these normal vectors.
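For concreteness, the following is a minimal Python/OpenCV sketch of the checkerboard-corner stage that Zhang-style calibration builds on; it is an illustration, not the patent's method, and the board dimensions and image file names are assumptions.

```python
import cv2
import numpy as np

# Board geometry: 9x6 inner corners, unit square size (illustrative assumptions).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board plane, Z=0

obj_points, img_points, image_size = [], [], None
for path in ["board_00.png", "board_01.png", "board_02.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics, distortion, and per-view extrinsics from the planar views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
```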

FIGS. 2A and 2B relate to Herrera's technique: FIG. 2A shows corner points found in a color image, and FIG. 2B shows feature points found in a depth image.

Herrera's technique is similar to Zhang's technique described above, but additionally involves the relationship between the depth and disparity maps.

However, with the checkerboards used in Zhang's and Herrera's techniques, the corner points are easy to find in the color image but difficult to find in the depth image; moreover, the black squares of the checkerboard absorb the IR light, so the depth there is measured incorrectly.

In addition, there is the limitation that camera calibration can be performed only from viewpoints where the patterned front side of the checkerboard is visible.

FIGS. 3A and 3B relate to the DCCT technique: FIG. 3A shows a circle corresponding to the sphere found in a color image, and FIG. 3B shows the circle corresponding to the sphere being located and its center found.

The DCCT technique uses a single sphere instead of a checkerboard, exploiting the fact that a sphere retains its apparent shape regardless of viewing direction or distance. Since the center of the circle in each image corresponds to the center of the sphere, the circle centers can be found and used as corresponding points to compute the calibration matrix.

However, because the sphere used in the DCCT technique is a curved surface, only a limited portion of it is visible to a ToF (time-of-flight) depth camera, which measures the time taken for emitted light to return. When only this limited portion of the spherical surface is used, the center of the sphere is often not located accurately.

Also, owing to the nature of the depth image, the edge of the circle appears unstable, so the circle's center can be measured inaccurately.

Accordingly, an object of the present invention is to provide a camera calibration system and method that performs the calibration of cameras using a polyhedron as the three-dimensional calibration object, exploiting the vertices at which three faces of the polyhedron intersect.

According to an aspect of the present invention, there is provided a camera calibration system using a polyhedron, including: an RGB-D camera unit including a depth camera for acquiring a depth image of a moving polyhedron and a color camera for acquiring a color image; a first vertex extraction unit for extracting vertex information of the polyhedron from the depth image; a second vertex extraction unit for extracting vertex information of the polyhedron from the color image; and a first processing unit for performing a calibration operation that derives the relationship between the depth camera and the color camera by matching the vertex information extracted by the first vertex extraction unit with that extracted by the second vertex extraction unit.

The first vertex extracting unit and the second vertex extracting unit according to an embodiment of the present invention can extract a plurality of vertex information from the depth image and the color image, respectively.

Also, the RGB-D camera unit according to an embodiment of the present invention may be composed of a plurality of RGB-D cameras, in which case the first processing unit performs a calibration operation that simultaneously derives the relationship between the depth camera and the color camera of each of the plurality of RGB-D cameras.

According to another aspect of the present invention, there is provided a camera calibration system using a polyhedron, including: a plurality of RGB-D camera units each including a depth camera for acquiring a depth image of a moving polyhedron and a color camera for acquiring a color image; a first vertex extraction unit for extracting vertex information of the polyhedron from the depth image; a second vertex extraction unit for extracting vertex information of the polyhedron from the color image; a reconstruction unit for reconstructing a virtual polyhedron in three dimensions using the vertex information extracted by the first and second vertex extraction units; and a second processing unit for performing a calibration operation that derives the positional relationship between the plurality of RGB-D camera units from the vertex information, with the reconstructed virtual polyhedron as the reference.

According to another aspect of the present invention, the first vertex extractor and the second vertex extractor extract a plurality of vertex information from the depth image and the color image, respectively.

In addition, the virtual polyhedron according to another embodiment of the present invention is reconstructed on the basis of the world coordinate system.

According to another embodiment of the present invention, the vertex information further includes the edge vector information of the three edges that intersect at each vertex of the polyhedron.

According to an aspect of the present invention, there is provided a camera calibration method using a polyhedron, including: installing an RGB-D camera including a depth camera for acquiring a depth image and a color camera for acquiring a color image; acquiring a depth image and a color image while moving one polyhedron within the camera's field of view; extracting vertex information of the polyhedron from the depth image and from the color image; and performing a calibration operation that derives the relationship between the depth camera and the color camera by matching the vertex information of the polyhedron extracted from the depth image with that extracted from the color image.

In the step of extracting the vertex information of the polyhedron according to an embodiment of the present invention, a plurality of vertices serving as corresponding points for obtaining the calibration matrix are extracted from the depth image and the color image.

According to an embodiment of the present invention, the step of performing the calibration operation that derives the relationship between the depth camera and the color camera uses the identity between the vertex information of the polyhedron extracted from the depth image and that extracted from the color image, i.e., the fact that they represent the same physical vertices.

A plurality of RGB-D cameras may be installed according to an embodiment of the present invention, in which case the step of performing the calibration operation derives the relationships for the plurality of RGB-D cameras simultaneously.

In the step of extracting vertex information of the polyhedron from the color image according to an exemplary embodiment of the present invention, each vertex may be extracted as the intersection of the edges generated by the color discontinuities between the differently colored faces of the polyhedron.

In the step of extracting the vertex information of the polyhedron from the depth image according to an exemplary embodiment of the present invention, a three-dimensional point cloud is obtained from the depth image, the planes formed by the point cloud are estimated, and each vertex is extracted as the intersection of those planes.

According to another aspect of the present invention, there is provided a camera calibration method using a polyhedron, including: installing a plurality of RGB-D camera units each including a depth camera for acquiring a depth image and a color camera for acquiring a color image; acquiring depth images and color images while moving one polyhedron within the cameras' fields of view; extracting vertex information of the polyhedron from the depth images and from the color images; reconstructing a virtual polyhedron in three dimensions using the vertex information extracted from the depth images and the color images; and performing a calibration operation that derives the positional relationship between the plurality of RGB-D camera units with the reconstructed virtual polyhedron as the reference.

According to another aspect of the present invention, in the step of extracting the vertex information of the polyhedron, a plurality of vertices serving as corresponding points for obtaining the calibration matrix are extracted from the depth image and the color image.

Further, the positional relationship between the plurality of RGB-D camera units according to another embodiment of the present invention is derived from the rotation and translation information of the polyhedron's vertex coordinates.

According to the present invention, by using a polyhedron as the three-dimensional calibration object, the color camera 30 and the depth camera 20 of an RGB-D camera can be calibrated irrespective of the camera's position, and a plurality of synchronized RGB-D cameras can be calibrated simultaneously.

In addition, if the edge lengths of the polyhedron are known, the relationship between the synchronized cameras can be obtained.

In addition, according to the present invention, the polyhedron can be reconstructed on the basis of the depth image and color image captured by each camera, and, assuming the cameras remain in the same positions, other objects can then be reconstructed as well.

FIG. 1A is a diagram of Zhang's technique in which corner points are found in a color image.
FIG. 1B is a diagram of Zhang's technique in which feature points are found in a depth image.
FIG. 2A is a diagram of Herrera's technique in which corner points are found in a color image.
FIG. 2B is a diagram of Herrera's technique in which feature points are found in a depth image.
FIG. 3A is a diagram of the DCCT technique in which a circle corresponding to the sphere is found in a color image.
FIG. 3B is a diagram of the DCCT technique in which the circle corresponding to the sphere is located and its center is found.
FIG. 4 is a block diagram of a camera calibration system according to an embodiment of the present invention.
FIG. 5 is a block diagram of a camera calibration system according to another embodiment of the present invention.
FIG. 6 is a flowchart of a camera calibration method according to an embodiment of the present invention.
FIG. 7A is a view showing a color image of a cube taken by a color camera according to an embodiment of the present invention.
FIG. 7B is a view showing a depth image of a cube taken by a depth camera according to an embodiment of the present invention.
FIG. 8A is a diagram illustrating extraction of a vertex of a cube from a color image according to an embodiment of the present invention.
FIG. 8B is a diagram illustrating extraction of a vertex of a cube from a depth image according to an embodiment of the present invention.
FIG. 9 is a flowchart of a camera calibration method according to another embodiment of the present invention.
FIG. 10 is a diagram illustrating three-dimensional vertex coordinates and edge vectors extracted from actual data according to another embodiment of the present invention.
FIG. 11 is a view showing a virtual polyhedron reconstructed on the world coordinate system according to another embodiment of the present invention.
FIG. 12 is a diagram showing the point clouds obtained by each camera expressed in one coordinate system according to another embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings so that a person skilled in the art can easily carry out the technical idea of the present invention. In the drawings, the same reference numerals designate the same or similar components throughout. In the following description, detailed descriptions of known functions and configurations are omitted where they would obscure the subject matter of the present invention.


FIG. 4 is a block diagram of a camera calibration system according to an embodiment of the present invention, which includes an RGB-D camera unit 40, a first vertex extraction unit 210, a second vertex extraction unit 220, and a first processing unit 310.

The RGB-D camera unit 40 can capture a color (RGB) image and a depth image; an RGB-D camera may be an RGB camera with an integrated depth sensor, or a combination of an RGB camera and a depth camera.

The RGB-D camera unit 40 can acquire an image from a polyhedron moving within the field of view of the camera.

While a general camera obtains only a color image, the RGB-D camera unit 40 used in an embodiment of the present invention includes a depth camera 20 (or depth sensor) for obtaining a depth image and a color camera 30 for obtaining a color image.

The depth image and the color image each consist of temporally consecutive frames, with one image per frame.

The color image and the depth image of the polyhedron captured by the camera can be stored and updated in a separate storage device (not shown) for use in detecting vertex information of a polyhedron described later.

The color camera 30 and the depth camera 20 may photograph the polyhedron (the subject) simultaneously, the color camera 30 generating a color image and the depth camera 20 generating a depth image.

The depth camera 20 may include an infrared transmitter and an infrared receiver: the infrared light emitted by the transmitter is reflected by the subject and captured by the receiver, from which the depth of the subject is measured.

Depending on the design conditions, a SwissRanger 4000, PMD[vision] CamCube, D-IMager, or Microsoft Kinect, for example, may be used to capture the depth images.

Unlike the sphere used in the DCCT technique, whose curved surface is only partially visible to the depth camera, a polyhedron has no curved surfaces, so its vertices can be located precisely.

The color camera 30 and the depth camera 20 generate the color image and the depth image, respectively, and transmit the depth image to the first vertex extraction unit 210 and the color image to the second vertex extraction unit 220.

The first vertex extracting unit 210 extracts vertex information of the polyhedron from the depth image.

The second vertex extracting unit 220 extracts vertex information of the polyhedron from the color image.

At this time, the vertex information may further include the edge vector information of the three edges that intersect at each vertex of the polyhedron.

The first vertex extraction unit 210 and the second vertex extraction unit 220 may be combined into a single vertex extraction unit, or they may be implemented as separate components.

Since the polyhedron is not stationary during capture but moves within the camera's field of view, photographing the moving polyhedron yields a large number of points that can be used to obtain the calibration matrix.

That is, the first vertex extracting unit 210 may extract a plurality of vertex information from the depth image, and the second vertex extracting unit 220 may extract a plurality of vertex information from the color image.

The first processing unit 310 matches the vertex information of the polyhedron extracted by the first vertex extraction unit 210 with that extracted by the second vertex extraction unit 220, and performs a calibration operation that derives the relationship between the depth camera 20 and the color camera 30.

Since a vertex extracted from the color image and the corresponding vertex extracted from the depth image represent the same physical vertex of the polyhedron, the relationship between the depth camera 20 and the color camera 30 can be derived.

In other words, the geometric relationship between the cameras can be identified from the images received from the color camera 30 and the depth camera 20, and correction can be performed based on the identified relationship.

When the RGB-D camera unit 40 is composed of a plurality of RGB-D cameras, the first processing unit 310 can perform a calibration operation that simultaneously derives the relationship between the depth camera 20 and the color camera 30 of each of the RGB-D cameras.

FIG. 5 is a block diagram of a camera calibration system according to another embodiment of the present invention, which includes a plurality of RGB-D camera units 40, a first vertex extraction unit 210, a second vertex extraction unit 220, a reconstruction unit 230, and a second processing unit 320.

The RGB-D camera unit 40, the first vertex extraction unit 210, and the second vertex extraction unit 220 are as described above.

The first vertex extraction unit 210 and the second vertex extraction unit 220 extract the vertices of the polyhedron from the color image and the depth image acquired by the RGB-D camera unit 40, and each vertex can be represented by a three-dimensional coordinate value.

In addition, the first vertex extracting unit 210 and the second vertex extracting unit 220 may extract a plurality of vertex information from the depth image and the color image, respectively.

The coordinate information contained in the vertex information is expressed in a local coordinate system based on each RGB-D camera, so the individual camera coordinate systems need to be expressed in a single coordinate system, the world coordinate system.

To express the individual camera coordinate systems in one coordinate system, the detected vertices must be mapped to a virtual polyhedron.

In this process, the rotation and translation of each camera's vertex coordinates with respect to the virtual polyhedron can be obtained, and this is precisely the positional relationship of each camera.

The reconstruction unit 230 reconstructs the virtual polyhedron using the vertex information of the polyhedron extracted by the first vertex extraction unit 210 and that extracted by the second vertex extraction unit 220.

At this time, the virtual polyhedron is reconstructed in three dimensions on the basis of the world coordinate system.

The second processing unit 320 performs a calibration operation that derives the positional relationship between the plurality of RGB-D camera units 40 from the vertex information, including its coordinate information, with the reconstructed virtual polyhedron as the reference.

Hereinafter, a method of calibrating a camera according to the present invention will be described in detail with reference to embodiments of the present invention. However, these examples are for illustrative purposes only, and the scope of the present invention is not limited to these examples.

For the camera calibration according to an embodiment of the present invention, a cube is used as the polyhedron; however, the present invention is not limited thereto, and polyhedrons of various shapes can be used.

FIG. 6 shows a flowchart of a camera calibration method according to an embodiment of the present invention.

Referring to FIG. 6, an RGB-D camera including a depth camera 20 for acquiring a depth image and a color camera 30 for acquiring a color image is installed, and the depth image and the color image are acquired while one polyhedron is moved within the camera's field of view (S10).

FIG. 7A is a view showing a color image of a cube taken by a color camera according to an embodiment of the present invention, and FIG. 7B is a view illustrating a depth image of a cube captured by a depth camera according to an embodiment of the present invention.

Next, vertex information of the polyhedron is extracted from the depth image and from the color image (S11).

At this time, it is preferable to extract a plurality of vertices serving as corresponding points for obtaining the calibration matrix from the depth image and the color image.

FIG. 8A is a diagram illustrating extraction of a vertex of a cube from a color image according to an exemplary embodiment of the present invention, and FIG. 8B is a diagram illustrating extraction of a vertex of a cube from a depth image according to an exemplary embodiment of the present invention.

As shown in FIG. 8A, the vertex information of the polyhedron can be extracted from the color image through the intersections of the edges generated by the color discontinuities between the differently colored faces of the polyhedron.

That is, using the fact that each face of the polyhedron has a different color, the edges arising from the color discontinuities are detected, and each vertex is found as their intersection.

In this process, the arrangement of the three differently colored faces around a vertex identifies which vertex of the polyhedron it is.
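As an illustration of this step (a sketch, not the patent's prescribed computation), a vertex in the color image can be estimated as the intersection of two detected edge lines using homogeneous coordinates; the edge endpoints below are arbitrary example values.

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (x, y).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def edge_intersection(l1, l2):
    # Intersection of two homogeneous lines, returned as (x, y) pixels.
    x = np.cross(l1, l2)
    return x[:2] / x[2]  # assumes the lines are not parallel

# Illustrative endpoints along two color-discontinuity edges of the cube.
e1 = line_through((120.0, 80.0), (200.0, 150.0))
e2 = line_through((205.0, 148.0), (140.0, 260.0))
print(edge_intersection(e1, e2))  # sub-pixel vertex estimate
```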

As shown in FIG. 8B, extracting the vertex information of the polyhedron from the depth image may include obtaining a three-dimensional point cloud from the depth image, estimating the planes formed by the point cloud, and extracting each vertex as the intersection of those planes.

Since the brightness of each pixel in the image obtained by the depth camera 20 indicates its distance from the camera, the three-dimensional coordinates of each pixel can be computed. Such a set of three-dimensional points is called a point cloud.
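A minimal sketch of this back-projection, assuming a pinhole model; the intrinsic values fx, fy, cx, cy in the usage comment are illustrative Kinect-like numbers, not calibrated ones.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud.

    fx, fy, cx, cy are the depth camera intrinsics (focal lengths and
    principal point in pixels).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# cloud = depth_to_point_cloud(depth_img, fx=575.8, fy=575.8, cx=319.5, cy=239.5)
```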

In this way, a three-dimensional point cloud is obtained from the depth image, the planes formed by the point cloud are estimated by a method such as RANSAC (Random Sample Consensus), and each vertex is then extracted as the common intersection of the planes' intersection lines.
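The following Python sketch shows one plausible realization of this step: a basic RANSAC plane fit and the recovery of a vertex as the common point of three fitted faces. The thresholds and iteration counts are assumptions for illustration.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, thresh=0.005, rng=np.random.default_rng(0)):
    # Fit n . x = d to an Nx3 point set by RANSAC; thresh is in meters.
    best_inliers, best = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ p[0]
        inliers = np.sum(np.abs(points @ n - d) < thresh)
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, d)
    return best  # (normal, offset) of the best-supported plane

def vertex_from_planes(planes):
    # The vertex satisfies n_i . x = d_i for all three face planes.
    N = np.stack([n for n, _ in planes])
    d = np.array([d for _, d in planes])
    return np.linalg.solve(N, d)  # assumes the three faces are non-parallel
```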

Thereafter, a calibration operation is performed that derives the relationship between the depth camera 20 and the color camera 30 by matching the vertex information of the polyhedron extracted from the depth image with that extracted from the color image (S12).

The relationship between the depth camera 20 and the color camera 30 can be determined because the vertices in the two images are photographs of the same physical vertices.

That is, the calibration operation that derives the relationship between the depth camera 20 and the color camera 30 uses the identity between the vertex information of the polyhedron extracted from the depth image and that extracted from the color image.
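One concrete way to exploit such correspondences (a sketch under assumptions, not necessarily the patent's exact computation) is to treat the depth camera's 3D vertex coordinates and the color camera's 2D vertex pixels as a PnP problem. All numeric values below, including the color intrinsic matrix K_color, are hypothetical.

```python
import cv2
import numpy as np

# Matched vertices: 3D positions (meters) measured in the depth camera frame,
# and the pixels of the same vertices in the color image (made-up values).
pts_depth = np.array([[0.0, 0.0, 1.5], [0.2, 0.0, 1.5], [0.0, 0.2, 1.5],
                      [0.2, 0.2, 1.5], [0.0, 0.0, 1.7], [0.2, 0.0, 1.7]])
pts_color = np.array([[310.0, 240.0], [390.0, 242.0], [312.0, 320.0],
                      [392.0, 322.0], [318.0, 246.0], [386.0, 248.0]])
K_color = np.array([[525.0, 0.0, 319.5],
                    [0.0, 525.0, 239.5],
                    [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(pts_depth, pts_color, K_color, None)
R, _ = cv2.Rodrigues(rvec)  # rotation of the color camera w.r.t. the depth frame
```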

A plurality of RGB-D cameras may be installed, in which case the plurality of RGB-D cameras can be calibrated simultaneously.

FIG. 9 shows a flowchart of a camera calibration method according to another embodiment of the present invention.

Referring to FIG. 9, a plurality of RGB-D camera units 40, each including a depth camera 20 for acquiring a depth image and a color camera 30 for acquiring a color image, are installed, and depth images and color images are acquired while one polyhedron is moved within the cameras' fields of view (S20).

To capture the color images and depth images of the polyhedron, the plurality of RGB-D camera units 40 may be attached to supports so that their viewpoints do not move.

In this case, by installing the cameras so that their imaging directions are 90 degrees apart, images covering all 360 degrees around the polyhedron can be acquired, with the distance to the polyhedron kept at approximately 1 to 2 meters. However, the calibration operation can be performed by the method proposed in the present invention regardless of this particular arrangement.

Next, vertex information of the polyhedron is extracted from the depth images and from the color images (S21); this vertex information is expressed in the local coordinate system of each camera viewpoint.

At this time, it is preferable to extract a plurality of vertices serving as corresponding points for obtaining the calibration matrix from the depth image and the color image.

FIG. 10 is a diagram illustrating three-dimensional vertex coordinates and edge vectors extracted from actual data according to another embodiment of the present invention.

The vertex information may further include the edge vector information of the three edges that intersect at each vertex of the polyhedron.

At this time, since the information obtained from each camera is expressed in that camera's local coordinate system, the information of each point must be converted to a common world coordinate system.

To do this, one of the cameras may be selected and its local coordinate system used as the world coordinate system, or a new three-dimensional world coordinate system may be defined.

The vertex coordinates and edge vectors of the polyhedron observed by each camera are mapped onto the coordinate system of a virtual polyhedron having the same dimensions as the actual one, so as to minimize the mean squared error (MSE) of the distances between the vertices observed by each camera.
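This mapping is, in effect, a rigid alignment; a closed-form least-squares solution is the Kabsch/Procrustes method sketched below. This is an illustrative implementation with made-up cube coordinates, not the patent's prescribed algorithm.

```python
import numpy as np

def kabsch(observed, model):
    """Closed-form rigid transform (R, t) minimizing the mean squared error,
    so that model ~= R @ observed + t (Kabsch / orthogonal Procrustes)."""
    co, cm = observed.mean(axis=0), model.mean(axis=0)
    H = (observed - co).T @ (model - cm)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # forbid reflections
    R = Vt.T @ D @ U.T
    t = cm - R @ co
    return R, t

# Illustrative data: three vertices of a 20 cm cube as placed in the world
# (the virtual polyhedron) and the same vertices as seen from one camera.
model = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
observed = (model - np.array([0.1, 0.0, 1.5])) @ R_true.T  # camera-frame view
R, t = kabsch(observed, model)  # this camera's pose relative to the world frame
```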

Thereafter, the virtual polyhedron is reconstructed in three dimensions using the vertex information of the polyhedron extracted from the depth images and from the color images (S22).

Then, a calibration operation is performed that derives the positional relationship between the plurality of RGB-D camera units with the reconstructed virtual polyhedron as the reference (S23).

FIG. 11 is a view showing a virtual polyhedron reconstructed on the world coordinate system according to another embodiment of the present invention.

As shown in FIG. 11, the first camera 110, the second camera 120, the third camera 130, and the fourth camera 140 constituting the plurality of RGB-D cameras are positioned on a single world coordinate system, and the first vertex 11 extracted by the first camera 110, the second vertex 12 extracted by the second camera 120, the third vertex 13 extracted by the third camera 130, and the fourth vertex 14 extracted by the fourth camera 140, together with the edge coordinates, are mapped onto the virtual polyhedron.

In this process, the positional relationship between the RGB-D camera units 40 can be derived from each camera's vertex or edge coordinate information relative to the virtual polyhedron.

Specifically, it can be derived from the rotation and translation information of the polyhedron's vertex coordinates.

This corresponds to the process of obtaining the extrinsic parameters: the extrinsic parameters are the rotation and translation transformation matrix that converts between the world coordinate system and a camera's local coordinate system, i.e., the parameters describing the geometric relationship between the camera and the external space.
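For illustration, the extrinsic parameters can be packed into a homogeneous 4x4 matrix and applied to world points as follows; this is a generic sketch, not specific to the patent.

```python
import numpy as np

def extrinsic(R, t):
    # 4x4 world-to-camera transform built from rotation R and translation t.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_camera(T, pts_world):
    # Apply the extrinsic transform to an Nx3 array of world points.
    return (T[:3, :3] @ pts_world.T).T + T[:3, 3]
```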

FIG. 12 shows the result of expressing the point clouds obtained by each camera in one coordinate system according to another embodiment of the present invention.

A point cloud can be interpreted as the set of points created by scanning a three-dimensional scene with the depth camera 20; a point cloud acquired through at least one depth camera 20 consists of points, each carrying coordinates and a depth (intensity) value.

As the density increases and the data become more detailed, the point cloud, composed of numerous color and coordinate values and possessing spatial structure, can serve as a three-dimensional model.

Additional processing such as ICP (Iterative Closest Point) can be used to improve the calibration results. ICP is a method that starts from an approximate alignment and repeatedly re-adjusts the transformation parameters so as to minimize the distance between neighboring points of the point clouds.
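A minimal point-to-point ICP loop of this kind might look as follows; this is an illustrative sketch using scipy's cKDTree, and a real system would add convergence checks and outlier rejection.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, R, t, iters=20):
    """Minimal point-to-point ICP refinement of an initial (R, t).

    Each iteration pairs every transformed source point with its nearest
    target neighbor, then re-solves the rigid transform in closed form.
    """
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)
        matched = target[idx]
        # Closed-form (Kabsch) update on the current correspondences.
        cs, cm = source.mean(axis=0), matched.mean(axis=0)
        H = (source - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
    return R, t
```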

As described above, an optimal embodiment has been disclosed in the drawings and specification. Although specific terms have been employed herein, they are used for purposes of illustration only and are not intended to limit the scope of the invention defined in the claims. Therefore, those skilled in the art will appreciate that various modifications and equivalent embodiments are possible without departing from the scope of the present invention. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

11: First vertex
12: second vertex
13: third vertex
14: fourth vertex
20: Depth camera
30: Color camera
40: RGB-D camera unit
100: Camera Calibration System
110: First camera
120: Second camera
130: Third camera
140: Fourth camera
210: First vertex extraction unit
220: Second vertex extraction unit
230: Reconstruction unit
310: First processing unit
320: Second processing unit

Claims (16)

An RGB-D camera unit including a depth camera for acquiring a depth image of a moving polyhedron and a color camera for acquiring a color image;
A first vertex extractor for extracting vertex information of the polyhedron from the depth image;
A second vertex extractor for extracting vertex information of the polyhedron from the color image; And
a first processing unit for performing a calibration operation that derives a relationship between the depth camera and the color camera by matching the vertex information of the polyhedron extracted by the first vertex extractor with the vertex information of the polyhedron extracted by the second vertex extractor,
wherein the vertex information further includes edge vector information of the three edges that intersect at each vertex of the polyhedron,
Wherein the first vertex extractor and the second vertex extractor extract a plurality of vertex information from the depth image and the color image, respectively.
delete

The system according to claim 1,
wherein the RGB-D camera unit includes a plurality of RGB-D cameras, and
the first processing unit performs a calibration operation that simultaneously derives the relationship between the depth camera and the color camera of each of the plurality of RGB-D cameras.
A plurality of RGB-D camera units including a depth camera for acquiring a depth image of a moving polyhedron and a color camera for acquiring a color image;
A first vertex extractor for extracting vertex information of the polyhedron from the depth image;
A second vertex extractor for extracting vertex information of the polyhedron from the color image;
a reconstruction unit for reconstructing a virtual polyhedron in three dimensions using the vertex information of the polyhedron extracted by the first vertex extractor and the vertex information of the polyhedron extracted by the second vertex extractor; And
a second processing unit for performing a calibration operation that derives a positional relationship between the plurality of RGB-D camera units through the vertex information, with the reconstructed virtual polyhedron as the reference,
wherein the vertex information further includes edge vector information of the three edges that intersect at each vertex of the polyhedron,
Wherein the first vertex extractor and the second vertex extractor extract a plurality of vertex information from the depth image and the color image, respectively.
delete

The system of claim 4,
wherein the virtual polyhedron is reconstructed on the basis of the world coordinate system.
delete

A camera calibration method comprising the steps of: installing an RGB-D camera including a depth camera for acquiring a depth image and a color camera for acquiring a color image, and acquiring a depth image and a color image while moving one polyhedron within the camera's field of view;
Extracting vertex information of the polyhedron from the depth image and the color image; And
performing a calibration operation to derive a relationship between the depth camera and the color camera by matching the vertex information of the polyhedron extracted from the depth image with the vertex information of the polyhedron extracted from the color image,
wherein, in the step of extracting the vertex information of the polyhedron from the color image, the vertex information is extracted through intersections of edges generated by the color discontinuities between the differently colored faces of the polyhedron,
wherein, in the step of extracting the vertex information of the polyhedron from the depth image, a three-dimensional point cloud is obtained from the depth image, planes formed by the point cloud are estimated, and the vertex information is extracted through the intersection of the planes, and
wherein, in the step of extracting the vertex information of the polyhedron, a plurality of vertices serving as corresponding points for obtaining the calibration matrix are extracted from the depth image and the color image.
delete

The method of claim 8,
wherein the step of performing the calibration operation to derive the relationship between the depth camera and the color camera uses the identity between the vertex information of the polyhedron extracted from the depth image and the vertex information of the polyhedron extracted from the color image.
The method of claim 8,
wherein a plurality of RGB-D cameras are installed, and
the calibration operation for deriving the relationship between the depth camera and the color camera is performed simultaneously for the plurality of RGB-D cameras.
delete

delete

A camera calibration method comprising the steps of: installing a plurality of RGB-D camera units including a depth camera for acquiring a depth image and a color camera for acquiring a color image, and acquiring a depth image and a color image while moving one polyhedron within the cameras' field of view;
Extracting vertex information of the polyhedron from the depth image and the color image;
Reconstructing a virtual polyhedron in three dimensions using the vertex information of the polyhedron extracted from the depth image and the vertex information of the polyhedron extracted from the color image; And
performing a calibration operation to derive a positional relationship between the plurality of RGB-D camera units through coordinate information included in the vertex information, with the reconstructed virtual polyhedron as the reference,
wherein, in the step of extracting the vertex information of the polyhedron from the color image, the vertex information is extracted through intersections of edges generated by the color discontinuities between the differently colored faces of the polyhedron,
wherein, in the step of extracting the vertex information of the polyhedron from the depth image, a three-dimensional point cloud is obtained from the depth image, planes formed by the point cloud are estimated, and the vertex information is extracted through the intersection of the planes, and
wherein, in the step of extracting the vertex information of the polyhedron, a plurality of vertices serving as corresponding points for obtaining the calibration matrix are extracted from the depth image and the color image.
delete

The method of claim 14,
wherein the positional relationship between the plurality of RGB-D camera units is derived through rotation and translation information of the coordinates of the polyhedron's vertices.




KR1020150163440A 2015-11-20 2015-11-20 Camera calibration system and method KR101781515B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150163440A KR101781515B1 (en) 2015-11-20 2015-11-20 Camera calibration system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150163440A KR101781515B1 (en) 2015-11-20 2015-11-20 Camera calibration system and method

Publications (2)

Publication Number Publication Date
KR20170059272A KR20170059272A (en) 2017-05-30
KR101781515B1 (en)

Family

ID=59052993

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150163440A KR101781515B1 (en) 2015-11-20 2015-11-20 Camera calibration system and method

Country Status (1)

Country Link
KR (1) KR101781515B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628968B1 (en) 2018-12-05 2020-04-21 Toyota Research Institute, Inc. Systems and methods of calibrating a depth-IR image offset

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108347600B (en) * 2018-03-05 2020-01-07 上海复瞻智能科技有限公司 Industrial camera correction method and system
KR102206108B1 (en) * 2019-09-20 2021-01-21 광운대학교 산학협력단 A point cloud registration method based on RGB-D camera for shooting volumetric objects
CN113433533B (en) * 2021-07-09 2024-04-19 上海研鼎信息技术有限公司 TOf camera testing device and testing method thereof


Also Published As

Publication number Publication date
KR20170059272A (en) 2017-05-30


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant