KR101634283B1 - The apparatus and method of 3d modeling by 3d camera calibration - Google Patents

The apparatus and method of 3d modeling by 3d camera calibration

Info

Publication number
KR101634283B1
KR101634283B1 KR1020150181436A
Authority
KR
South Korea
Prior art keywords
dimensional
calibration
control unit
modeling
image
Prior art date
Application number
KR1020150181436A
Other languages
Korean (ko)
Inventor
박영기
김원길
김대용
Original Assignee
주식회사 싸인텔레콤
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 싸인텔레콤 filed Critical 주식회사 싸인텔레콤
Priority to KR1020150181436A priority Critical patent/KR101634283B1/en
Application granted granted Critical
Publication of KR101634283B1 publication Critical patent/KR101634283B1/en

Links

Images

Classifications

    • H04N13/02
    • H04N13/0003
    • H04N13/0048
    • H04N13/0425

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

Conventionally, devices using two or more pieces of image information have been proposed to increase the reliability and accuracy of calibration data, but as the number of images increases, the amount of calculation increases, so such devices cannot be used in a surveillance system based on a single camera; furthermore, accurate calibration points cannot be estimated when the outlier components in the image are dominant or when the line components available for calibration point estimation are insufficient. To solve these problems, the present invention comprises an image coordinate transformation algorithm engine unit 100, a calibration point estimation control unit 200, a three-dimensional camera calibration control unit 300, and a three-dimensional spatial modeling forming unit 400. Three-dimensional spatial modeling can be reconstructed on the basis of three-dimensional information measured from a single camera and a single image, and the invention can easily be applied to existing installed surveillance cameras, so that compatibility and application range can be improved by 80% compared with the conventional art. Moreover, robust calibration can be performed even when automatic correction is difficult due to lack of image information or texture noise, and high-quality 3D spatial modeling data are generated. Since three-dimensional space modeling is formed on a specific space and a specific object of the corrected single image, the actual distance can be calculated at every position in the image, so that the actual size, moving speed, and moving direction of a specific object can be measured, and the surveillance region and tracking range can be improved by 80%. The present invention thus provides an apparatus and method for controlling three-dimensional space modeling through three-dimensional camera calibration in a single image.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to an apparatus and method for controlling three-dimensional spatial modeling through three-dimensional camera calibration in a single image.

In the present invention, three-dimensional information of a specific space and a specific object is measured with four or more calibration points that can be measured from a single camera and a single image, and three-dimensional spatial modeling is then formed through three-dimensional camera calibration. The apparatus calculates the real distance at every position of a specific object in the image, and measures and reconstructs the actual size, moving speed, and moving direction of the specific object through three-dimensional camera calibration in a single image.

The three-dimensional information of the real world is converted into the form of two-dimensional image data through a digital image acquisition device such as a camera or a camcorder.

Three-dimensional data such as size, shape, and rotation are lost in the process of projecting the three-dimensional spatial information onto the two-dimensional plane.

This loss of 3D data means the loss of the reference information needed to generate 3D spatial information in an intelligent surveillance system, in which the relative positions of cameras and the movement trajectories and sizes of objects must be used.

Therefore, a camera calibration technique is needed to estimate the lost three-dimensional information.

The general calibration procedure performs localization of the points on a calibration pattern and either calculates the camera parameters directly or estimates them using the geometric properties of the pattern.

Currently, the most widely used passive method uses pattern images taken at different angles, or multiple images taken at the same point from different angles.

Conventionally, devices using two or more pieces of image information have been proposed in order to increase the reliability and accuracy of calibration data. However, as the number of images increases, the amount of calculation increases, so such devices have the disadvantage that they cannot be used in a surveillance system based on a single camera.

Also, when the calibration point is estimated, accurate calibration points cannot be estimated when the outlier components in the image are dominant or when the line components available for calibration point estimation are insufficient.

Korean Patent Registration No. 10-0890625

In order to solve the above problems, the present invention makes it possible to reconstruct three-dimensional space modeling based on measured three-dimensional information even with a single camera and a single image, and to generate high-quality 3D spatial modeling data. Because three-dimensional space modeling can be formed on a specific space and a specific object of the corrected single image, the actual distance can be calculated at every position in the image, and thereby the actual size, moving speed, and moving direction of a specific object can be measured. The present invention provides an apparatus and method for controlling three-dimensional space modeling through three-dimensional camera calibration in a single image.

According to an aspect of the present invention, there is provided an apparatus for controlling three-dimensional space modeling through three-dimensional camera calibration in a single image,

The apparatus reconstructs three-dimensional space modeling based on the measured three-dimensional information after measuring the three-dimensional information of a specific space and a specific object with four or more calibration points that can be measured from a single camera and a single image.

More specifically, an image coordinate transformation algorithm engine unit 100 for transforming three-dimensional spatial coordinates of a real space into two-dimensional image coordinates of a single image,

A calibration point estimation control unit 200 for forming a virtual world coordinate system of a line and a point in a single image and then setting a spatial calibration point and a spatial origin to estimate a calibration point;

A three-dimensional camera calibration control unit 300 which, when four or more calibration points are estimated through the calibration point estimation control unit, calculates the camera parameter values (X-axis calibration point, Y-axis calibration point, Z-axis calibration point, focal distance, pan, and tilt) and thereby performs three-dimensional camera calibration;

And a three-dimensional spatial modeling unit 400 for forming three-dimensional space modeling on a specific space and a specific object of the single image corrected through the three-dimensional camera calibration control unit.

In addition, a method for forming a three-dimensional space modeling through a three-dimensional camera calibration in a single image according to the present invention

A step (S10) of converting the three-dimensional spatial coordinates of the actual specific space into the two-dimensional image coordinates of a single image through the image coordinate conversion algorithm engine unit according to the present invention, and setting the internal parameters and external parameters for the three-dimensional camera calibration;

(S20) of forming a virtual space-type world coordinate on a single image through a calibration point estimation controller and then setting a spatial correction point and a spatial origin to estimate a correction point;

A step (S30) in which, when four or more calibration points are estimated through the calibration point estimation control unit, the 3D camera calibration control unit calculates the camera parameter values (X-axis calibration point, Y-axis calibration point, Z-axis calibration point, focal distance, pan, and tilt) to perform three-dimensional camera calibration; and

(S40) of forming three-dimensional space modeling on a specific space and a specific object of the single image corrected through the three-dimensional camera calibration control unit in the three-dimensional spatial modeling forming unit.

As described above, in the present invention,

First, it is possible to reconstruct three-dimensional space modeling based on the measured three-dimensional information from a single camera and a single image, and to easily apply it to existing installed surveillance cameras, thereby improving compatibility and application range by 80%.

Second, robust calibration can be performed even when automatic correction is difficult due to lack of image information or texture noise, and high-quality 3D spatial modeling data can be generated compared with existing line-based devices.

Third, it is possible to form 3D spatial modeling on a specific space and a specific object of the corrected single image, so the actual distance can be calculated at every position in the image; thereby the actual size, moving speed, and moving direction of a specific object can be measured, and the monitoring area and tracking range can be improved by 80% compared to the conventional art.

FIG. 1 is a block diagram showing the components of a three-dimensional spatial modeling control apparatus 1 through three-dimensional camera calibration in a single image according to the present invention;
FIG. 2 is a perspective view showing the components of the three-dimensional spatial modeling formation control apparatus according to the present invention;
FIG. 3 is an exploded perspective view showing the components of the pinhole camera model algorithm according to the present invention;
FIG. 4 is a block diagram showing the components of the calibration point estimation control unit according to the present invention;
FIG. 5 is a diagram showing the relationship between a single image and the calibration points according to the present invention, and the relationship required for estimating the focal distance;
FIG. 6 is a block diagram showing the components of the three-dimensional spatial modeling unit according to the present invention;
FIG. 7 is one embodiment showing that, in the space-type world coordinate setting unit according to the present invention, the reference coordinate point is set on the XY plane, which is the floor of the space in a single image;
FIG. 8 is one embodiment showing that, through the first real world coordinate calculation unit according to the present invention, when the single image is a road image in a tunnel, two points along a road dividing line are displayed on the road surface and two further points are displayed at the same distance in the road direction, thereby defining four points with a rectangular shape;
FIG. 9 is one embodiment showing the setting, through the polygon pointer control unit according to the present invention, of a polygon pointer consisting of lines and points along the bottom surface of a specific space to be rendered on the image plane of a single image;
FIG. 10 is one embodiment showing the setting, through the Z-axis displacement control unit according to the present invention, of a displacement along the Z axis of the real-world coordinates at the real-world coordinate points of the floor computed by the second real world coordinate point computing unit;
FIG. 11 is one embodiment showing that the three-dimensional space modeling control unit according to the present invention corrects the coordinate points set on the X, Y, and Z axes and connects the image coordinate points of a single image photographed by a single camera, thereby forming three-dimensional spatial modeling on a specific space and a specific object of the single image;
FIG. 12 is one embodiment showing that three-dimensional space modeling is formed on a specific space and a specific object of the corrected single image in the three-dimensional space modeling control unit according to the present invention, so that the actual distance is calculated at every position in the image and the actual size, moving speed, and moving direction of a specific object are measured;
FIG. 13 is a flowchart illustrating a method of forming three-dimensional space modeling through three-dimensional camera calibration in a single image according to the present invention;
FIG. 14 is a flowchart showing the specific process of forming three-dimensional space modeling on a specific space and a specific object of the single image corrected through the three-dimensional camera calibration control unit in the three-dimensional space modeling forming unit according to the present invention.

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.

FIG. 1 is a block diagram showing the components of a three-dimensional spatial modeling control apparatus 1 through three-dimensional camera calibration in a single image according to the present invention. The apparatus measures the three-dimensional information of a specific space and a specific object, and reconstructs three-dimensional space modeling based on the measured three-dimensional information.

As shown in FIG. 2, the 3D spatial modeling control apparatus 1 includes an image coordinate conversion algorithm engine unit 100, a calibration point estimation control unit 200, a three-dimensional camera calibration control unit 300, and a three-dimensional spatial modeling unit 400.

First, the image coordinate transformation algorithm engine unit 100 according to the present invention will be described.

The image coordinate conversion algorithm engine unit 100 converts the three-dimensional spatial coordinates of an actual specific space into two-dimensional image coordinates of a single image.

It consists of a pinhole camera model algorithm 110.

As shown in FIG. 3, the pinhole camera model algorithm 110 models the structure in which light reflected from an object reaches the retina (or the sensor of a camera) through the eye lens (or the lens of a camera).

That is, when light is reflected at a point on an object in the three-dimensional real world, among the rays leaving that point at various angles, only the ray directed toward the pinhole passes without being blocked by the pinhole plane.

As a result, these rays are focused on what is called the image plane or projective plane.

The size of this image is determined by the focal length of the pinhole camera.

The pinhole camera model algorithm is expressed by the following equation (1).

Figure 112015124203406-pat00001

Here, x and y represent the horizontal and vertical coordinates of the image plane, and X, Y, and Z represent coordinates in three-dimensional space. f_x and f_y are the focal lengths for the x and y axes, c_x and c_y are the center of the image, S is the scale, γ is the asymmetry coefficient, r_ij (i, j = 1, 2, 3) represents rotation information, and t_k (k = 1, 2, 3) represents translation information.

The leftmost 3×3 matrix on the right side of Equation 1 is the internal parameter matrix, and the middle 3×4 matrix is called the external parameter matrix, denoted [R][T].

Parallel straight lines existing in three-dimensional space theoretically remain parallel in Euclidean coordinates, but when they are projected onto the two-dimensional image plane by a single camera, they converge and meet at a point.

In addition, since the concept of a point at infinity becomes usable, the point at which the parallel lines cross each other is called a calibration point. These points have the property that the line segments corresponding to mutually orthogonal axes are orthogonal to each other.

By using this property, it is possible to estimate the three-dimensional information corrected by the perspective projection transformation.
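As a hedged illustration of Equation 1 (all variable names and numeric values below are assumptions for the example, not values from the patent), the projection of a three-dimensional world point through the internal parameter matrix K and the external parameters [R][T] can be sketched as:

```python
import numpy as np

def project_point(K, R, t, Xw):
    """Project a 3-D world point Xw to 2-D image coordinates (Equation 1)."""
    Xc = R @ Xw + t          # apply external parameters [R][T] (world -> camera)
    uvw = K @ Xc             # apply internal parameters (camera -> image)
    return uvw[:2] / uvw[2]  # divide by the scale S (perspective division)

# Internal parameters: f_x = f_y (equal focal lengths), c_x, c_y at the image
# centre, and asymmetry coefficient gamma = 0, as assumed later in the text.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # no rotation
t = np.array([0.0, 0.0, 5.0])    # the point lies 5 units in front of the camera

uv = project_point(K, R, t, np.array([0.0, 0.0, 0.0]))
# A point on the optical axis projects to the image centre (c_x, c_y).
```

The example confirms the sanity check that a world point on the optical axis lands exactly on the image centre.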

Next, the calibration point estimation control unit 200 according to the present invention will be described.

The calibration point estimation control unit 200 forms imaginary space-type world coordinates made up of lines and points on a single image, and then sets a spatial correction point and a spatial origin to estimate a calibration point.

As shown in FIG. 4, it consists of a space-type world coordinate setting unit 210, a spatial correction point setting unit 220, a spatial origin setting unit 230, and a first real world coordinate computing unit 240.

First, the spatial world coordinate setting unit 210 according to the present invention will be described.

The space-type world coordinate setting unit 210 sets the bottom surface of the space as the XY plane of a single image, sets the Z-axis direction perpendicular to the bottom surface, then sets an arbitrary direction in the plane as the X-axis direction and the direction orthogonal to it as the Y-axis direction, thereby forming world coordinates in the shape of a rectangular hexahedron.

As shown in FIG. 7, the reference coordinate point is set on the XY plane, which is the floor of the space in the single image.

The reason for this is that the calibration line obtained by connecting the two calibration points of the X and Y axes is used as the horizontal reference line.

Second, the spatial correction point setting unit 220 according to the present invention will be described.

The spatial correction point setting unit 220 sets four points corresponding to a rectangle in the XY plane set by the space-type world coordinate setting unit as calibration points, or extracts four points that can be measured from a single image and sets them as calibration points.

That is, when the calibration points are orthogonal to each other and, at each calibration point, a perpendicular is drawn to the line formed by the other two calibration points, the intersection of the perpendicular lines is found at the center of the image.

Based on this, four points corresponding to a rectangle are set as calibration points in the XY plane set by the space-type world coordinates setting unit, or four points that can be measured in a single image are set as correction points.

Third, the spatial origin setting unit 230 according to the present invention will be described.

The space-type origin point setting unit 230 sets the center point of the four rectangular calibration points as the origin of the world coordinates, which is a rectangular hexahedron.

Here, the origin refers to the center point.

Fourth, the first real world coordinate calculation unit 240 according to the present invention will be described.

The first real world coordinate calculation unit 240 calculates the real world coordinates of the four calibration points with respect to the origin.

That is, as shown in Fig. 8, when the single image is a road image in a tunnel, two points along the dividing line are displayed on the road surface, and two further points are added at the same distance in the road direction, defining four calibration points with a rectangular shape.

In this case, the real world coordinate for the four calibration points is a three-dimensional coordinate composed of X, Y, and Z, and the Z coordinate is '0'.
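To make this setup concrete, the following sketch builds the four rectangular ground-plane calibration points and their spatial origin; the lane width and point spacing are illustrative assumptions, not dimensions from the patent.

```python
import numpy as np

lane_width, spacing = 3.5, 10.0   # assumed tunnel-road dimensions (metres)

# Four calibration points forming a rectangle on the floor (XY) plane;
# because they lie on the bottom surface, every Z coordinate is 0.
calib_points = np.array([
    [-lane_width / 2, -spacing / 2, 0.0],
    [ lane_width / 2, -spacing / 2, 0.0],
    [ lane_width / 2,  spacing / 2, 0.0],
    [-lane_width / 2,  spacing / 2, 0.0],
])

# The spatial origin setting unit takes the centre of the four points
# as the origin of the world coordinates.
origin = calib_points.mean(axis=0)
```

By construction the origin coincides with the centre of the rectangle and all four Z coordinates are 0, matching the text.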

Next, the three-dimensional camera calibration control unit 300 according to the present invention will be described.

If four or more calibration points are estimated through the calibration point estimation control unit, the three-dimensional camera calibration control unit 300 calculates camera parameters such as the X-axis calibration point, Y-axis calibration point, Z-axis calibration point, focal distance, pan, and tilt, thereby performing three-dimensional camera calibration.

It consists of a hybrid camera calibration algorithm engine unit 310 combining an algebraic algorithm and a geometric algorithm.

That is, among the internal parameters, c_x and c_y are set equal to the center point of the image, f_x and f_y have the same focal length value, and γ is set to zero.

Assuming that the estimated calibration points are V_i = (x_i, y_i, z_i, f), i = 1, 2, 3, the focal length is expressed by the following Equation 2.

Figure 112015124203406-pat00002

Here, when the value of x_i·x_j + y_i·y_j + z_i·z_j is a positive number, the focal distance is instead calculated through a geometric algorithm.

FIG. 5 illustrates the relationship between the single image and the calibration points, and the relationship required for estimating the focal distance: the calibration line connecting the intersections V1 and V2 is perpendicular to a straight line passing through the center of the image.

Also, the single image plane is perpendicular to the optical axis of the camera coordinate system, and the lines from the two calibration points to O are perpendicular to each other.

From this relationship, the focal length

Figure 112015124203406-pat00003

is calculated according to the following Equation 3.

Figure 112015124203406-pat00004
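The algebraic branch of the focal-length estimate can be sketched as follows. Under the usual pinhole assumptions, two calibration (vanishing) points of orthogonal world directions, expressed relative to the image centre, satisfy x_i·x_j + y_i·y_j + f² = 0; the coordinates below are illustrative, and the geometric fallback named in the text is only stubbed.

```python
import math

def focal_from_calibration_points(v1, v2):
    """Algebraic focal-length estimate from two calibration points of
    orthogonal world directions, measured relative to the image centre."""
    x1, y1 = v1
    x2, y2 = v2
    dot = x1 * x2 + y1 * y2
    if dot >= 0:
        # Degenerate for the closed form; the patent switches to a
        # geometric algorithm (Equation 3) in this case.
        return None
    return math.sqrt(-dot)

f = focal_from_calibration_points((600.0, 50.0), (-400.0, 100.0))
# dot = -235000, so f = sqrt(235000) pixels
```

The sign test on the dot product mirrors the positive/negative condition stated after Equation 2.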

The hybrid camera calibration algorithm engine unit according to the present invention uses the relation between the calibration point and the internal parameters to express the tilt rotation transformation matrix, which is an external parameter, as shown in Equation (4).

Figure 112015124203406-pat00005

Here, S = {S1, S2, S3}.

The tilt rotation conversion value of the three-dimensional camera calibration can be estimated through Equation (4).
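In the same spirit as Equation 4, the rotation (and hence the tilt) can be recovered from the normalised rays through the calibration points V_i = (x_i, y_i, f). This is a hedged sketch rather than the patent's exact formulation; the two calibration points below are chosen so that the orthogonality constraint x1·x2 + y1·y2 + f² = 0 holds.

```python
import numpy as np

def rotation_from_calibration_points(vx, vy, f):
    """Columns of R are the world X, Y, Z axis directions in the camera frame."""
    r1 = np.array([vx[0], vx[1], f]); r1 /= np.linalg.norm(r1)
    r2 = np.array([vy[0], vy[1], f]); r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)   # the Z axis follows from orthogonality
    return np.column_stack([r1, r2, r3])

# (500, 0) and (-500, 400) with f = 500 satisfy 500*(-500) + 0*400 + 500**2 = 0.
R = rotation_from_calibration_points((500.0, 0.0), (-500.0, 400.0), 500.0)
tilt_deg = np.degrees(np.arcsin(-R[2, 1]))   # one common tilt extraction
```

Because the two input rays are orthogonal unit vectors, the result is a proper rotation matrix, from which the tilt conversion value can be read off.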

Next, the three-dimensional space modeling forming unit 400 according to the present invention will be described.

The three-dimensional spatial modeling unit 400 forms a three-dimensional spatial modeling on a specific space and a specific object of a single image corrected through the three-dimensional camera calibration control unit.

As shown in FIG. 6, it consists of a polygon pointer control unit 410, a second real world coordinate point calculation unit 420, a Z-axis displacement control unit 430, an image coordinate point conversion unit 440, and a three-dimensional space modeling control unit 450.

As shown in FIG. 9, the polygon pointer control unit 410 sets a polygon pointer consisting of lines and points along the bottom surface of a specific space to be depicted on the image plane of the single image.

The second real world coordinate point calculator 420 calculates a real coordinate point of the polygon pointer set through the polygon pointer controller.

As shown in FIG. 10, the Z-axis displacement control unit 430 sets the displacement of the real world coordinate in the Z-axis on the real world coordinate points of the bottom surface of the second real world coordinate point computing unit.

The image coordinate point conversion unit 440 converts the real-world coordinate points into image coordinate points through the three-dimensional camera calibration information.

As shown in FIG. 11, the three-dimensional space modeling control unit 450 corrects the coordinate points set on the X, Y, and Z axes of the first single image through the smart camera calibration control unit, and forms three-dimensional spatial modeling on a specific space and a specific object of the single image by connecting the image coordinate points of the single image photographed by the single camera.

As a result, as shown in FIG. 12, it is possible to form three-dimensional space modeling on a specific space and a specific object of the corrected single image, so that the actual distance can be calculated at every position in the image, and the actual size, moving speed, and moving direction of a specific object can be measured.
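The measurement step can be sketched with a ground-plane homography: once K, R, and t are calibrated, H = K·[r1 r2 t] maps floor (Z = 0) world coordinates to pixels, and its inverse recovers real positions, from which distances and speeds follow. The camera pose and pixel values below are illustrative assumptions (an overhead camera 10 m above the floor), not the patent's values.

```python
import numpy as np

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[1.0,  0.0,  0.0],    # overhead camera looking straight down
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 10.0])      # camera 10 m above the floor origin

# Ground-plane (Z = 0) homography: world floor point -> image pixel.
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
H_inv = np.linalg.inv(H)

def ground_point(u, v):
    """Back-project an image pixel to real-world floor coordinates (X, Y)."""
    w = H_inv @ np.array([u, v, 1.0])
    return w[:2] / w[2]

p1 = ground_point(320.0, 240.0)     # image centre -> world origin (0, 0)
p2 = ground_point(530.0, 520.0)     # pixel of the world floor point (3, -4)
distance = np.linalg.norm(p2 - p1)  # real distance on the floor, in metres
speed = distance / 0.5              # moving speed if the frames are 0.5 s apart
```

With these assumed values the two pixels back-project to floor points 5 m apart, so an object covering that gap in 0.5 s moves at 10 m/s.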

Hereinafter, a method for forming a three-dimensional space modeling through a three-dimensional camera calibration in a single image according to the present invention will be described in detail.

First, as shown in FIG. 13, three-dimensional spatial coordinates of an actual specific space are converted into two-dimensional image coordinates of a single image through an image coordinate conversion algorithm engine unit (S10).

That is, internal parameters for the three-dimensional camera calibration (the horizontal and vertical coordinates of the image plane, coordinates in three-dimensional space, focal length, image center, scale, and asymmetry coefficient) and external parameters (rotation information and translation information) are set, so that the three-dimensional spatial coordinates of the actual specific space are converted into the two-dimensional image coordinates of a single image.

Next, a virtual space-type world coordinate is formed on a single image through a correction point estimation controller, and a calibration point is estimated by setting a spatial correction point and a spatial origin (S20).

Next, when four or more calibration points are estimated through the calibration point estimation control unit, the three-dimensional camera calibration control unit calculates the camera parameter values, namely the X-axis calibration point, Y-axis calibration point, Z-axis calibration point, focal distance, pan, and tilt, and performs three-dimensional camera calibration (S30).

Finally, three-dimensional spatial modeling is formed on the specific space and the specific object of the single image corrected by the three-dimensional camera calibration control unit in the three-dimensional spatial modeling forming unit (S40).

That is, as shown in FIG. 14, a polygon pointer consisting of lines and points is set along the bottom surface of a specific space to be rendered on the image plane of the single image through the polygon pointer control unit (S41).

Then, the second real world coordinate point computing unit computes a real world coordinate point of the polygon pointer set through the polygon pointer control unit (S42).

Subsequently, the Z-axis displacement control unit sets a displacement in the Z-axis of the real world coordinate to the real world coordinate points of the bottom surface of the second real world coordinate point computing unit (S43).

Then, the image coordinate point conversion unit converts the real-world coordinate points into the image coordinate point through the three-dimensional camera calibration information (S44).

Finally, the three-dimensional space modeling control unit corrects the coordinate points set on the X, Y, and Z axes of the first single image through the smart camera calibration control unit, and forms three-dimensional space modeling on the single image by connecting the image coordinate points of the single image captured by the single camera (S45).

1: formation of three-dimensional spatial modeling control device 100: image coordinate conversion algorithm engine
200: calibration point estimation control unit 300: three-dimensional camera calibration control unit
400: 3D spatial modeling forming unit

Claims (6)

A three-dimensional spatial modeling control device for reconstructing three-dimensional space modeling based on measured three-dimensional information after measuring the three-dimensional information of a specific space and a specific object with four or more calibration points that can be measured from a single camera and a single image, wherein:
The three-dimensional space modeling control device
An image coordinate transformation algorithm engine unit 100 for converting the three-dimensional spatial coordinates of a real space into two-dimensional image coordinates of a single image,
A calibration point estimation control unit 200 for forming a virtual world coordinate system of a line and a point in a single image and then setting a spatial correction point and a spatial origin to estimate a calibration point;
When four or more calibration points are estimated through the calibration point estimation control unit, the camera parameter values are calculated as an X-axis calibration point, a Y-axis calibration point, a Z-axis calibration point, a focal distance, a pan, and a tilt, A three-dimensional camera calibration control unit 300 for performing three-dimensional camera calibration,
And a three-dimensional space modeling forming unit 400 for forming three-dimensional space modeling on a specific space and a specific object of the single image corrected through the three-dimensional camera calibration control unit, wherein
The three-dimensional space modeling forming unit 400
A polygon pointer control unit 410 for setting a polygon pointer composed of lines and points along a bottom surface of a specific space to be depicted on an image plane of a single image,
A second real world coordinate point computing unit 420 for computing a real world coordinate point of the polygon pointer set through the polygon pointer control unit,
A Z-axis displacement control unit 430 for setting a displacement in the Z-axis of the real world coordinate on the real world coordinate points of the bottom surface of the second real world coordinate point computing unit,
An image coordinate point conversion unit 440 for converting the real-world coordinate points into image coordinate points through the three-dimensional camera calibration information,
And a three-dimensional space modeling control unit 450 which corrects the coordinate points set on the X-axis, Y-axis, and Z-axis of the first single image through the smart camera calibration control unit, and forms three-dimensional spatial modeling on the single image by connecting the image coordinate points of the single image captured by the single camera.
delete

delete

delete

A step (S10) of transforming the three-dimensional spatial coordinates of an actual specific space into the two-dimensional image coordinates of a single image through an image coordinate conversion algorithm engine unit, and setting internal parameters and external parameters for the three-dimensional camera calibration,
(S20) of forming a virtual space-type world coordinate on a single image through a calibration point estimation controller and then setting a spatial correction point and a spatial origin to estimate a correction point;
When more than four calibration points are estimated through the calibration point estimation control unit in the 3D camera calibration control unit, camera parameter values such as X axis calibration point, Y axis calibration point, Z axis calibration point, focal distance, pan, Tilt) to perform three-dimensional camera calibration (S30)
And a step (S40) of forming three-dimensional space modeling on a specific space and a specific object of the single image corrected through the three-dimensional camera calibration control unit in the three-dimensional space modeling forming unit; a method of forming three-dimensional space modeling through three-dimensional camera calibration in a single image, wherein
The step (S40) of forming the three-dimensional space modeling on the specific space and the specific object of the single image corrected through the three-dimensional camera calibration control unit in the three-dimensional space modeling forming unit
A step (S41) of setting, through a polygon pointer control unit, a polygon pointer consisting of lines and points along the bottom surface of the specific space to be rendered on the image plane of the single image;
A step (S42) of computing, in a second real-world coordinate point computing unit, the real-world coordinate points of the polygon pointer set through the polygon pointer control unit;
A step (S43) of setting, in a Z-axis displacement control unit, a Z-axis displacement of the real-world coordinates for the bottom-surface real-world coordinate points computed by the second real-world coordinate point computing unit;
A step (S44) of transforming, in an image coordinate point conversion unit, the real-world coordinate points into image coordinate points using the three-dimensional camera calibration information;
and a step (S45) in which the three-dimensional space modeling formation control unit corrects the coordinate points set in the first single image along the X-axis, Y-axis, and Z-axis through the smart camera calibration control unit, and forms three-dimensional spatial modeling by connecting the image coordinate points of the single image captured by a single camera.
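Step (S30) recovers camera parameter values such as the focal distance once enough calibration points are available. The patent does not specify the computation; one classical possibility, assuming square pixels, a known principal point, and two vanishing points of orthogonal world directions (these assumptions are not stated in the patent), recovers the focal distance as follows:

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, c):
    """Recover the focal distance from two vanishing points v1, v2 of
    orthogonal world directions, assuming square pixels and principal
    point c: orthogonality implies (v1 - c) . (v2 - c) = -f**2."""
    d = -np.dot(np.subtract(v1, c), np.subtract(v2, c))
    if d <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(d))
```

This is a sketch of one well-known calibration-from-vanishing-points identity, not necessarily the patented computation; pan and tilt would similarly follow from the rotation implied by the recovered vanishing directions.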
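Steps (S41) through (S45) amount to extruding a ground polygon along the world Z-axis and drawing the result in the image. A minimal sketch under the same pinhole assumptions (all names hypothetical; the projection stands in for the patent's three-dimensional camera calibration information):

```python
import numpy as np

def model_box(ground_poly, height, K, R, t):
    """S41/S42: take the polygon-pointer points along the bottom
    surface (real-world coordinates, Z = 0 here); S43: apply a Z-axis
    displacement to form the top face; S44: convert both faces to
    image coordinate points with the calibration; S45: return the
    point pairs to connect into the three-dimensional wireframe."""
    bottom = np.asarray(ground_poly, float)
    top = bottom + np.array([0.0, 0.0, height])       # S43: Z-axis displacement
    def to_image(pts):                                # S44: world -> image
        cam = pts @ R.T + t
        uv = cam @ K.T
        return uv[:, :2] / uv[:, 2:3]
    b, tp = to_image(bottom), to_image(top)
    n = len(b)                                        # S45: edges to connect
    edges = ([(b[i], b[(i + 1) % n]) for i in range(n)] +
             [(tp[i], tp[(i + 1) % n]) for i in range(n)] +
             [(b[i], tp[i]) for i in range(n)])
    return b, tp, edges
```

For an N-vertex polygon pointer this yields 3N edges (bottom ring, top ring, vertical connectors), which is the wireframe one would overlay on the corrected single image.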
KR1020150181436A 2015-12-18 2015-12-18 The apparatus and method of 3d modeling by 3d camera calibration KR101634283B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150181436A KR101634283B1 (en) 2015-12-18 2015-12-18 The apparatus and method of 3d modeling by 3d camera calibration

Publications (1)

Publication Number Publication Date
KR101634283B1 true KR101634283B1 (en) 2016-06-30

Family

ID=56352904

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150181436A KR101634283B1 (en) 2015-12-18 2015-12-18 The apparatus and method of 3d modeling by 3d camera calibration

Country Status (1)

Country Link
KR (1) KR101634283B1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100890625B1 (en) 2007-09-20 2009-03-27 조성윤 High-speed Weight In Motion
JP2009210331A (en) * 2008-03-03 2009-09-17 Toa Corp Camera calibration apparatus and camera calibration method
JP2010193458A (en) * 2009-02-19 2010-09-02 Sony Europe Ltd Image processing device, image processing system, and image processing method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101677972B1 (en) * 2016-07-26 2016-11-21 주식회사 싸인텔레콤 The apparatus and method of multi-lane car plate detection and recognition
KR20190076806A (en) * 2017-12-22 2019-07-02 한국기술교육대학교 산학협력단 Method for workspace modeling based on virtual wall using 3d scanner and system thereof
KR101998396B1 (en) 2017-12-22 2019-07-09 한국기술교육대학교 산학협력단 Method for workspace modeling based on virtual wall using 3d scanner and system thereof
US11453337B2 (en) 2019-11-19 2022-09-27 Samsung Electronics Co., Ltd. Method and apparatus with three-dimensional object display

Similar Documents

Publication Publication Date Title
CN107705333B (en) Space positioning method and device based on binocular camera
JP6363863B2 (en) Information processing apparatus and information processing method
JP6465789B2 (en) Program, apparatus and method for calculating internal parameters of depth camera
US10290119B2 (en) Multi view camera registration
JP5029618B2 (en) Three-dimensional shape measuring apparatus, method and program by pattern projection method
JP4814669B2 (en) 3D coordinate acquisition device
Orghidan et al. Camera calibration using two or three vanishing points
US20200334842A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
WO2013008804A1 (en) Measurement device and information processing device
TWI520576B (en) Method and system for converting 2d images to 3d images and computer-readable medium
US9418435B2 (en) Three-dimensional measurement method
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
US10186051B2 (en) Method and system for calibrating a velocimetry system
Mahdy et al. Projector calibration using passive stereo and triangulation
KR101634283B1 (en) The apparatus and method of 3d modeling by 3d camera calibration
CN108180888A (en) A kind of distance detection method based on rotating pick-up head
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
CN110415286A (en) A kind of outer ginseng scaling method of more flight time depth camera systems
CN111105467B (en) Image calibration method and device and electronic equipment
CN116188558A (en) Stereo photogrammetry method based on binocular vision
Deng et al. Registration of multiple rgbd cameras via local rigid transformations
JP2012023561A (en) Control system of stereo imaging device
BR112021008558A2 (en) apparatus, disparity estimation method, and computer program product
KR20050061115A (en) Apparatus and method for separating object motion from camera motion

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant