KR20160146567A - Method and device for detecting variable and fast moving object - Google Patents

Method and device for detecting variable and fast moving object Download PDF

Info

Publication number
KR20160146567A
Authority
KR
South Korea
Prior art keywords
dimensional
coordinate
image
matrix
fixed area
Prior art date
Application number
KR1020160072323A
Other languages
Korean (ko)
Inventor
신동한
Original Assignee
신동한
Priority date
Filing date
Publication date
Application filed by 신동한
Publication of KR20160146567A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Abstract

The present invention relates to a method and apparatus for detecting an object and, more particularly, to a method and apparatus capable of calculating three-dimensional coordinate information of an object that moves variably and rapidly within a specific region. To achieve this, the object detecting method according to the present invention, which uses a camera to detect a variably and rapidly moving object on a fixed region that does not move, includes: a fixed region extracting step of photographing the variably and rapidly moving object on the fixed region with the camera and extracting a fixed region image by separating the fixed region from the background in the acquired current image frame; an object two-dimensional coordinate acquiring step of extracting an object image by separating the object from the fixed region in the extracted fixed region image, and acquiring two-dimensional image coordinates (object position coordinates) corresponding to the position of the object from the extracted object image; a fixed region three-dimensional coordinate acquiring step of acquiring three-dimensional world coordinates of the fixed region from the extracted fixed region image; and an object three-dimensional coordinate acquiring step of acquiring three-dimensional world coordinates of the variably and rapidly moving object by combining the Z coordinate acquired in the fixed region three-dimensional coordinate acquiring step with the X and Y coordinates obtained in the object two-dimensional coordinate acquiring step.

Description

Field of the Invention [0001] The present invention relates to a method and an apparatus for detecting a variable and fast moving object.

The present invention relates to a method and apparatus for detecting an object and, more particularly, to an object detecting method and apparatus capable of calculating three-dimensional coordinate information of an object that moves variably and rapidly within a certain area.

Augmented Reality (AR) refers to a technique of superimposing a three-dimensional virtual object (image) on a real screen. To realize augmented reality, the object in the part of the real image screen where the virtual image is to be superimposed must be accurately recognized and detected.

Object detection means obtaining the three-dimensional coordinates (world coordinates) of an object, specifically, the three-dimensional coordinates of the object in the world coordinate system, from the image captured by the camera. The three-dimensional coordinates of an object can be obtained by accurately detecting the shape (outline) of the object (target) and continuously recognizing the detected shape.

Therefore, for a stationary or slowly moving object, the three-dimensional coordinates are easy to obtain because the object's shape can be continuously recognized. For an object that moves variably and rapidly, however, the shape cannot be continuously recognized, so the three-dimensional coordinates are difficult to obtain.

Of course, the shape of a fast-moving object can be recognized by using an expensive high-precision camera. However, since augmented reality is currently implemented on small or portable devices such as personal computers and smartphones, which carry only general-purpose small camera sensors, object detection is possible only for stationary or slowly moving objects.

Patent Document 1: KR 10-2010-0124571 (Apparatus and method for guiding information using augmented reality)

It is an object of the present invention to provide an object detecting method and apparatus capable of accurately detecting a rapidly moving object.

Another object of the present invention is to provide an object detecting method and apparatus that can accurately obtain the three-dimensional coordinates of an object even when the shape of the rapidly moving object cannot be continuously recognized with a general camera.

In order to achieve the above objects, the method of detecting an object that moves variably and rapidly in a fixed region using a camera according to the present invention comprises: a fixed region extracting step of extracting a fixed region image by separating the fixed region and the background from a current image frame obtained by photographing the object; an object two-dimensional coordinate acquiring step of extracting an object image by separating the fixed region and the object from the extracted fixed region image, and acquiring two-dimensional image coordinates (object position coordinates) corresponding to the position of the object; a fixed region three-dimensional coordinate acquiring step of acquiring three-dimensional world coordinates of the fixed region from the extracted fixed region image; and an object three-dimensional coordinate acquiring step of acquiring three-dimensional world coordinates of the variably and rapidly moving object by combining the Z coordinate obtained in the fixed region three-dimensional coordinate acquiring step with the X and Y coordinates obtained in the object two-dimensional coordinate acquiring step.

According to another aspect of the present invention, an apparatus for detecting an object that moves variably and rapidly in a fixed region that does not move, using a camera, comprises: a fixed region extracting unit for extracting a fixed region image by separating the fixed region and the background from a current image frame input from the camera; an object extracting unit for extracting an object image by separating the fixed region and the object from the extracted fixed region image, and acquiring two-dimensional image coordinates (object position coordinates) corresponding to the position of the object from the extracted object image; a fixed region two-dimensional coordinate acquiring unit for acquiring two-dimensional image coordinates (fixed region position coordinates) corresponding to the position of the fixed region and two-dimensional image coordinates (marker position coordinates) corresponding to the markers of the fixed region from the extracted fixed region image; and a three-dimensional coordinate acquiring unit for acquiring three-dimensional world coordinates of the fixed region using a three-dimensional transformation matrix calculated from matching pairs of the marker position coordinates and the corresponding three-dimensional world coordinates, and for acquiring three-dimensional world coordinates of the variably and rapidly moving object by combining the Z coordinate of the fixed region's three-dimensional world coordinates with the X and Y coordinates of the object position coordinates.

As described above, according to the present invention, the three-dimensional coordinates of an object that moves variably and rapidly can be accurately obtained even with a general camera.

That is, with a general camera, the shape of a fast-moving object cannot be accurately detected in the captured image, and even when the object is detected, its shape cannot be continuously recognized, so its three-dimensional coordinates cannot be obtained. The present invention, however, obtains two-dimensional coordinates from the fast-moving object, obtains three-dimensional coordinates from the non-moving fixed region, and obtains the three-dimensional coordinates of the rapidly moving object by combining the two-dimensional (X, Y) coordinates of the object with the Z coordinate of the fixed region.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an object to be detected according to the present invention.
FIG. 2 is an internal configuration diagram of an object detection apparatus according to the present invention.
FIG. 3 is a flowchart of an object detection method according to the present invention.
FIG. 4 is a flowchart illustrating the process of acquiring three-dimensional coordinates of a fixed area according to the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The configuration of the present invention and the operation and effect thereof will be clearly understood through the following detailed description.

In the following description, the same components are denoted by the same reference symbols wherever possible, even when they appear in different drawings. Detailed descriptions of known configurations are omitted where they would obscure the gist of the present invention.

FIG. 1 illustrates an object to be detected according to an embodiment of the present invention.

As shown in FIG. 1, an object to be detected includes a fixed area 1 and an object 2 moving within the fixed area 1.

In the embodiment of the present invention, the fixed area 1 has the shape of a stadium and the object 2 has the shape of a spinning top. The stadium 1 is fixed, and one or more tops 2 rotate and move rapidly within the stadium 1.

In the embodiment of the present invention, a top 2 that moves rapidly in the stadium 1 is taken as an example. However, the present invention is not limited thereto, and various types of fixed areas 1 and objects 2 are possible.

According to the present invention, when a camera (not shown) photographs a top that rotates and moves rapidly within the stadium 1, the object detecting device detects the three-dimensional coordinates of the top 2 and superimposes a virtual image on the portion corresponding to the detected three-dimensional coordinates, thereby realizing augmented reality.

How the three-dimensional coordinates of an object that moves variably and rapidly, like the top, can be calculated from the image captured by the camera will now be described in detail with reference to FIGS. 2 and 3.

FIG. 2 shows the internal configuration of an object detecting apparatus according to the present invention.

As shown in FIG. 2, the object detecting apparatus includes a fixed area extracting unit 10, an object extracting unit 20, a fixed area two-dimensional coordinate obtaining unit 30, a three-dimensional coordinate obtaining unit 40, and an image compositing unit 50.

The object detecting apparatus may be implemented in a terminal device such as a personal computer (PC), smartphone, tablet PC, or notebook computer, or as a dedicated augmented reality terminal. Each element of the object detection apparatus may be implemented in software, or parts may be implemented in hardware. The object detecting apparatus according to the present invention uses a general camera, not a high-speed camera. A typical camera has a frame rate of about 30 frames per second, and the frame rate may be lower depending on the ambient illumination.

The fixed area extracting unit 10 extracts the fixed area image by separating the fixed area and the background from the current image frame received from the general camera. That is, the fixed area extracting unit 10 extracts the fixed area image by discarding the background outside the fixed area in the image frame, based on the shape and size of the fixed area.

That is, the RGB image input from the camera is binarized, and the area exhibiting the characteristics of the stadium is recognized as the stadium, from which the fixed area image is extracted. The area exhibiting the characteristics of the stadium refers to the part matching the shape and size of the stadium set in advance; since the stadium is manufactured beforehand, its shape and size can be preset.
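To make this extraction step concrete, the following is a minimal OpenCV sketch, not the patent's actual implementation: it binarizes the input frame with Otsu thresholding and keeps the contour whose area falls inside preset bounds standing in for the stadium's known shape and size. The threshold method and the area bounds are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_fixed_area(frame_bgr, min_area=5_000, max_area=200_000):
    """Binarize the frame and isolate the region whose preset
    size characteristics match the stadium (fixed area).
    Threshold method and area bounds are illustrative assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if min_area < cv2.contourArea(c) < max_area:  # size check
            mask = np.zeros_like(gray)
            cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
            # Discard everything outside the fixed area (the background).
            return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    return None  # no region matched the preset stadium characteristics
```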

The object extracting unit 20 extracts the object image by separating the fixed area and the object from the fixed area image extracted by the fixed area extracting unit 10. That is, the object extracting unit 20 extracts the object image by discarding everything other than the object from the fixed region image, based on the shape and size of the predetermined object.

Likewise, the extracted fixed region image is binarized, and the object image is extracted by recognizing the area exhibiting the characteristics of the object as the object. The area exhibiting the characteristics of the object refers to the part matching the shape and size of the top formed in advance; since the top is manufactured beforehand, its shape and size can be preset.

Specifically, the object extracting unit 20 includes an object extraction module 22, an object comparison module 24, and an object tracking module 26; it extracts the object from the fixed area image, tracks it, and acquires the two-dimensional image coordinates (object position coordinates) of the object.

The object extraction module 22 receives the fixed region image extracted by the fixed region extraction unit 10 and extracts the object image based on the shape and size of the predetermined object in the extracted fixed region image.

The object comparison module 24 obtains the position and color histogram of the object from the object image extracted by the object extraction module 22 and compares them with the position and color histogram of the object image extracted from the previous image frame.

The object tracking module 26 recognizes that the same object has moved when the position and color histogram of the object image in the current image frame are similar to those in the previous image frame, and recognizes a different object when the position or color histogram differs. It acquires the two-dimensional image coordinates (object position coordinates) corresponding to the position of the object while tracking the object's movement.
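A minimal sketch of this comparison and tracking decision follows, assuming hue histograms compared by correlation; the patent does not specify the histogram type or the similarity thresholds, so those are assumptions here.

```python
import cv2
import numpy as np

def object_histogram(object_img_bgr):
    """Hue histogram of the extracted object image (illustrative choice)."""
    hsv = cv2.cvtColor(object_img_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def is_same_object(prev_pos, prev_hist, cur_pos, cur_hist,
                   max_jump=80.0, min_similarity=0.8):
    """Decide whether the object in the current frame is the one tracked
    in the previous frame. Both thresholds are illustrative assumptions."""
    moved = np.hypot(cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1])
    similarity = cv2.compareHist(prev_hist, cur_hist, cv2.HISTCMP_CORREL)
    return moved < max_jump and similarity > min_similarity
```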

The fixed area two-dimensional coordinate acquiring unit 30 acquires the two-dimensional image coordinates (fixed area position coordinates) corresponding to the position of the fixed area from the fixed area image extracted by the fixed area extracting unit 10, and acquires the two-dimensional image coordinates (marker position coordinates) corresponding to the markers by recognizing the markers 3 attached to the fixed area. Here, the marker position coordinates could be treated as part of the fixed region position coordinates, but they are shown separately for convenience of explanation. That is, the fixed area two-dimensional coordinate obtaining unit 30 obtains both the fixed area position coordinates and the marker position coordinates from the fixed area image.

The three-dimensional coordinate acquiring unit 40 acquires the three-dimensional coordinates of the object using the fixed area position coordinates and marker position coordinates received from the fixed area two-dimensional coordinate obtaining unit 30 and the object position coordinates received from the object extracting unit 20. That is, the three-dimensional coordinate acquiring unit 40 acquires the three-dimensional world coordinates of the fixed area using a three-dimensional transformation matrix, and obtains the three-dimensional world coordinates of the variably and rapidly moving object by combining the Z coordinate of the fixed area with the object position coordinates.

Specifically, the three-dimensional coordinate obtaining unit 40 includes a three-dimensional transformation matrix calculation module 42, a fixed area three-dimensional coordinate conversion module 44, an object three-dimensional coordinate calculation module 46, and the like.

The 3D transformation matrix calculation module 42 calculates a three-dimensional transformation matrix using the intrinsic (internal) and extrinsic (external) parameters of the camera. The three-dimensional transformation matrix is the matrix product of a camera matrix and a rotation/translation matrix: the camera matrix is obtained from the internal parameters, and the rotation/translation matrix from the external parameters.

The internal parameters of the camera are values intrinsic to the camera itself, such as the focal length and the principal point. They are usually given, or can be obtained using a calibration tool.

The external parameters of the camera describe the transformation between the camera image coordinate system and the world coordinate system, and can be expressed as a rotation and a translation between the two coordinate systems. That is, the external parameters represent the position and attitude of the camera and are defined as a rotation/translation matrix.

The three-dimensional transformation matrix calculation module 42 can obtain the rotation / translation matrix in two ways.

First, the external parameters can be obtained using matching pairs of the two-dimensional image coordinates (marker position coordinates) of the fixed area markers, obtained through the fixed area two-dimensional coordinate obtaining unit 30, and the three-dimensional world coordinates corresponding to those marker positions.

Second, the rotation matrix is obtained using the tilt/roll/pan angles of the camera, and the translation matrix is obtained from the matching pairs (the marker position coordinates of the fixed region and the corresponding three-dimensional world coordinates) with the Z coordinate set to zero. When the external parameters are calculated using the camera's tilt information, fewer unknowns remain than in the first method, so the amount of computation is reduced. However, since tilt information is not available from a general PC webcam, the second method can be applied only to mobile device cameras such as those of mobile phones.

The fixed area three-dimensional coordinate conversion module 44 converts the fixed area position coordinates input from the fixed area two-dimensional coordinate acquisition unit 30 into three-dimensional world coordinates using the three-dimensional transformation matrix calculated by the three-dimensional transformation matrix calculation module 42.

The object three-dimensional coordinate calculation module 46 combines the Z coordinate input from the fixed area three-dimensional coordinate conversion module 44 with the X and Y coordinates of the object input from the object extraction unit 20 to obtain the three-dimensional world coordinates of the object.

The image compositing unit 50 superimposes a virtual image on the portion of the current image frame corresponding to the three-dimensional world coordinates of the object. That is, the image compositing unit 50 receives the three-dimensional world coordinate information of the object from the three-dimensional coordinate acquisition unit 40, reads the virtual image corresponding to the object from a memory (not shown), and realizes augmented reality by superimposing that virtual image on the corresponding part.
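As a sketch of the compositing step, the snippet below alpha-blends a virtual sprite onto the frame at a given pixel position; mapping the object's world coordinates to that pixel position is assumed to reuse the projection of Equation (1) described later. The patent does not prescribe a blending method, so this is an assumption.

```python
import numpy as np

def overlay_virtual_image(frame_bgr, sprite_bgra, top_left_xy):
    """Alpha-blend a virtual image (BGRA sprite) onto the frame at the
    pixel position corresponding to the object's 3D world coordinates.
    Assumes the sprite fits entirely inside the frame."""
    x, y = top_left_xy
    h, w = sprite_bgra.shape[:2]
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    rgb = sprite_bgra[:, :, :3].astype(np.float32)
    alpha = sprite_bgra[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * roi
    frame_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame_bgr
```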

FIG. 3 shows a flowchart of an object detection method according to the present invention.

The object detection process shown in FIG. 3 may be performed in a terminal device equipped with a camera and providing an augmented reality service, such as a personal computer (PC), smartphone, tablet PC, or notebook computer. Specifically, the object detection process may be performed by a microprocessor of the terminal device executing object detection software, or may be implemented in hardware such as a dedicated chip.

Referring to FIG. 3, when the camera photographs an object that moves variably and rapidly in the fixed area, an image is generated every frame (for example, 30 frames per second). The fixed area image is then extracted by separating the fixed area from the background according to the shape and size of the predetermined fixed area in the current image frame acquired from the camera (S10).

Next, an object image is extracted by separating the fixed area and the object according to the shape and size of the preset object in the extracted fixed area image (S20).

When the object image is extracted from the fixed region image, the two-dimensional coordinates of the object are obtained from the object image (S30). The two-dimensional coordinates of the object are the two-dimensional image coordinates corresponding to the position of the object, that is, the object position coordinates. They are obtained from the object image projected onto the camera image coordinate system, finally through the object image extraction, comparison, and tracking process described above.

Then, the three-dimensional coordinates of the fixed area are obtained from the previously extracted fixed area image (S40). The three-dimensional coordinates of the fixed area are coordinates in the world coordinate system, that is, world coordinates; the acquisition process is described in detail with reference to FIG. 4.

First, the two-dimensional coordinates of the fixed area and of the fixed area markers are obtained from the previously extracted fixed area image (S100). The two-dimensional coordinates of the fixed area are the two-dimensional image coordinates corresponding to the position of the fixed area, that is, the fixed area position coordinates; the two-dimensional coordinates of the fixed area markers are the two-dimensional image coordinates corresponding to the markers attached to the fixed area, that is, the marker position coordinates.

Next, a three-dimensional transformation matrix is calculated (S102). The three-dimensional transformation matrix is expressed by Equation (1).

$$ s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = A \, [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1} $$

Here, s is a scale constant, A is the camera matrix, [R|t] is the rotation/translation matrix, (x, y) are two-dimensional image coordinates, and (X, Y, Z) are three-dimensional world coordinates. In the three-dimensional transformation matrix, the scale s is the ratio of coordinate sizes between the image coordinate system and the world coordinate system.

The camera matrix A can be obtained using the given internal parameters. The internal parameters include the focal lengths (f_x, f_y) and the principal point (c_x, c_y).

The rotation/translation matrix [R|t] can be obtained in two ways using the external parameters. The external parameters include the rotation (R_x, R_y, R_z) and the translation (t_x, t_y, t_z) for each axis.
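In code, Equation (1) and the parameters just described can be written out directly. The following sketch builds the camera matrix A from assumed intrinsic values and projects a world point; all numbers are placeholders, with real values coming from calibration.

```python
import numpy as np

# Illustrative intrinsic values; real values come from calibration.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
A = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])

def project(A, R, t, world_xyz):
    """Equation (1): s * [x, y, 1]^T = A [R|t] [X, Y, Z, 1]^T."""
    Rt = np.hstack([R, t.reshape(3, 1)])      # 3x4 [R|t] matrix
    p = A @ Rt @ np.append(world_xyz, 1.0)    # homogeneous projection
    return p[:2] / p[2]                       # divide out the scale s
```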

Method 1

The two-dimensional coordinates of the camera image coordinate system are matched with the three-dimensional coordinates of the world coordinate system, and the rotation/translation matrix is obtained using the RANSAC/LMEDS algorithm. That is, the rotation/translation matrix is obtained by applying the matching pairs extracted as samples (pairs of marker position coordinates and the corresponding three-dimensional world coordinates) to Equation (1). The matching pairs required to obtain the rotation/translation matrix can be obtained as follows.

That is, markers 3 are attached to the corners of the stadium, which is the fixed area (see FIG. 1), and the world coordinates of each marker are set in advance. When an image of the stadium is acquired by the camera, the image coordinates of the markers, that is, the marker position coordinates, are obtained; inserting the matching pairs of the preset world coordinates and the marker position coordinates into Equation (1) then yields the rotation/translation matrix. This is equivalent to acquiring the position and attitude information of the camera.
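In OpenCV terms, Method 1 amounts to solving a PnP problem over the marker matching pairs; the text mentions RANSAC/LMEDS, and a hedged sketch using cv2.solvePnPRansac might look as follows. The marker coordinates and intrinsics below are placeholder values, not values from the patent.

```python
import cv2
import numpy as np

# Placeholder camera matrix (intrinsics would come from calibration).
A = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

# Matching pairs: preset world coordinates of four corner markers on the
# Z = 0 stadium plane, and their detected image coordinates (placeholders).
world_pts = np.array([[0., 0., 0.], [1., 0., 0.],
                      [1., 1., 0.], [0., 1., 0.]])
image_pts = np.array([[102., 388.], [517., 392.],
                      [509., 123.], [111., 119.]])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(world_pts, image_pts, A, None)
R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix R
# [R | tvec] is the rotation/translation (external-parameter) matrix.
```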

Method 2

The rotation matrix R is obtained using the tilt/roll/pan angles of the camera. The translation matrix t is obtained by matching the two-dimensional coordinates of the camera image coordinate system with the three-dimensional coordinates of the world coordinate system, with the Z component set to 0, that is, t_z = 0. This can be done by setting the Z axis to 0 in the world coordinates of the markers attached to the stadium.

When t_z is set to 0, it is assumed that the object moving in the fixed region always lies at Z = 0 in the world coordinate system.

The rotation matrix R is given by Equation (2). The per-axis rotation matrices R_x, R_y, and R_z are obtained from the tilt, roll, and pan angles of the camera, and the overall rotation matrix is calculated by multiplying the per-axis rotation matrices together.

$$ R = R_z \, R_y \, R_x \tag{2} $$

Here, R_x, R_y, and R_z are the rotations about the x axis (tilt angle θ), the y axis (roll angle φ), and the z axis (pan angle ψ), respectively, expressed as Equations (3) through (5).

$$ R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \tag{3} $$

$$ R_y = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \tag{4} $$

$$ R_z = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{5} $$
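A sketch of Method 2's rotation construction, composing Equations (3) through (5), follows. The composition order R = R_z R_y R_x matches Equation (2) as reconstructed above and is one common convention, i.e., an assumption rather than something the original specifies.

```python
import numpy as np

def rotation_from_tilt_roll_pan(tilt, roll, pan):
    """Compose R from the per-axis rotations of Equations (2)-(5).
    Angles in radians; tilt about x, roll about y, pan about z."""
    ct, st = np.cos(tilt), np.sin(tilt)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pan), np.sin(pan)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx   # composition order is an assumption
```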

When the three-dimensional transformation matrix has been calculated using Method 1 or Method 2, the three-dimensional coordinates of the fixed area are obtained by substituting the two-dimensional image coordinates of the fixed area (fixed area position coordinates) into Equation (1).
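One concrete way to read this substitution: since the fixed area's markers lie on the Z = 0 world plane, Equation (1) restricted to that plane becomes an invertible 3x3 homography, and inverting it recovers (X, Y, 0) from image coordinates. The plane restriction is an interpretive assumption; the sketch below illustrates it.

```python
import numpy as np

def image_to_world_on_plane(A, R, t, image_xy):
    """Invert Equation (1) for points on the Z = 0 world plane.
    H = A [r1 r2 t] maps (X, Y, 1) -> s(x, y, 1); t has shape (3,)."""
    H = A @ np.column_stack([R[:, 0], R[:, 1], t])
    XY1 = np.linalg.inv(H) @ np.array([image_xy[0], image_xy[1], 1.0])
    X, Y = XY1[:2] / XY1[2]
    return np.array([X, Y, 0.0])  # 3D world coordinates on the fixed plane
```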

As described above, once the two-dimensional coordinates (position coordinates) of the object and the three-dimensional coordinates of the fixed area are obtained, the three-dimensional coordinates of the object are finally obtained (S50). That is, the three-dimensional world coordinates of the object are obtained by combining the X and Y coordinates of the two-dimensional image coordinates corresponding to the position of the object with the Z coordinate of the three-dimensional world coordinates of the fixed area.
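The combination step (S50) then reduces to assembling a coordinate vector. The sketch below assumes the object's X and Y are already expressed in the same frame as the fixed region's Z coordinate; the values shown are hypothetical.

```python
import numpy as np

def combine_object_coords(obj_xy, fixed_region_z):
    """Step S50: combine the object's (X, Y) with the fixed region's Z
    to form the object's 3D world coordinates."""
    return np.array([obj_xy[0], obj_xy[1], fixed_region_z])

obj_world = combine_object_coords((12.5, 30.2), 0.0)  # hypothetical values
```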

When the three-dimensional world coordinates of the object are obtained, a realistic augmented reality can be realized through an image compositing process that displays a virtual image superimposed on the part of the current image frame corresponding to those coordinates.

To obtain the three-dimensional transformation matrix, the world coordinates of the markers or of feature points of the object must be matched with image coordinates, so the coordinates of the markers or feature points must be detected. However, if the object moves at variable speed, an ordinary camera cannot detect the image coordinates of the markers or feature points: when the object moves quickly, the shape of a marker disappears, the points or outline of the feature points are not accurately detected, and even when a shape is detected, it is not maintained.

For this reason, for an object moving at variable speed, matching between world coordinates and image coordinates cannot be performed; accordingly, the three-dimensional transformation matrix cannot be obtained in real time, and the three-dimensional coordinate information cannot be obtained.

According to the method of the present invention, however, even though the image of an object moving variably and rapidly in the fixed area is blurred when photographed by a camera, so that the image coordinates of a matching pair cannot be detected, the two-dimensional image coordinates (X, Y) corresponding to the position of the object can still be obtained; and because the fixed region does not move, its three-dimensional coordinates can be obtained. By combining these, the three-dimensional coordinates of the object moving in the fixed region are acquired.

The foregoing description is merely illustrative of the present invention, and various modifications may be made by those skilled in the art without departing from the spirit of the present invention.

Accordingly, the embodiments disclosed in the specification of the present invention are not intended to limit the present invention. The scope of the present invention should be construed according to the following claims, and it is to be understood that all the techniques within the scope of the claims are also included in the scope of the present invention.

1: fixed region 2: object
10: fixed area extracting unit 20: object extracting unit
22: object extraction module 24: object comparison module
26: Object tracking module 30: Fixed area two-dimensional coordinate obtaining part
40: Three-dimensional coordinate acquisition unit 42: Three-dimensional conversion matrix calculation module
44: fixed area three-dimensional coordinate conversion module 46: object three-dimensional coordinate calculation module
50: image compositing unit

Claims (11)

A method for detecting an object that moves variably and rapidly in a fixed region that does not move, using a camera, the method comprising:
a fixed area extracting step of extracting a fixed area image by separating the fixed area and the background from a current image frame obtained by photographing the variably and rapidly moving object in the fixed area;
an object two-dimensional coordinate obtaining step of extracting an object image by separating the fixed area and the object from the extracted fixed area image, and obtaining two-dimensional image coordinates (object position coordinates) corresponding to the position of the object from the extracted object image;
a fixed area three-dimensional coordinate obtaining step of acquiring three-dimensional world coordinates of the fixed area from the extracted fixed area image; and
an object three-dimensional coordinate obtaining step of acquiring three-dimensional world coordinates of the variably moving object by combining the Z coordinate obtained in the fixed area three-dimensional coordinate obtaining step with the X and Y coordinates obtained in the object two-dimensional coordinate obtaining step.
The method according to claim 1,
Wherein the fixed area three-dimensional coordinate obtaining step includes:
obtaining, from the extracted fixed area image, two-dimensional image coordinates (fixed area position coordinates) corresponding to the position of the fixed area and two-dimensional image coordinates (marker position coordinates) corresponding to the markers of the fixed area;
calculating a three-dimensional transformation matrix using internal parameters and external parameters of the camera; and
converting the fixed area position coordinates into three-dimensional world coordinates using the three-dimensional transformation matrix.
3. The method of claim 2,
Wherein, in calculating the three-dimensional transformation matrix, the three-dimensional transformation matrix is the matrix product of a camera matrix and a rotation/translation matrix,
the camera matrix being obtained using the internal parameters and the rotation/translation matrix being calculated using the external parameters,
and wherein the rotation matrix is obtained using the tilt/roll/pan angles of the camera, and the translation matrix is obtained using a predetermined matching pair (a matching pair of the marker position coordinates and the corresponding three-dimensional world coordinates) with the Z coordinate set to zero.
4. The method of claim 2,
Wherein, in calculating the three-dimensional transformation matrix, the three-dimensional transformation matrix is the matrix product of a camera matrix and a rotation/translation matrix,
the camera matrix being obtained using the internal parameters and the rotation/translation matrix being calculated using the external parameters,
and wherein the rotation/translation matrix is obtained using a predetermined matching pair (a matching pair of the marker position coordinates and the corresponding three-dimensional world coordinates).
The method according to claim 1,
Further comprising an image compositing step of superimposing a virtual image on a portion of the current image frame corresponding to the three-dimensional world coordinates of the object.
An apparatus for detecting an object that moves variably and rapidly in a fixed region that does not move, using a camera, the apparatus comprising:
a fixed area extraction unit for extracting a fixed area image by separating the fixed area and the background from a current image frame received from the camera;
an object extraction unit for extracting an object image by separating the fixed area and the object from the extracted fixed area image and obtaining two-dimensional image coordinates (object position coordinates) corresponding to the position of the object from the extracted object image;
a fixed area two-dimensional coordinate acquiring unit for acquiring, from the extracted fixed area image, two-dimensional image coordinates (fixed area position coordinates) corresponding to the position of the fixed area and two-dimensional image coordinates (marker position coordinates) corresponding to the markers of the fixed area; and
a three-dimensional coordinate obtaining unit for acquiring three-dimensional world coordinates of the fixed region using a three-dimensional transformation matrix calculated from matching pairs of the marker position coordinates and the corresponding three-dimensional world coordinates, and for obtaining three-dimensional world coordinates of the variably and rapidly moving object by combining the Z coordinate of those world coordinates with the X and Y coordinates of the object position coordinates.
The apparatus according to claim 6,
Wherein the object extraction unit comprises: an object extraction module for extracting an object image based on the shape and size of a predetermined object in the extracted fixed area image;
an object comparison module for obtaining the position and color histogram of the object from the extracted object image and comparing them with the position and color histogram of the object image extracted from the previous image frame; and
an object tracking module which recognizes that the same object has moved when the position and color histogram of the object image in the current image frame are similar to those in the previous image frame, recognizes a different object when the position or color histogram differs, and acquires two-dimensional image coordinates (object position coordinates) corresponding to the position of the object while tracking the object.
The apparatus according to claim 6,
Wherein the three-dimensional coordinate obtaining unit comprises: a three-dimensional transformation matrix calculation module for calculating a three-dimensional transformation matrix using internal parameters and external parameters of the camera;
a fixed area three-dimensional coordinate conversion module for converting the fixed area position coordinates input from the fixed area two-dimensional coordinate obtaining unit into three-dimensional world coordinates through the three-dimensional transformation matrix; and
an object three-dimensional coordinate calculation module for obtaining the three-dimensional world coordinates of the variably moving object by combining the Z coordinate input from the fixed area three-dimensional coordinate conversion module with the X and Y coordinates of the object input from the object extraction unit.
9. The apparatus of claim 8,
Wherein the three-dimensional transformation matrix is the matrix product of a camera matrix and a rotation/translation matrix, the three-dimensional transformation matrix calculation module obtaining the camera matrix using the internal parameters and the rotation/translation matrix using the external parameters,
and wherein the rotation matrix is obtained using the tilt/roll/pan angles of the camera, and the translation matrix is obtained using a predetermined matching pair (a matching pair of the marker position coordinates and the corresponding three-dimensional world coordinates) with the Z coordinate set to zero.
10. The apparatus of claim 8,
Wherein the three-dimensional transformation matrix is the matrix product of a camera matrix and a rotation/translation matrix, the three-dimensional transformation matrix calculation module obtaining the camera matrix using the internal parameters and the rotation/translation matrix using the external parameters,
and wherein the rotation/translation matrix is obtained using a predetermined matching pair (a matching pair of the marker position coordinates and the corresponding three-dimensional world coordinates).
The apparatus according to claim 6,
Further comprising an image compositing unit for superimposing a virtual image on a portion of the current image frame corresponding to the three-dimensional world coordinates of the object.
KR1020160072323A 2015-06-12 2016-06-10 Method and device for detecting variable and fast moving object KR20160146567A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20150083393 2015-06-12
KR1020150083393 2015-06-12

Publications (1)

Publication Number Publication Date
KR20160146567A (en) 2016-12-21

Family

ID=57734696

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160072323A KR20160146567A (en) 2015-06-12 2016-06-10 Method and device for detecting variable and fast moving object

Country Status (1)

Country Link
KR (1) KR20160146567A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102029850B1 (en) * 2019-03-28 2019-10-08 세종대학교 산학협력단 Object detecting apparatus using camera and lidar sensor and method thereof
KR102041320B1 (en) * 2019-10-01 2019-11-06 주식회사 아이디어캐슬 Precision-Location Based Optimized 3D Map Delivery System
WO2021167365A3 (en) * 2020-02-21 2021-10-14 삼성전자 주식회사 Electronic device and method for tracking movement of object
CN113923420A (en) * 2021-11-18 2022-01-11 京东方科技集团股份有限公司 Area adjustment method and device, camera and storage medium
WO2022197036A1 (en) * 2021-03-15 2022-09-22 삼성전자 주식회사 Measurement method using ar, and electronic device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100124571A (en) 2009-05-19 2010-11-29 한양대학교 산학협력단 Apparatus and method for guiding information using augmented reality



Legal Events

Date Code Title Description
A201 Request for examination
A302 Request for accelerated examination
E902 Notification of reason for refusal
E601 Decision to refuse application