KR20090000777A - Augmented reality system using tangible object and method for providing augmented reality - Google Patents

Augmented reality system using tangible object and method for providing augmented reality

Info

Publication number
KR20090000777A
Authority
KR
South Korea
Prior art keywords
tracking information
image
sensory object
augmented reality
color
Prior art date
Application number
KR1020070032538A
Other languages
Korean (ko)
Inventor
Youngmin Park (박영민)
Woontack Woo (우운택)
Original Assignee
Gwangju Institute of Science and Technology (GIST)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gwangju Institute of Science and Technology (GIST)
Priority to KR1020070032538A
Publication of KR20090000777A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127 Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00249 Connection or combination of a still picture apparatus with a photographic apparatus, e.g. a photographic printer or a projector

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality system using a tangible object is provided that can track, without a magnetic tracker, the tangible object serving as the user's interface for controlling digital information in a virtual environment. The system comprises the following units: a projector (10) projects a background image onto the surface of a table (40); a first camera (20) photographs the table surface and produces a first image; a second camera (30) photographs the tangible object (60) and produces a second image.

Description

Augmented Reality System Using Tangible Object And Method For Providing Augmented Reality

FIG. 1 is a block diagram showing the configuration of an augmented reality system using a sensory object according to an embodiment of the present invention.

FIG. 2 is a block diagram illustrating a detailed configuration of an apparatus for providing augmented reality of the augmented reality system shown in FIG. 1.

FIG. 3 is a flowchart illustrating a process of providing augmented reality according to an embodiment of the present invention.

FIG. 4 is a flowchart illustrating in detail the process of extracting the two-dimensional tracking information of the sensory object shown in FIG. 3.

Explanation of symbols on the main parts of the drawings

110: 2D tracking information extractor 120: 3D tracking information extractor

130: tracking information matching unit 141: color correction module

143: coordinate transformation module 150: image synthesis unit

160: storage unit

The present invention relates to an augmented reality system and, more particularly, to an augmented reality system and an augmented reality providing method using a sensory object (tangible object) that can track the sensory object located on a table in a table-based augmented reality system.

Augmented reality is a fusion of physical and electronic spaces that augments the real world by mixing virtual images, such as computer graphics, with real-world images, thereby adding virtual information to the user's perception.

A tangible object is a metaphor for how a user interacts in a virtual environment. It provides the user with an intuitive interface for controlling digital information in the virtual environment, enabling easy interaction between the user and the virtual content.

Conventional table-based augmented reality systems include a method using a magnetic tracker and a method using an infrared camera on the table floor.

The method of tracking an object using the magnetic tracker employs a source containing a coil that emits an electromagnetic field and a plurality of sensors that sense the field; from the sensed field, the position and orientation relative to the source are calculated.

However, in the method using the magnetic tracker, the source and the plurality of sensors are connected by wired cables to the system that calculates movement and orientation, so an ordinary object cannot be used as the tracked object, and the wired cables make the system inconvenient to use.

In addition, the method of tracking an object using the infrared camera has the disadvantage of a complicated system configuration, since it requires an infrared illuminator, a camera fitted with a visible-light filter, and an infrared reflector attached to the object.

Accordingly, it is a first object of the present invention to provide an augmented reality system using a sensory object that can track a sensory object with a simple configuration.

In addition, a second object of the present invention is to provide a method for providing augmented reality that can accurately track the sensory object.

An augmented reality system using a sensory object according to one aspect of the present invention, directed to the first object above, is a table-based augmented reality system that includes a projector for projecting a background image onto the bottom surface of a table, a first camera for photographing the bottom surface of the table to provide a first image, and a second camera for photographing the sensory object located on top of the table to provide a second image. The system includes an augmented reality providing apparatus that corrects the color error between the original image of the background image and the first image, extracts 2D tracking information and 3D tracking information of the sensory object based on the first image and the second image, and matches the 2D tracking information and the 3D tracking information to generate tracking information of the sensory object.

The augmented reality providing apparatus may generate the tracking information of the sensory object based on the 3D tracking information extracted from the second image when the sensory object does not contact the surface of the table.

The augmented reality providing apparatus may project eight colors, including red, green, and blue, onto the bottom surface of the table as the original background image and generate a color error table by matching them against the color values of the first image in which the first camera captures the projected original background image.

The augmented reality providing apparatus may include: a color correction module configured to generate a color error table for correcting color errors between the original background image and the first image photographed by the first camera; a 2D tracking information extractor for recognizing a first marker installed on the bottom of the sensory object based on the first image and extracting 2D tracking information including at least one of the position and orientation of the sensory object located on the surface of the table; a 3D tracking information extractor for recognizing a second marker installed on the upper surface of the sensory object based on the second image and extracting 3D tracking information including at least one of 3D position information, orientation information, and attitude information of the sensory object; and a tracking information matching unit for matching the 2D tracking information and the 3D tracking information to generate the tracking information of the sensory object.

The 2D tracking information extractor may receive the first image including the image of the first marker, correct the color of the original background image, compare the corrected background image with the color values of the first image to estimate an area with a large color-value error displacement as the area where the sensory object is located, recognize the first marker in the estimated area, and extract the 2D tracking information of the sensory object based on the recognized first marker.

The tracking information matching unit may compare at least one item of pattern information, such as the shape and size of the first marker and the second marker, with a predefined reference pattern to determine pattern similarity, and generate the tracking information of the sensory object by assigning different weights to the 2D tracking information and the 3D tracking information based on the similarity and matching them.

In addition, an augmented reality providing method according to an aspect of the present invention, directed to the second object above, includes generating a color error table for correcting the color error between an original background image and a first image photographing the table bottom surface, extracting 2D tracking information and 3D tracking information of the sensory object, and matching the extracted 2D tracking information and 3D tracking information to generate tracking information of the sensory object.

Generating the color error table may include projecting eight colors, including red, green, and blue, onto the bottom surface of the table as the original background image, and matching them against the color values of the first image in which the first camera photographs the projected original background image.

Extracting the 2D tracking information and the 3D tracking information of the sensory object may include receiving a first image including an image of a first marker installed on the bottom of the sensory object, correcting the original background image using the color error table and comparing the corrected background image with the color values of the first image to estimate a region with a large color-value error displacement as the region where the sensory object is located, setting the estimated region to a uniform color, and recognizing the first marker in the region set to the uniform color to extract the 2D tracking information based on the recognized first marker.

Matching the extracted 2D tracking information and 3D tracking information may include determining pattern similarity by comparing at least one item of pattern information, among the shape and size of the first marker installed on the bottom of the sensory object and a second marker installed on its upper surface, with a predefined reference pattern, and matching the tracking information by assigning different weights to the 2D tracking information and the 3D tracking information based on the similarity. When the sensory object does not contact the surface of the table, the tracking information of the sensory object may be generated based on the extracted 3D tracking information alone.

As the present invention allows for various changes and numerous embodiments, particular embodiments will be illustrated in the drawings and described in detail in the written description.

However, this is not intended to limit the present invention to the specific embodiments; the invention should be understood to include all modifications, equivalents, and substitutes falling within its spirit and scope.

Terms such as "first" and "second" may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related items or any one of those items.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same reference numerals are used for the same components in the drawings, and duplicate descriptions of the same components are omitted.

FIG. 1 is a block diagram showing the configuration of an augmented reality system using a sensory object according to an embodiment of the present invention.

Referring to FIG. 1, an augmented reality system using a sensory object according to an embodiment of the present invention includes a projector 10, a first camera 20, a second camera 30, a table 40, a reflection mirror 50, a sensory object 60, a display device 70, and an augmented reality providing apparatus 100.

The projector 10 projects the background image provided from the augmented reality providing apparatus 100 to the reflection mirror 50 so that the background image is reflected by the reflection mirror 50 to be projected onto the bottom surface of the table 40. The background image projected by the projector 10 may be, for example, a map for virtual navigation.

Through the reflection mirror 50, the first camera 20 photographs the background image projected onto the table 40 and the first marker 61 attached to the bottom of the sensory object 60 in contact with the upper surface of the table 40, as transmitted through the translucent table, and provides the photographed first image to the augmented reality providing apparatus 100.

The second camera 30 photographs the upper surface of the table 40 and the sensory object 60 positioned on it, including the marker attached to the upper surface of the sensory object 60, and transfers the photographed second image to the augmented reality providing apparatus 100.

General-purpose cameras may be used as the first camera 20 and the second camera 30.

The table 40 is formed of a translucent material so that the background image projected by the projector 10 is transmitted through it and visible on both the bottom and top of the table 40; the user can move the sensory object 60 while watching the transmitted background image.

The reflection mirror 50 is installed below the table 40 at a predetermined angle. It reflects the background image projected by the projector 10 onto the bottom surface of the table 40, and it reflects the background image projected on the bottom surface together with the bottom of the sensory object 60 so that the first camera 20 can photograph them.

The sensory object 60 serves as an interface between the user and the computer. In the virtual space shown on the augmented reality result screen, the sensory object 60 may be represented as predetermined content. For example, in the case of augmented reality performing virtual navigation, the sensory object 60 may be represented in the virtual space as content having a human shape or a car shape.

In addition, the sensory object 60 is provided with markers so that the first camera 20 and the second camera 30 can photograph it and track its movement. That is, when the sensory object 60 is shaped as a cube, a first marker 61 is provided on its bottom surface, which contacts the table 40, and a second marker 63 is provided on its upper surface.

The first marker 61 and the second marker 63 are provided with unique identification codes (IDs) to distinguish them from the markers installed on other sensory objects 60.
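
The patent does not specify how the identification codes are encoded in the markers. As a purely illustrative Python sketch, one common scheme stores the ID as a binary bit grid inside the marker border; the 4x4 grid size and the `cells` input are assumptions, not taken from the patent:

```python
import numpy as np

def decode_marker_id(cells):
    """Decode a marker ID from its inner bit grid.

    `cells` is a hypothetical 4x4 boolean array sampled from the
    rectified marker interior (True = dark cell); the actual coding
    used by the first and second markers is not described in the patent.
    """
    bits = cells.astype(np.uint8).flatten()
    return int("".join(map(str, bits)), 2)

# Example: a grid whose bits read 0001 0000 0000 0011 -> ID 4099.
grid = np.zeros((4, 4), dtype=bool)
grid[0, 3] = grid[3, 2] = grid[3, 3] = True
print(decode_marker_id(grid))  # 4099
```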

The display device 70 displays the augmented reality result screen provided by the augmented reality providing apparatus 100. For example, the display device 70 may be a television, a monitor, a multivision comprising a plurality of monitors, or the like. The augmented reality result screen may be, for example, an image obtained by combining the background image projected by the projector 10 with predetermined content corresponding to the sensory object 60. The result screen may also display various information set by the user and/or predetermined additional information corresponding to the movement of the sensory object 60.

The augmented reality providing apparatus 100 provides a predetermined background image to the projector 10 so that it is projected onto the bottom surface of the table 40, and then performs color calibration to correct the color error between the projected background image and the first image photographing it.

In addition, the augmented reality providing apparatus 100 extracts 2D tracking information of the sensory object 60 from the first image captured by the first camera 20 and 3D tracking information from the second image captured by the second camera 30, then matches the 2D and 3D tracking information to accurately track the moving position, orientation, and posture of the sensory object 60.

The augmented reality providing apparatus 100 generates predetermined content corresponding to the sensory object 60, synthesizes the background image and the content so that the content is located at the position in the background image corresponding to the tracked position of the sensory object 60, and provides the synthesized result screen to the display device 70.

When the sensory object 60 does not contact the surface of the table 40, the augmented reality providing apparatus 100 extracts 3D tracking information from the second image captured by the second camera 30 and generates the tracking information of the sensory object 60 based on that 3D tracking information alone.

In the embodiment of the present invention shown in FIG. 1, the background image projected by the projector 10 is reflected by the reflection mirror 50 onto the table 40. In another embodiment, the projector 10 may be installed below the table 40 so that its background image is projected directly onto the bottom surface of the table 40 without the reflection mirror 50.

Likewise, in the exemplary embodiment illustrated in FIG. 1, the first camera 20 photographs the background image projected onto the bottom surface of the table 40 and the bottom of the sensory object 60 through the reflection mirror 50. In another embodiment, the first camera 20 may be installed below the table 40 so as to photograph the projected background image and the bottom of the sensory object 60 directly, without the reflection mirror 50.

FIG. 2 is a block diagram illustrating a detailed configuration of an apparatus for providing augmented reality of the augmented reality system shown in FIG. 1.

Referring to FIG. 2, the augmented reality providing apparatus 100 according to an embodiment of the present invention includes a 2D tracking information extractor 110, a 3D tracking information extractor 120, a tracking information matching unit 130, a controller 140, an image synthesizer 150, and a storage unit 160. The controller 140 may include a color correction module 141 and a coordinate transformation module 143.

The 2D tracking information extractor 110 extracts the 2D tracking information (for example, location and orientation) of the sensory object 60 located on the surface of the table 40, based on the first image provided by the first camera 20.

Specifically, the 2D tracking information extractor 110 receives the first image from the first camera 20, corrects the original background image using the color error table provided by the controller 140, and then compares the corrected background image with the first image, estimating an area with a large color-value error displacement as the area where the sensory object 60 is located. Here, the first image includes an image of the first marker installed on the bottom of the sensory object.

The area of the original background image corresponding to the estimated area is then set to a uniform color and projected onto the bottom surface of the table 40. In the first image of the projected background, the extractor recognizes the unique identification code (ID) of the first marker 61 installed on the bottom of the sensory object 60 and extracts the 2D tracking information (e.g., 2D position and/or orientation information) of the sensory object 60 based on the recognized first marker. In addition, the 2D tracking information extractor 110 may extract further information (e.g., vertex information of the first marker 61) for estimating the posture of the sensory object 60.

The 3D tracking information extractor 120 receives the second image provided by the second camera 30, recognizes the second marker 63 installed on the upper surface of the sensory object 60 in the received second image, and extracts the 3D tracking information (e.g., 3D position and/or orientation information) of the sensory object 60. It may also extract further information (e.g., vertex information of the second marker 63) for estimating the posture of the sensory object 60.

The tracking information matching unit 130 receives the 2D tracking information of the sensory object 60 from the 2D tracking information extractor 110 and the 3D tracking information from the 3D tracking information extractor 120, and matches the provided 2D and 3D tracking information to generate the tracking information of the sensory object 60.

Here, the tracking information matching unit 130 may generate tracking information of the sensory object 60 by matching two-dimensional tracking information and three-dimensional tracking information based on a predetermined confidence criterion.

For example, the tracking information matching unit 130 determines the recognition accuracy of the first marker 61 and the second marker 63 based on the predetermined reliability criterion and assigns different weights to the 2D tracking information and the 3D tracking information when matching them, which improves the accuracy of the tracking information of the sensory object 60. The recognition accuracy of the first marker 61 and the second marker 63 may be determined according to the similarity obtained by comparing pattern information, such as the shape and size of the recognized markers, with a predefined reference pattern.
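
The patent leaves the exact weighting function open. The following Python sketch shows one plausible realization, assuming similarity scores in [0, 1] derived from a corner-distance comparison and weights normalized over the two tracks; `pattern_similarity`, `fuse_tracking`, and all parameters are hypothetical names for illustration:

```python
import numpy as np

def pattern_similarity(marker_corners, reference_corners):
    """Hypothetical similarity score in [0, 1]: compare the recognized
    marker's corner layout (shape and size) with the stored reference
    pattern; 1.0 means a perfect match."""
    diff = np.linalg.norm(marker_corners - reference_corners, axis=1).mean()
    scale = np.linalg.norm(reference_corners[0] - reference_corners[2])
    return float(max(0.0, 1.0 - diff / scale))

def fuse_tracking(pos2d, sim2d, pos3d, sim3d):
    """Blend the 2D and 3D position estimates, weighting each track by
    how reliably its marker matched the reference pattern."""
    total = sim2d + sim3d
    if total == 0.0:
        return None                      # neither marker was recognized
    w2, w3 = sim2d / total, sim3d / total
    xy = w2 * np.asarray(pos2d, float) + w3 * np.asarray(pos3d[:2], float)
    return np.array([xy[0], xy[1], pos3d[2]])  # z comes from the 3D track
```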

In addition, the tracking information matching unit 130 can track the posture of the sensory object 60 more accurately by using the vertex information of the first marker 61 included in the 2D tracking information and the vertex information of the second marker 63 included in the 3D tracking information.

The controller 140 may include a color correction module 141 and a coordinate transformation module 143. The color correction module 141 performs a preliminary preparation step for tracking the sensory object 60: it generates a color error table for correcting color errors between the original background image projected from the projector 10 onto the bottom surface of the table 40 and the first image photographed by the first camera 20. The generated color error table is used for 2D tracking of the sensory object 60.

Such color errors arise when the color of the original background image is distorted by the mechanical and/or optical characteristics of the projector 10 as it is projected onto the bottom surface of the table 40, and again when the projected background image is captured by the first camera 20, where the inherent characteristics of the first camera 20 and the translucency of the table 40 introduce further distortion.

For example, when a background image composed of pure red (R (Red) = 255, G (Green) = 0, B (Blue) = 0) is projected through the projector 10 onto the bottom surface of the table 40, the degree of redness may vary depending on the performance of the projector 10. When the background image projected on the bottom surface of the table 40 is then photographed by the first camera 20, the color values of the captured first image may differ from those of the original background image (e.g., R = 200, G = 50, B = 70) due to the inherent characteristics of the first camera 20.

Accordingly, the color correction module 141 generates a color error table that corrects the color error between the original background image and the photographed first image, so that the table can be used for 2D tracking of the sensory object 60.

The color error table may be created by projecting a background image with a predetermined color value over the entire area of the table 40, capturing the projected background image with the first camera 20, and matching the color values of the original background image to the color values of the captured first image.

For example, in generating the color error table, red (R = 255, G = 0, B = 0) is projected as the original background image and the color value of the captured first image is recorded against the color value of the original. The same process is repeated with green (R = 0, G = 255, B = 0) and blue (R = 0, G = 0, B = 255), and then for all eight combinations obtained by substituting 0 or 255 for each of the R, G, and B color components of the original background image.
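
A minimal Python sketch of this calibration loop follows. The `project` and `capture` callbacks stand in for the projector 10 and first camera 20, which the patent assumes as hardware; averaging each captured frame into a single observed color per projected color is an illustrative simplification:

```python
import itertools
import numpy as np

def build_color_error_table(project, capture):
    """Build the color error table by projecting each of the eight
    0/255 RGB corner colors and recording what the first camera sees.

    `project(color)` and `capture()` are hypothetical callbacks wrapping
    the projector and the first camera; they are not part of the patent.
    """
    table = {}
    for r, g, b in itertools.product((0, 255), repeat=3):
        project((r, g, b))            # fill the table surface with one color
        frame = capture()             # H x W x 3 uint8 first image
        observed = frame.reshape(-1, 3).mean(axis=0)
        table[(r, g, b)] = observed.astype(np.float32)
    return table
```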

The coordinate transformation module 143 receives the tracking information of the sensory object 60 matched by the tracking information matching unit 130 and converts the actual coordinates of the sensory object 60 into the corresponding virtual-space coordinates, that is, the coordinates at which the content representing the sensory object 60 is displayed on the augmented reality result screen.
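
The patent does not give the transformation itself. Assuming the table and the virtual map are both axis-aligned rectangles fixed at calibration time, the conversion reduces to the bilinear mapping sketched below; the corner values are made-up examples:

```python
import numpy as np

# Hypothetical calibration, fixed at setup: the table's tracked
# coordinate range (e.g., first-camera pixels) and the corresponding
# extent of the virtual map on the result screen.
TABLE_CORNERS = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
VIRTUAL_CORNERS = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])

def to_virtual(p, src=TABLE_CORNERS, dst=VIRTUAL_CORNERS):
    """Map a tracked table coordinate into virtual-space coordinates.
    A full homography would be needed if the mapping were not
    axis-aligned; this sketch assumes rectangular, aligned spaces."""
    u = (p[0] - src[0, 0]) / (src[1, 0] - src[0, 0])   # normalized x
    v = (p[1] - src[0, 1]) / (src[3, 1] - src[0, 1])   # normalized y
    return (dst[0, 0] + u * (dst[1, 0] - dst[0, 0]),
            dst[0, 1] + v * (dst[3, 1] - dst[0, 1]))

print(to_virtual((320, 240)))   # table center -> (512.0, 384.0)
```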

The controller 140 reads the content and the background image corresponding to the sensory object 60 from the storage unit 160 and provides them, together with the virtual-space coordinates converted by the coordinate transformation module 143, to the image synthesizer 150. It also passes the posture and/or orientation information of the sensory object 60 from the tracking information matching unit 130 to the image synthesizer 150.

The controller 140 stores the color error table generated by the color correction module 141 in the storage unit 160 and provides the stored color error table to the 2D tracking information extractor 110, enabling the extractor to track the sensory object 60 from the first image.

In addition, the controller 140 may provide the image synthesizer 150 with additional information predefined for the position of the sensory object 60 and/or for predetermined positions in the background image, so that the additional information is displayed according to the position and posture of the sensory object 60.

The image synthesizer 150 receives the virtual-space coordinates, the background image, and the predetermined content corresponding to the sensory object 60 from the controller 140, synthesizes the background image and the content based on the virtual-space coordinates, and provides the synthesized augmented reality result screen to the display device 70. It also renders the various additional information provided by the controller 140 onto the result screen.
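
As an illustration of the synthesis step, the sketch below pastes an RGBA content sprite onto the background at the converted virtual-space coordinates. The patent does not describe the blending actually used, so alpha compositing here is an assumption:

```python
import numpy as np

def composite(background, sprite, pos):
    """Paste an RGBA content sprite (the metaphor of the sensory object)
    onto the RGB background at virtual coordinates `pos` (top-left),
    blending by the sprite's alpha channel. For brevity this sketch
    assumes the sprite lies fully inside the background."""
    h, w = sprite.shape[:2]
    x, y = int(pos[0]), int(pos[1])
    roi = background[y:y + h, x:x + w].astype(np.float32)
    alpha = sprite[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * sprite[:, :, :3] + (1.0 - alpha) * roi
    background[y:y + h, x:x + w] = blended.astype(np.uint8)
    return background
```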

The storage unit 160 may be configured with a large capacity flash memory or a hard disk. The storage unit 160 includes a background image of augmented reality, content representing the sensory object 60 in a virtual space, reference pattern information of the first marker 61 and the second marker 63, a color error table, and various additional information. It may be stored and may be read and recorded by the controller 140.

3 is a flowchart illustrating a process of providing augmented reality according to an embodiment of the present invention.

Referring to FIG. 3, in the method of providing augmented reality according to an embodiment of the present invention, when the system is initialized, the color correction module 141, under the control of the controller 140, performs a color error correction process that generates a color error table correcting the color error between the projected background image and the first image photographing the bottom surface of the table 40 (step 210).

The color error correction may be performed before the augmented reality is performed, and the generated color error table may be stored in the storage 160.

Then, when augmented reality is started, the 2D tracking information extractor 110 extracts the 2D tracking information of the sensory object 60 and provides it to the tracking information matching unit 130 (step 220); simultaneously, the 3D tracking information extractor 120 extracts the 3D tracking information of the sensory object 60 and provides it to the tracking information matching unit 130 (step 230).

The tracking information matching unit 130 determines whether the 2D tracking information has been extracted (step 240); if so, it matches the 2D tracking information and the 3D tracking information based on the predetermined reliability criterion to generate the tracking information of the sensory object 60 (step 250).

The tracking information matching unit 130 judges whether the 2D tracking information was extracted by checking whether it was provided by the 2D tracking information extractor 110. For example, when the user lifts the sensory object 60 off the surface of the table 40, the 2D tracking information extractor 110 cannot recognize the first marker 61 in the first image, so it cannot extract 2D tracking information or provide it to the tracking information matching unit 130.
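
Steps 240 and 250 therefore amount to the selection logic sketched below, continuing the earlier `fuse_tracking` sketch; the `(position, similarity)` pair representation is an assumption for illustration:

```python
def track(track_2d, track_3d):
    """Step 240/250 selection: fuse the 2D and 3D tracks when both are
    available; fall back to the 3D track alone when the object has been
    lifted off the table and the first marker is invisible
    (track_2d is None)."""
    if track_2d is None:
        pos3d, _ = track_3d
        return pos3d                     # "no" branch of step 240
    (pos2d, sim2d), (pos3d, sim3d) = track_2d, track_3d
    return fuse_tracking(pos2d, sim2d, pos3d, sim3d)   # step 250
```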

If it is determined in step 240 that the 2D tracking information was not extracted, the tracking information matching unit 130 passes the 3D tracking information alone to the coordinate transformation module 143.

The coordinate transformation module 143 converts the actual coordinates of the sensory object 60 included in the tracking information provided from the tracking information matching unit 130 into coordinates of the virtual space corresponding thereto (step 260).

The image synthesizer 150 receives the background image and the content corresponding to the sensory object 60 through the controller 140, synthesizes the content with the background image based on the virtual-space coordinates, and provides the generated augmented reality result screen to the display device 70 (step 270).

The display device 70 receives the augmented reality result screen from the image synthesizer 150 and displays the received augmented reality result screen (step 280).

Thereafter, the controller 140 determines whether a predetermined event signal indicating termination of augmented reality has been generated (step 290). If no such signal has been generated, steps 220 through 280 are repeated; if the termination signal has been generated, the augmented reality providing process ends.

FIG. 4 is a flowchart illustrating in detail the process of extracting the two-dimensional tracking information of the sensory object shown in FIG. 3.

First, the 2D tracking information extraction unit 110 receives the first image from the first camera 20 (step 221), and corrects the original background image using the color error table provided from the controller 140 (step 222). Here, the 2D tracking information extractor 110 may correct the original background image by applying the values recorded in the color error table to all color values of the original background image through interpolation.
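
Because the color error table from step 210 samples only the eight 0/255 corner colors, the interpolation mentioned here can be realized as trilinear interpolation over the RGB cube. A minimal Python sketch, assuming the dictionary format produced by the calibration sketch earlier; this specific scheme is an assumption, not the patent's prescribed method:

```python
import numpy as np

def correct_background(image, table):
    """Predict how the first camera will see the original background by
    trilinearly interpolating the eight-corner color error table
    (keys are (r, g, b) triples with components 0 or 255, values are
    the observed colors as float32 arrays)."""
    # Arrange the eight corner observations on a 2x2x2x3 lattice.
    lattice = np.zeros((2, 2, 2, 3), dtype=np.float32)
    for (r, g, b), observed in table.items():
        lattice[r // 255, g // 255, b // 255] = observed
    t = image.astype(np.float32) / 255.0   # per-channel weights in [0, 1]
    r, g, b = t[..., 0:1], t[..., 1:2], t[..., 2:3]
    out = np.zeros_like(t)
    for i in (0, 1):                       # blend all eight corners
        for j in (0, 1):
            for k in (0, 1):
                w = ((r if i else 1 - r)
                     * (g if j else 1 - g)
                     * (b if k else 1 - b))
                out += w * lattice[i, j, k]
    return out.astype(np.uint8)
```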

Thereafter, the 2D tracking information extractor 110 compares the corrected original image with the first image and estimates that the sensory object 60 is located in the region where the error displacement between the color values of the corrected original image and the first image is large (step 223).
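
A sketch of this comparison, assuming a per-pixel Euclidean color distance and an illustrative threshold; the patent does not quantify what counts as a "large" error displacement:

```python
import numpy as np

def estimate_object_region(corrected_bg, frame, threshold=40.0):
    """Step 223: flag pixels whose captured color deviates strongly from
    the corrected background prediction; a large displacement suggests
    the sensory object is occluding the projection there.
    The distance metric and the threshold of 40 are assumptions."""
    error = np.linalg.norm(frame.astype(np.float32)
                           - corrected_bg.astype(np.float32), axis=2)
    return error > threshold        # boolean mask of the estimated region
```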

Then, the region of the original background image where the sensory object 60 is estimated to be located is set to a uniform color, so that the background image with the uniform-color region is projected onto the bottom surface of the table 40 (step 224). Here, the uniform color may be a predetermined single color.

Thereafter, the 2D tracking information extractor 110 recognizes the first marker 61 in the estimated area of the first image provided from the first camera 20 (step 225).

In step 226, it is determined whether the first marker 61 was correctly recognized by comparing the recognized marker with the reference pattern of the first marker 61 stored in the storage unit 160.

If it is determined in step 226 that the first marker 61 was correctly recognized, the 2D tracking information extractor 110 extracts the 2D tracking information (e.g., 2D position and/or orientation information) of the first marker 61 (step 227). If the first marker 61 was not correctly recognized, the process returns to step 221 and the subsequent steps are repeated until the first marker 61 is recognized.

According to the augmented reality system and the augmented reality providing method using the sensory object described above, the color error between the original image projected onto the table and the first image photographing the projected background image is corrected first; 2D tracking information and 3D tracking information are then extracted from the first image and the second image, which respectively photograph the sensory object located on the translucent table, and the two are matched. After the matched tracking information is converted into virtual-space coordinates, the background image and the predetermined content serving as the metaphor of the sensory object are synthesized and displayed on the display device.

Therefore, two general-purpose cameras simultaneously capture the sensory object located on top of the translucent table, and the object is tracked by matching the tracking information extracted from each captured image. Because only general-purpose cameras are used, an augmented reality system with a simple configuration can be constructed.

Although the present invention has been described with reference to the embodiments above, those skilled in the art will understand that it can be variously modified and changed without departing from the spirit and scope of the invention as set forth in the claims below.

Claims (11)

1. In an augmented reality system using a table-based sensory object, the system including a projector for projecting a background image onto the bottom surface of a table, a first camera for photographing the bottom surface of the table to provide a first image, and a second camera for photographing the sensory object located on top of the table to provide a second image, the augmented reality system comprising an augmented reality providing apparatus that corrects a color error between an original image of the background image and the first image, extracts 2D tracking information and 3D tracking information of the sensory object based on the first image and the second image, and matches the 2D tracking information and the 3D tracking information to generate tracking information of the sensory object.

2. The system of claim 1, wherein the augmented reality providing apparatus generates the tracking information of the sensory object based on the 3D tracking information extracted from the second image when the sensory object does not contact the surface of the table.

3. The system of claim 1, wherein the augmented reality providing apparatus projects eight colors, including red, green, and blue, onto the bottom surface of the table as the original background image and generates a color error table by matching them against the color values of the first image in which the first camera captures the projected original background image.

4. The system of claim 1, wherein the augmented reality providing apparatus comprises: a color correction module for generating a color error table for correcting color errors between the original background image and the first image photographed by the first camera; a 2D tracking information extractor for recognizing a first marker installed on the bottom of the sensory object based on the first image and extracting 2D tracking information including at least one of the position and orientation of the sensory object located on the surface of the table; a 3D tracking information extractor for recognizing a second marker installed on the upper surface of the sensory object based on the second image and extracting 3D tracking information including at least one of 3D position information, orientation information, and attitude information of the sensory object; and a tracking information matching unit for matching the 2D tracking information and the 3D tracking information to generate the tracking information of the sensory object.

5. The system of claim 4, wherein the 2D tracking information extractor receives the first image including the image of the first marker, corrects the color of the original background image, compares the corrected background image with the color values of the first image to estimate the region where the sensory object is located, recognizes the first marker in the estimated region, and extracts the 2D tracking information of the sensory object based on the recognized first marker.

6. The system of claim 4, wherein the tracking information matching unit determines pattern similarity by comparing at least one item of pattern information, among the shape and size of the first marker and the second marker, with a predefined reference pattern, and generates the tracking information of the sensory object by assigning different weights to the 2D tracking information and the 3D tracking information based on the similarity and matching them.

7. A method for providing augmented reality using a table-based sensory object, comprising: generating a color error table for correcting a color error between an original background image and a first image photographing the table bottom surface; extracting 2D tracking information and 3D tracking information of the sensory object; and matching the extracted 2D tracking information and 3D tracking information to generate tracking information of the sensory object.

8. The method of claim 7, wherein generating the color error table comprises projecting eight colors, including red, green, and blue, onto the bottom surface of the table as the original background image, and generating the color error table by matching them against the color values of the first image in which the first camera photographs the projected original background image.

9. The method of claim 7, wherein extracting the 2D tracking information and the 3D tracking information of the sensory object comprises: receiving a first image including an image of a first marker installed on the bottom of the sensory object; correcting the original background image using the color error table and comparing the corrected background image with the color values of the first image to estimate a region with a large color-value error displacement as the region where the sensory object is located; setting the estimated region to a uniform color; and recognizing the first marker in the region set to the uniform color and extracting the 2D tracking information based on the recognized first marker.

10. The method of claim 7, wherein matching the extracted 2D tracking information and 3D tracking information comprises: determining pattern similarity by comparing at least one item of pattern information, among the shape and size of a first marker provided on the bottom surface of the sensory object and a second marker provided on its upper surface, with a predefined reference pattern; and matching the tracking information by assigning different weights to the 2D tracking information and the 3D tracking information based on the similarity.

11. The method of claim 7, wherein matching the extracted 2D tracking information and 3D tracking information generates the tracking information of the sensory object based on the extracted 3D tracking information when the sensory object does not contact the surface of the table.
KR1020070032538A 2007-04-02 2007-04-02 Augmented reality system using tangible object and method for providing augmented reality KR20090000777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020070032538A KR20090000777A (en) 2007-04-02 2007-04-02 Augmented reality system using tangible object and method for providing augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020070032538A KR20090000777A (en) 2007-04-02 2007-04-02 Augmented reality system using tangible object and method for providing augmented reality

Publications (1)

Publication Number Publication Date
KR20090000777A true KR20090000777A (en) 2009-01-08

Family

ID=40483914

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020070032538A KR20090000777A (en) 2007-04-02 2007-04-02 Augmented reality system using tangible object and method for providing augmented reality

Country Status (1)

Country Link
KR (1) KR20090000777A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110055868A (en) * 2009-11-20 2011-05-26 엘지전자 주식회사 Mobile terminal, digital television and method of controlling dtv
US8988464B2 (en) 2010-12-14 2015-03-24 Samsung Electronics Co., Ltd. System and method for multi-layered augmented reality
WO2015187309A1 (en) * 2014-06-03 2015-12-10 Intel Corporation Projecting a virtual image at a physical surface
US9972131B2 (en) 2014-06-03 2018-05-15 Intel Corporation Projecting a virtual image at a physical surface
KR101496761B1 (en) * 2014-08-25 2015-02-27 연세대학교 산학협력단 3D Model Control System and Method Based on multi-screen Projection
KR101641672B1 (en) * 2015-07-21 2016-07-21 (주)디자인비아트 The system for Augmented Reality of architecture model tracing using mobile terminal

Similar Documents

Publication Publication Date Title
JP6531823B2 (en) Imaging system, imaging apparatus, imaging method, and imaging program
JP6507730B2 (en) Coordinate transformation parameter determination device, coordinate transformation parameter determination method, and computer program for coordinate transformation parameter determination
JP5905540B2 (en) Method for providing a descriptor as at least one feature of an image and method for matching features
US20150369593A1 (en) Orthographic image capture system
CN103765870B (en) Image processing apparatus, projector and projector system including image processing apparatus, image processing method
CN104885098B (en) Mobile device based text detection and tracking
CN113514008B (en) Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium
JP4739004B2 (en) Information processing apparatus and information processing method
US20160371855A1 (en) Image based measurement system
KR101506610B1 (en) Apparatus for providing augmented reality and method thereof
KR102354299B1 (en) Camera calibration method using single image and apparatus therefor
EP3910451B1 (en) Display systems and methods for aligning different tracking means
WO2019035155A1 (en) Image processing system, image processing method, and program
JP5756322B2 (en) Information processing program, information processing method, information processing apparatus, and information processing system
KR101227237B1 (en) Augmented reality system and method for realizing interaction between virtual object using the plural marker
US20190073796A1 (en) Method and Image Processing System for Determining Parameters of a Camera
KR20180039013A (en) Feature data management for environment mapping on electronic devices
CN103198286B (en) Information processing terminal, information processing method, and program
CN114766042A (en) Target detection method, device, terminal equipment and medium
KR20090000777A (en) Augmented reality system using tangible object and method for providing augmented reality
CN117173756A (en) Augmented reality AR system, computer equipment and storage medium
JP6027952B2 (en) Augmented reality image generation system, three-dimensional shape data generation device, augmented reality presentation device, augmented reality image generation method, and program
JP2020042575A (en) Information processing apparatus, positioning method, and program
KR20190005222A (en) How to adjust the direction of the line of sight in the representation of the virtual environment
WO2017126072A1 (en) Image recognition system, camera state estimation device and storage medium

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application