KR20160109828A - Augmented reality system - Google Patents

Augmented reality system

Info

Publication number
KR20160109828A
Authority
KR
South Korea
Prior art keywords
augmented reality
content information
user terminal
information
image
Prior art date
Application number
KR1020150034951A
Other languages
Korean (ko)
Inventor
서상현
김용준
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to KR1020150034951A priority Critical patent/KR20160109828A/en
Publication of KR20160109828A publication Critical patent/KR20160109828A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an augmented reality system. The augmented reality system includes: a position sensor that provides its own position information to a user terminal; an augmented reality server that stores 3D (three-dimensional) content information of objects existing in a specific area and provides the 3D content information to the user terminal when the user terminal approaches within a predetermined range of the specific area; and a user terminal that identifies an object adjacent to itself using the position information provided from the position sensor, extracts the 3D content information for the identified object from the 3D content information provided from the augmented reality server, synthesizes the extracted 3D content information with an image photographed through a camera, and outputs the synthesized image.

Description

AUGMENTED REALITY SYSTEM

The present invention relates to augmented reality technology and, more particularly, to a system for providing a 3D image related to an object to a user approaching that object.

Augmented Reality (AR) technology is a technology that shows a virtual world overlaid on the reality that the user sees.

Whereas conventional virtual reality (VR) technology targets purely virtual spaces and objects, augmented reality technology overlays a virtual world carrying additional information onto the real world and displays the two as a single image, thereby providing supplementary information about reality. It is also called mixed reality technology.

However, conventional augmented reality techniques suffer from large positional errors between the real image and the augmented image. A technique for reducing this positional error is therefore needed.

In order to solve the above-described problems, the present invention identifies an object adjacent to the user using position information provided from a sensor, extracts the augmented image associated with that object from information provided from a server, synthesizes the augmented image with the real image, and outputs the synthesized image.

According to an aspect of the present invention, there is provided an augmented reality system comprising: a position sensor that provides its own position information to a user terminal; an augmented reality server that stores three-dimensional (3D) content information of objects existing in a specific area and provides the 3D content information to the user terminal when the user terminal approaches within a predetermined range of the specific area; and a user terminal that identifies an object adjacent to itself using the position information provided from the position sensor, extracts the 3D content information for the identified object from the 3D content information provided from the augmented reality server, and synthesizes the extracted 3D content information with an image photographed through a camera.

According to the present invention, proximity to an object is determined using the position information provided from the sensor, and the augmented image of the adjacent object is synthesized with the real image captured through the camera, which provides the advantage of reducing the positional error between the real image and the augmented image.

FIG. 1 is a block diagram illustrating the configuration of an augmented reality system according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating the operation of an augmented reality system according to an embodiment of the present invention.

The above and other objects, advantages, and features of the present invention, and methods of achieving them, will become apparent from the following detailed description of embodiments taken in conjunction with the accompanying drawings.

The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the scope of the invention is defined by the appended claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular forms herein include plural forms unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," as used herein, specify the presence of stated components, steps, and operations, but do not preclude the presence or addition of one or more other components, steps, or operations.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating the configuration of an augmented reality system according to an embodiment of the present invention.

As shown in FIG. 1, an augmented reality system according to an embodiment of the present invention includes a position sensor 100, a user terminal 200, and an augmented reality server 300.

The position sensor 100 provides its position information to the user terminal 200 approaching within a predetermined range. The position sensor 100 may be a beacon sensor.

The augmented reality server 300 stores 3D (three-dimensional) content information of objects in a specific area. When the augmented reality server 300 receives, through the user terminal 200, a signal transmitted from the position sensor 100 and determines that the user has approached the specific area within a preset range, it provides the 3D content information to the user terminal 200.

To this end, the augmented reality server 300 includes a sensor information storage unit 310 and a 3D content information storage unit 320.

The user terminal 200 identifies objects adjacent to itself using the position information provided from the position sensor 100, extracts 3D content information about the identified objects from the 3D content information provided from the augmented reality server 300, synthesizes the extracted 3D content information with the image photographed through the camera, and outputs the combined image.

The user terminal 200 includes a control unit 110, a 3D content information receiving unit 120, an image input unit 130, and an image output unit 140.

Hereinafter, the operation of the above-described devices will be described in detail.

The sensor information storage unit 310 of the augmented reality server 300 stores the unique ID information of the plurality of position sensors 100 mounted in a specific area and the location information corresponding to each unique ID.

The 3D content information storage unit 320 stores 3D content information such as content information organized in a database for each specific area, 3D data of objects existing in the specific area, and location information of the objects.
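
The patent does not specify a data schema for these storage units; purely as an illustration, the following Python sketch shows one plausible organization. All class and field names (SensorRecord, ObjectContent, AreaContent, and so on) are invented for this sketch, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SensorRecord:
    """Maps a beacon's unique ID to its installed location (sensor information storage unit 310)."""
    sensor_id: str
    latitude: float
    longitude: float

@dataclass
class ObjectContent:
    """3D content for one object in the area (3D content information storage unit 320)."""
    object_id: str
    latitude: float
    longitude: float
    mesh_file: str          # path to the object's 3D data
    guide_text: str = ""    # guide/history information

@dataclass
class AreaContent:
    """All content for one specific area (e.g., a cultural heritage site)."""
    area_id: str
    history_info: str
    objects: dict[str, ObjectContent] = field(default_factory=dict)
```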

In this case, the 3D data may be generated by modeling that uses a Structure from Motion (SfM) technique and combinations of basic solid figures such as hexahedrons and cylinders, based on previously extracted feature-point data of the objects, or the augmented reality server 300 may itself generate and store the 3D data.
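
The modeling procedure itself is not detailed in the patent. As a rough illustration of the primitive-combination idea only (it omits the SfM feature-point step), the sketch below builds a point set from a hexahedron and a cylinder with made-up dimensions:

```python
import numpy as np

def box_vertices(w, d, h):
    """8 corner vertices of an axis-aligned hexahedron (box) sitting on z=0."""
    xs, ys = (-w / 2, w / 2), (-d / 2, d / 2)
    return np.array([(x, y, z) for x in xs for y in ys for z in (0.0, h)])

def cylinder_vertices(radius, height, segments=16):
    """Vertices of a cylinder approximated by `segments` wall slices, on z=0."""
    angles = np.linspace(0.0, 2 * np.pi, segments, endpoint=False)
    ring = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    bottom = np.column_stack([ring, np.zeros(segments)])
    top = np.column_stack([ring, np.full(segments, height)])
    return np.vstack([bottom, top])

# A crude "building": a box body with a cylindrical column shifted to one side.
body = box_vertices(10.0, 6.0, 4.0)
column = cylinder_vertices(0.5, 4.0) + np.array([4.0, 2.0, 0.0])
model_points = np.vstack([body, column])
```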

The specific areas described in this specification may be tourist sites, hands-on experience sites, cultural heritage sites, and the like, and the objects present in a specific area may be buildings, monuments, places, and the like existing within such sites.

For example, the 3D content information storage unit 320 may store history information about Bulguksa together with 3D data, location information, guide information, and experience information for its major structures (Bulwangguk, Baekwanggyo, Seokgatap, etc.).

The control unit 110 of the user terminal 200 calculates the current position and direction using at least one of the following: continuous real images acquired through the image input unit 130, position information measured by a sensor (not shown) such as a gyro sensor mounted in the user terminal 200, and position information provided from the position sensor 100. It may also use all three kinds of information together.

At this time, the control unit 110 may receive the position information from the position sensor 100 by synchronizing with the position sensor 100 according to a user selection.

Calculating the position and direction using both the continuous image information and the position information from the sensors reduces the positional error between the real image and the augmented image and improves the accuracy of the augmented image.
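
The patent says only that the three information sources may be used individually or together; one naive way to combine them is a weighted average, sketched below with invented function names and arbitrary weights:

```python
import numpy as np

def fuse_position(visual_pos, gyro_pos, beacon_pos, weights=(0.4, 0.2, 0.4)):
    """Blend position estimates from continuous images, the gyro sensor,
    and the beacon (position sensor 100). Any estimate may be None."""
    estimates = [(p, w) for p, w in zip((visual_pos, gyro_pos, beacon_pos), weights)
                 if p is not None]
    if not estimates:
        raise ValueError("no position estimate available")
    total = sum(w for _, w in estimates)
    return sum(np.asarray(p) * (w / total) for p, w in estimates)

def heading_from_positions(prev_pos, cur_pos):
    """Moving direction (unit vector) from two consecutive position fixes."""
    delta = np.asarray(cur_pos) - np.asarray(prev_pos)
    norm = np.linalg.norm(delta)
    return delta / norm if norm > 0 else delta
```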

If the controller 110 determines that the calculated position is at the entrance of a specific area or at a 3D content information receiving position provided in the specific area, or that the moving direction is toward the entrance of a specific area, the 3D content information receiving unit 120 requests the 3D content information from the augmented reality server 300 and stores the received 3D content information.

At this time, the 3D content information receiving unit 120 and the augmented reality server 300 may communicate using a communication method such as WiFi, Zigbee, 3G (3rd Generation), 4G (4th Generation), or LTE (Long Term Evolution).
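
Expressed as code, the entry test and one-time download might look like the following sketch; the ENTRY_RADIUS_M threshold and the `fetch` callback (standing in for the actual WiFi/Zigbee/3G/4G/LTE transport) are assumptions, not details from the patent:

```python
from dataclasses import dataclass, field

ENTRY_RADIUS_M = 30.0  # assumed threshold for "within a predetermined range"

def near(p, q, radius=ENTRY_RADIUS_M):
    """Euclidean proximity test between two (x, y) positions."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= radius

@dataclass
class Terminal:
    position: tuple
    cache: dict = field(default_factory=dict)  # area_id -> downloaded 3D content

def maybe_download_area(terminal, area_id, entrance, receive_points, fetch):
    """Steps S210-S220: request the area's 3D content once the terminal reaches
    the entrance or a designated receiving position, and store it locally."""
    if area_id in terminal.cache:
        return
    if near(terminal.position, entrance) or any(near(terminal.position, p)
                                                for p in receive_points):
        terminal.cache[area_id] = fetch(area_id)
```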

Then, the controller 110 continuously calculates the position and direction to determine proximity to a specific object. If the controller 110 determines that the user terminal is close to, or facing, a specific object, it searches the 3D content information stored in the 3D content information receiving unit 120 for the content about that object.

Then, the control unit 110 synthesizes the 3D content information about the object with the real image acquired through the image input unit 130 and outputs the synthesized 3D image to the user through the image output unit 140.
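
As an illustration of this per-frame step, the sketch below picks the nearest known object, checks an assumed proximity threshold, and alpha-blends a pre-rendered view of the 3D content over the camera frame; the blend is only a stand-in for whatever rendering the image output unit 140 actually performs, and all names are invented:

```python
import numpy as np

PROXIMITY_M = 10.0  # assumed threshold for "close to a specific object"

def nearest_object(position, objects):
    """Return (object_id, distance) of the closest known object.
    `objects` maps object_id -> {"pos": (x, y)}."""
    def dist(obj):
        return float(np.linalg.norm(np.asarray(position) - np.asarray(obj["pos"])))
    obj_id = min(objects, key=lambda k: dist(objects[k]))
    return obj_id, dist(objects[obj_id])

def composite(frame, overlay, alpha=0.6):
    """Blend a rendered view of the 3D content over the camera frame
    (both HxWx3 uint8 arrays of the same size)."""
    blended = alpha * overlay.astype(np.float32) + (1 - alpha) * frame.astype(np.float32)
    return blended.astype(np.uint8)

def augment_frame(frame, position, objects, rendered_views):
    """If an object is near enough, overlay its rendered content on the frame."""
    obj_id, d = nearest_object(position, objects)
    if d <= PROXIMITY_M and obj_id in rendered_views:
        return composite(frame, rendered_views[obj_id])
    return frame
```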

For example, if the current location is determined to be in front of Cheongwoongyo in Bulguksa, the control unit 110 superimposes images, such as a scene in which Cheongwoongyo is being built, on the actual Cheongwoongyo image that the user sees now, synthesizes them in overlapped form, and reproduces the result in 3D form through the image output unit 140.

Then, the controller 110 checks whether playback of the video has finished. If it determines that playback has finished, the controller 110 may return to the step of determining proximity to a specific object and continue to provide related content.

In addition, the controller 110 may determine whether the terminal has approached a position whose content has already been played, thereby preventing duplicate playback of related content.
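
One simple reading of this duplicate-prevention check is to remember which objects' content has already finished playing; a minimal sketch, with invented names:

```python
class PlaybackGuard:
    """Tracks which objects' content has already been played so that
    re-approaching the same object does not restart the same content."""

    def __init__(self):
        self._played = set()

    def should_play(self, object_id):
        return object_id not in self._played

    def mark_finished(self, object_id):
        """Call when playback of the object's content ends; the controller can
        then return to the proximity-check step for the next object."""
        self._played.add(object_id)
```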

FIG. 2 is a flowchart illustrating the operation of an augmented reality system according to an embodiment of the present invention.

As shown in FIG. 2, the user terminal calculates its current position and moving direction (S200) and determines whether it has approached a specific area (S210).

As a result of the determination, if the calculated position is at the entrance of a specific area or at a 3D content information receiving position provided in the specific area, or if the moving direction is toward the entrance of a specific area, the user terminal requests the 3D content information from the augmented reality server and stores the received 3D content information (S220).

At this time, the user terminal calculates the current position and direction using the continuous real images, the position information measured from the position sensor, and the like, and communicates with the augmented reality server using a communication method such as WiFi, Zigbee, 3G (3rd Generation), 4G (4th Generation), or LTE (Long Term Evolution).

In addition, the augmented reality server stores the unique ID information of the plurality of position sensors mounted in a specific area, the location information for those sensors, and 3D content information such as content information organized in a database for each specific area, 3D data of objects existing in the specific area, and location information of the objects.

In this case, the 3D data may be generated by modeling that uses a Structure from Motion (SfM) technique and combinations of basic solid figures such as hexahedrons and cylinders, based on previously extracted feature-point data of the objects, or it may be information generated by the augmented reality server itself.

The specific areas described in this specification may be tourist sites, hands-on experience sites, cultural heritage sites, and the like, and the objects present in a specific area may be buildings, monuments, places, and the like existing within such sites.

Then, the user terminal continuously calculates its position and direction to determine whether it is close to, or facing, a specific object existing in the specific area (S230).

As a result of the determination, if the user terminal is close to, or facing, a specific object, the 3D content information for that object is retrieved (S240).

Then, the 3D content information is synthesized with the real image (S250), and the synthesized 3D image is displayed to the user (S260).

For example, if the current location of the user terminal is determined to be in front of Cheongwoongyo in Bulguksa, scenes such as the construction of Cheongwoongyo, people of the past crossing the bridge, and historical background related to Cheongwoongyo may be superimposed on the actual Cheongwoongyo image that the user sees and reproduced in 3D form.
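
Tying steps S200 to S260 together, the whole flow of FIG. 2 can be read as a single loop. The sketch below reuses the hypothetical helpers from the earlier sketches (maybe_download_area, nearest_object, composite, PlaybackGuard) and treats the camera, sensor-fusion, and display objects as placeholders, so it is a structural sketch rather than a working implementation:

```python
def run_ar_loop(sensors, camera, display, guard, terminal, area, server):
    """FIG. 2 as a loop: S200 position/direction, S210-S220 area entry and
    content download, S230 object proximity, S240 content lookup,
    S250 synthesis, S260 output."""
    while True:
        frame = camera.read()                           # current real image
        pos = sensors.estimate_position(frame)          # S200: image + gyro + beacon
        maybe_download_area(terminal, area.area_id, area.entrance,
                            area.receive_points,
                            server.fetch_area_content)  # S210-S220
        content = terminal.cache.get(area.area_id)
        if content:  # hypothetical layout: {"objects": ..., "renders": ...}
            obj_id, dist = nearest_object(pos, content["objects"])    # S230
            if dist <= PROXIMITY_M and guard.should_play(obj_id):     # S240
                frame = composite(frame, content["renders"][obj_id])  # S250
                guard.mark_finished(obj_id)
        display.show(frame)                             # S260
```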

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the invention.

Therefore, the embodiments of the present invention are illustrative rather than limiting, and the scope of the present invention is defined by the appended claims rather than by these embodiments. All modifications and variations that come within the scope of the claims and their equivalents are intended to be embraced by the present invention.

100: Position sensor 200: User terminal
300: augmented reality server

Claims (1)

An augmented reality system comprising:
a position sensor for providing its own position information to a user terminal;
an augmented reality server that stores 3D (three-dimensional) content information of an object existing in a specific area and provides the 3D content information to the user terminal when the user terminal approaches within a predetermined range of the specific area; and
a user terminal that identifies an object adjacent to itself using the position information provided from the position sensor, extracts the 3D content information about the identified object from the 3D content information provided from the augmented reality server, synthesizes the extracted 3D content information with an image photographed through a camera, and outputs the combined image.
KR1020150034951A 2015-03-13 2015-03-13 Augmented reality system KR20160109828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150034951A KR20160109828A (en) 2015-03-13 2015-03-13 Augmented reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150034951A KR20160109828A (en) 2015-03-13 2015-03-13 Augmented reality system

Publications (1)

Publication Number Publication Date
KR20160109828A true KR20160109828A (en) 2016-09-21

Family

ID=57080902

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150034951A KR20160109828A (en) 2015-03-13 2015-03-13 Augmented reality system

Country Status (1)

Country Link
KR (1) KR20160109828A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190106404A (en) * 2018-03-09 2019-09-18 (주)벨류데이터 Selfie support Camera System using augmented reality
US10930084B2 (en) 2018-10-18 2021-02-23 Samsung Display Co., Ltd. Electronic device including display unit and method of operating the same
US11215690B2 (en) 2020-05-11 2022-01-04 Ajou University Industry-Academic Cooperation Foundation Object location measurement method and augmented reality service providing device using the same

Similar Documents

Publication Publication Date Title
US10964108B2 (en) Augmentation of captured 3D scenes with contextual information
US10462406B2 (en) Information processing apparatus and information processing method
US8963999B1 (en) Augmented reality with earth data
US9392248B2 (en) Dynamic POV composite 3D video system
JP2020504872A (en) System and method for controlling vehicle movement
EP2672455B1 (en) Apparatus and method for providing 3D map showing area of interest in real time
US20220385721A1 (en) 3d mesh generation on a server
CN110361005B (en) Positioning method, positioning device, readable storage medium and electronic equipment
KR102197615B1 (en) Method of providing augmented reality service and server for the providing augmented reality service
CN106289263A (en) Indoor navigation method and device
KR102383567B1 (en) Method and system for localization based on processing visual information
KR101996241B1 (en) Device and method for providing 3d map representing positon of interest in real time
KR20160109828A (en) Augmented reality system
US10957100B2 (en) Method and apparatus for generating 3D map of indoor space
KR101700651B1 (en) Apparatus for tracking object using common route date based on position information
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
KR101611427B1 (en) Image processing method and apparatus performing the same
CN109214482A (en) A kind of indoor orientation method, device, terminal device and storage medium
US20220138979A1 (en) Remote measurements from a live video stream
US11836942B2 (en) Information integration method, information integration device, and information integration program
JP2014106602A (en) Information terminal device
KR20210049527A (en) Method for receiving map information from map generating server and user equipment performing method
KR20210048928A (en) Method for determining location to implement virtual object and user equipment performing method
KR101316387B1 (en) Method of object recognition using vision sensing and distance sensing
KR20210048798A (en) Method for determining pose of camera provided in user equipment and location calculation server performing method