WO2012108721A2 - Device and method for providing augmented reality using image information - Google Patents

Device and method for providing augmented reality using image information

Info

Publication number
WO2012108721A2
WO2012108721A2 (PCT application PCT/KR2012/001006)
Authority
WO
WIPO (PCT)
Prior art keywords
image
augmented reality
plane
information
lane information
Prior art date
Application number
PCT/KR2012/001006
Other languages
French (fr)
Korean (ko)
Other versions
WO2012108721A3 (en)
Inventor
고석필
Original Assignee
THINKWARE SYSTEMS CORPORATION (팅크웨어(주))
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by THINKWARE SYSTEMS CORPORATION (팅크웨어(주))
Publication of WO2012108721A2 publication Critical patent/WO2012108721A2/en
Publication of WO2012108721A3 publication Critical patent/WO2012108721A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2625 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects, for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Definitions

  • The present invention relates to an apparatus and method for providing augmented reality based on image information, and more particularly, to an apparatus and method that synthesize a virtual object into an image captured by a camera on the basis of the image information.
  • Methods of providing navigation routes and road information have evolved from 2D maps, which show only the shape of the road, to 3D maps that also show the shapes of the buildings around the road so that the user can easily find the destination.
  • Even in a 3D map, however, buildings cannot be displayed exactly as they appear in reality because of limits on data capacity and modeling time.
  • In particular, when the exterior of a building changes for only a short period, for example because of an event, it is difficult to reflect the change in the 3D map, so the current state of the building cannot be displayed in real time.
  • The present invention provides an apparatus and method for providing augmented reality that improve a user's recognition of a place by displaying route-guidance information or road-related information on an image of the real space photographed by a camera.
  • Also provided are an apparatus and method for providing augmented reality that improve the mapping accuracy between a 2D plane and a 3D space by mapping the 2D plane into the 3D space on the basis of an image photographed by a camera.
  • An apparatus for providing augmented reality may include: a 3D space generator configured to generate a virtual 3D space based on an image captured by a camera; a plane determination unit configured to determine a 2D plane based on lane information included in the image; a mapping unit configured to map the 2D plane into the virtual 3D space; and an augmented reality providing unit configured to provide augmented reality by synthesizing an information object with the image based on the mapped 2D plane.
  • A method of providing augmented reality may include: generating a virtual 3D space based on camera parameters; determining a 2D plane based on lane information in an image captured by the camera; mapping the 2D plane into the virtual 3D space; and synthesizing an information object with the image based on the mapped 2D plane to provide augmented reality.
  • A user's recognition of a place can be improved by displaying route-guidance information or road-related information on an image of the real space photographed by the camera.
  • The mapping accuracy between a 2D plane and a 3D space can be improved by mapping the 2D plane into the 3D space on the basis of an image photographed by the camera.
  • FIG. 1 is a block diagram illustrating an apparatus for providing augmented reality according to an embodiment of the present invention.
  • FIG. 3 is an example of an augmented reality image provided according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a method of providing augmented reality according to an embodiment of the present invention.
  • The method of providing augmented reality according to an embodiment of the present invention may be performed by an apparatus for providing augmented reality.
  • The apparatus for providing augmented reality according to an embodiment of the present invention may itself be a mobile terminal, a navigation system, or a black box, or it may be a terminal that receives an image from a mobile terminal, a navigation system, a black box, or a separate camera and provides augmented reality based on that image.
  • FIG. 1 is a block diagram illustrating an apparatus for providing augmented reality according to an embodiment of the present invention.
  • Referring to FIG. 1, the apparatus 100 for providing augmented reality may include a camera 110, a calibration unit 120, a 3D space generator 130, a lane information extractor 140, a plane determination unit 150, a mapping unit 160, and an augmented reality providing unit 170.
  • The calibration unit 120 may estimate the camera parameters of the camera from an image photographed by the camera, using calibration.
  • The camera may be the camera 110 embedded in the apparatus 100, or a camera installed separately from the apparatus 100 to photograph the area in front of the vehicle.
  • The camera parameters are the parameters of the camera matrix, the information that describes how the real space is projected onto the picture.
  • The 3D space generator 130 may generate a virtual 3D space based on the image photographed by the camera.
  • The 3D space generator 130 may obtain depth information from the image captured by the camera based on the camera parameters estimated by the calibration unit 120, and generate the virtual 3D space based on the acquired depth information and the image.
  • The lane information extractor 140 may capture a frame of the video photographed by the camera to obtain a still image, and extract multiple pieces of lane information from the still image. Because the camera captures the area in front of the vehicle in real time, the lanes keep moving in the video, which makes lane information difficult to extract; the lane information extractor 140 therefore captures a still image and extracts the lane information from that image, in which the lanes do not move.
  • If the still image is captured at the moment the vehicle is changing lanes, the lane information extractor 140 may be able to extract only one piece of lane information. Accordingly, when it fails to extract multiple pieces of lane information from a still image, the lane information extractor 140 may judge the still image to be unsuitable and recapture the video, and then extract multiple pieces of lane information from the newly obtained still image.
  • The plane determination unit 150 may determine a 2D plane based on the lane information included in the image photographed by the camera.
  • The plane determination unit 150 may obtain a plurality of reference points based on the lane information extracted by the lane information extractor 140, and determine the 2D plane using the equations of the straight lines connecting the reference points.
  • The reference points may include bottom points, where each lane line meets the bottom of the image, and a vanishing point.
  • The mapping unit 160 may map the 2D plane determined by the plane determination unit 150 into the virtual 3D space generated by the 3D space generator 130.
  • The mapping unit 160 may project the 2D plane determined by the plane determination unit 150 into the virtual 3D space generated by the 3D space generator 130 so that it forms the ground surface of the 3D space.
  • The augmented reality providing unit 170 may provide augmented reality by synthesizing an information object with the image based on the 2D plane mapped by the mapping unit 160.
  • The information object may be an object that displays information related to the route the vehicle will travel or the road on which the vehicle is driving.
  • The information object may include at least one of: information indicating the remaining distance to a specific place (e.g., "000 m"), information marking a place requiring attention such as a crosswalk, speed-camera location information, guidance symbols, and direction information for the route.
  • The plane determination unit 150 may determine the 2D plane based on the lane information 210 and 220 extracted by the lane information extractor 140.
  • The plane determination unit 150 may obtain the vanishing point 230, the point at which the lane lines 210 and 220 would intersect if extended.
  • The plane determination unit 150 may also obtain the bottom point 250, where the lane line 210 meets the bottom of the image 200, and the bottom point 240, where the lane line 220 meets the bottom of the image.
  • The plane determination unit 150 may calculate the equation of the straight line through the vanishing point 230 and the bottom point 250, and the equation of the straight line through the vanishing point 230 and the bottom point 240.
  • The plane determination unit 150 may determine, as the 2D plane, the triangle bounded by the straight line through the vanishing point 230 and the bottom point 250, the straight line through the vanishing point 230 and the bottom point 240, and the bottom of the image 200.
  • Because the apparatus 100 extracts lane information from an image photographed by a camera installed in the vehicle and generates the 2D plane from the extracted lane information, the generation of the 2D plane is not affected by changes in the elevation of the road.
  • FIG. 3 is an example of an augmented reality image provided according to an embodiment of the present invention.
  • The apparatus 100 for providing augmented reality may provide augmented reality by synthesizing an information object with the image photographed by the camera. For example, as shown in FIG. 3, direction information 310 for the route and information 320 indicating that 500 m remain to a specific place may be composited onto the image 300 captured by the camera.
  • As shown in FIG. 3, the apparatus 100 synthesizes information related to route guidance or road guidance onto the real-time image photographed by the camera, thereby improving the user's recognition of the corresponding place.
  • FIG. 4 is a flowchart illustrating a method of providing augmented reality according to an embodiment of the present invention.
  • In operation S410, the calibration unit 120 may estimate the camera parameters of the camera from an image photographed by the camera, using calibration.
  • In operation S420, the 3D space generator 130 may generate a virtual 3D space based on the camera parameters estimated in operation S410.
  • Specifically, the 3D space generator 130 may obtain depth information from the image photographed by the camera based on the camera parameters, and generate the virtual 3D space based on the acquired depth information and the image.
  • In operation S430, the lane information extractor 140 may capture a frame of the video photographed by the camera to obtain a still image.
  • In operation S440, the lane information extractor 140 may check whether the still image acquired in operation S430 is one from which lane information can be extracted. Specifically, the lane information extractor 140 may attempt to extract lane information from the still image; if multiple pieces of lane information are not extracted, it may determine that the still image is not one from which lane information can be extracted. In that case, the lane information extractor 140 returns to operation S430 and captures a different scene from the video to obtain another still image.
  • In operation S450, the plane determination unit 150 may obtain a plurality of reference points based on the lane information extracted by the lane information extractor 140 in operation S440.
  • The reference points acquired by the plane determination unit 150 may include bottom points, where each lane line meets the bottom of the image, and a vanishing point.
  • In operation S460, the plane determination unit 150 may determine the 2D plane using the reference points obtained in operation S450 and the equations of the straight lines connecting them.
  • In operation S470, the mapping unit 160 may map the 2D plane determined in operation S460 into the virtual 3D space generated in operation S420.
  • Specifically, the mapping unit 160 may project the 2D plane determined in operation S460 into the virtual 3D space generated in operation S420 so that it forms the ground surface of the 3D space.
  • In operation S480, the augmented reality providing unit 170 may provide augmented reality by synthesizing an information object with the image based on the 2D plane mapped in operation S470.
  • For example, the augmented reality providing unit 170 may set the type and position of the information object to be displayed on the 2D plane based on map data, and map the position of the information object set on the 2D plane to the ground surface of the 3D space.
  • Next, the augmented reality providing unit 170 may select, according to the position mapped onto the ground surface, the position at which the information object is to be displayed in the image photographed by the camera.
  • Finally, the augmented reality providing unit 170 may composite the information object at the selected position in the image photographed by the camera and display the result.
  • Here, the information object may be an object that displays information related to the route the vehicle will travel or the road on which the vehicle is driving.
  • The present invention can improve a user's recognition of a place by displaying route-guidance information or road-related information on an image of the real space photographed by a camera.
  • Because the 2D plane is mapped into the 3D space on the basis of the image photographed by the camera, mapping accuracy can be improved compared with the conventional method of mapping a 2D plane into a 3D space based on the location where a sensor is installed.

Abstract

Disclosed are a method and a device for providing augmented reality on an image photographed by a camera on the basis of image information. The augmented reality providing device comprises: a 3D space generation unit, which generates a virtual 3D space on the basis of an image photographed by a camera; a plane determination unit, which determines a 2D plane on the basis of traffic-lane information contained in said image; a mapping unit, which maps said 2D plane into said virtual 3D space; and an augmented reality providing unit, which provides augmented reality by synthesizing an information object with said image on the basis of the mapped 2D plane.

Description

Apparatus and method for providing augmented reality using image information
The present invention relates to an apparatus and method for providing augmented reality based on image information, and more particularly, to an apparatus and method that synthesize a virtual object into an image captured by a camera on the basis of the image information.
Methods of providing navigation routes and road information have evolved from 2D maps, which show only the shape of the road, to 3D maps that also show the shapes of the buildings around the road so that the user can easily find the destination.
However, even a 3D map cannot display buildings exactly as they appear in reality because of limits on data capacity and modeling time. In particular, when the exterior of a building changes for only a short period, for example because of an event, it is difficult to reflect the change in the 3D map, so the current state of the building cannot be displayed in real time. Moreover, some users find it difficult to match a road displayed on a 2D or 3D map with the road they are actually seeing while driving.
Accordingly, there is a need for a method that allows a user to easily recognize the real road or building corresponding to the route and road information.
The present invention provides an apparatus and method for providing augmented reality that improve a user's recognition of a place by displaying route-guidance information or road-related information on an image of the real space photographed by a camera.
According to an embodiment of the present invention, an apparatus and method are also provided that improve the mapping accuracy between a 2D plane and a 3D space by mapping the 2D plane into the 3D space on the basis of an image photographed by a camera.
An apparatus for providing augmented reality according to an embodiment of the present invention may include: a 3D space generator configured to generate a virtual 3D space based on an image captured by a camera; a plane determination unit configured to determine a 2D plane based on lane information included in the image; a mapping unit configured to map the 2D plane into the virtual 3D space; and an augmented reality providing unit configured to provide augmented reality by synthesizing an information object with the image based on the mapped 2D plane.
A method of providing augmented reality according to an embodiment of the present invention may include: generating a virtual 3D space based on camera parameters; determining a 2D plane based on lane information in an image captured by the camera; mapping the 2D plane into the virtual 3D space; and synthesizing an information object with the image based on the mapped 2D plane to provide augmented reality.
According to an embodiment of the present invention, a user's recognition of a place can be improved by displaying route-guidance information or road-related information on an image of the real space photographed by the camera.
In addition, according to an embodiment of the present invention, the mapping accuracy between a 2D plane and a 3D space can be improved by mapping the 2D plane into the 3D space on the basis of an image photographed by the camera.
FIG. 1 is a block diagram illustrating an apparatus for providing augmented reality according to an embodiment of the present invention.
FIG. 2 illustrates an example of the process of determining a 2D plane according to an embodiment of the present invention.
FIG. 3 is an example of an augmented reality image provided according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a method of providing augmented reality according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The method of providing augmented reality according to an embodiment of the present invention may be performed by an apparatus for providing augmented reality. The apparatus may itself be a mobile terminal, a navigation system, or a black box, or it may be a terminal that receives an image from a mobile terminal, a navigation system, a black box, or a separate camera and provides augmented reality based on that image.
FIG. 1 is a block diagram illustrating an apparatus for providing augmented reality according to an embodiment of the present invention.
Referring to FIG. 1, the apparatus 100 for providing augmented reality according to an embodiment of the present invention may include a camera 110, a calibration unit 120, a 3D space generator 130, a lane information extractor 140, a plane determination unit 150, a mapping unit 160, and an augmented reality providing unit 170.
The calibration unit 120 may estimate the camera parameters of the camera from an image photographed by the camera, using calibration. Here, the camera may be the camera 110 embedded in the apparatus 100, or a camera installed separately from the apparatus 100 to photograph the area in front of the vehicle; for example, it may be a camera built into a mobile terminal, a navigation system, or a black box, or a separate camera mounted facing the front of the vehicle. The camera parameters are the parameters of the camera matrix, the information that describes how the real space is projected onto the picture.
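The patent itself does not spell out the calibration algorithm. As a point of reference only, the following minimal Python sketch (all numeric values hypothetical) shows what the estimated camera matrix amounts to in the pinhole model: an intrinsic matrix K that maps points in the real space in front of the camera to pixel coordinates.

    import numpy as np

    # Hypothetical intrinsics of the kind calibration would estimate:
    # focal lengths (fx, fy) and principal point (cx, cy), in pixels.
    fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    def project(K, point_cam):
        """Project a 3D point given in camera coordinates onto the image plane."""
        u, v, w = K @ point_cam
        return u / w, v / w

    # A point 20 m ahead of the camera and 1.4 m below it (roughly road level)
    # lands near the horizontal centre of the image, below the midline.
    print(project(K, np.array([0.0, 1.4, 20.0])))   # -> (640.0, 416.0)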
The 3D space generator 130 may generate a virtual 3D space based on the image photographed by the camera. Specifically, the 3D space generator 130 may obtain depth information from the image based on the camera parameters estimated by the calibration unit 120, and generate the virtual 3D space based on the acquired depth information and the image.
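A minimal sketch of the lifting step this paragraph describes, assuming per-pixel depth values are already available (the patent does not specify how depth is estimated) and reusing the hypothetical intrinsic matrix K from the previous sketch:

    import numpy as np

    K = np.array([[800.0, 0.0, 640.0],    # hypothetical intrinsic matrix
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])

    def backproject(K, u, v, depth):
        """Lift pixel (u, v) with a known depth into 3D camera coordinates."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray with z == 1
        return depth * ray                               # (X, Y, Z) with Z == depth

    # The image centre at an assumed depth of 15 m lies on the optical axis.
    print(backproject(K, 640.0, 360.0, 15.0))   # -> [ 0.  0. 15.]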
The lane information extractor 140 may capture a frame of the video photographed by the camera to obtain a still image, and extract multiple pieces of lane information from the still image. Because the camera captures the area in front of the vehicle in real time, the lanes keep moving in the video, which makes lane information difficult to extract; the lane information extractor 140 therefore captures a still image and extracts the lane information from that image, in which the lanes do not move.
However, if the still image is captured at the moment the vehicle is changing lanes, the lane information extractor 140 may be able to extract only one piece of lane information. Accordingly, when it fails to extract multiple pieces of lane information from a still image, the lane information extractor 140 may judge the still image to be unsuitable and recapture the video, and then extract multiple pieces of lane information from the newly obtained still image.
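The patent leaves the lane-detection method unspecified. One conventional substitute, shown here as a hedged sketch, uses OpenCV's Canny edge detector and probabilistic Hough transform, and recaptures frames until at least two lane-line candidates are found, mirroring the retry logic above:

    import cv2
    import numpy as np

    def extract_lane_lines(frame):
        """Return candidate lane line segments from one captured still image."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Probabilistic Hough transform; all thresholds here are illustrative only.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=100, maxLineGap=20)
        return [] if lines is None else [l[0] for l in lines]

    cap = cv2.VideoCapture(0)   # hypothetical front-facing camera
    lanes = []
    while len(lanes) < 2:       # recapture until at least two lane lines are found
        ok, still = cap.read()
        if not ok:
            break
        lanes = extract_lane_lines(still)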
The plane determination unit 150 may determine a 2D plane based on the lane information included in the image photographed by the camera. The plane determination unit 150 may obtain a plurality of reference points based on the lane information extracted by the lane information extractor 140, and determine the 2D plane using the equations of the straight lines connecting the reference points. The reference points may include bottom points, where each lane line meets the bottom of the image, and a vanishing point.
The process by which the plane determination unit 150 determines the 2D plane is described in detail below with reference to FIG. 2.
The mapping unit 160 may map the 2D plane determined by the plane determination unit 150 into the virtual 3D space generated by the 3D space generator 130. Specifically, the mapping unit 160 may project the 2D plane into the virtual 3D space so that it forms the ground surface of the 3D space.
The augmented reality providing unit 170 may provide augmented reality by synthesizing an information object with the image based on the 2D plane mapped by the mapping unit 160. Here, the information object may be an object that displays information related to the route the vehicle will travel or the road on which the vehicle is driving. For example, the information object may include at least one of: information indicating the remaining distance to a specific place (e.g., "000 m"), information marking a place requiring attention such as a crosswalk, speed-camera location information, guidance symbols, and direction information for the route.
FIG. 2 illustrates an example of the process of determining a 2D plane according to an embodiment of the present invention.
The plane determination unit 150 according to an embodiment of the present invention may determine the 2D plane based on the lane information 210 and 220 extracted by the lane information extractor 140.
First, the plane determination unit 150 may obtain the vanishing point 230, the point at which the lane lines 210 and 220 would intersect if extended. The plane determination unit 150 may also obtain the bottom point 250, where the lane line 210 meets the bottom of the image 200, and the bottom point 240, where the lane line 220 meets the bottom of the image.
Next, the plane determination unit 150 may calculate the equation of the straight line through the vanishing point 230 and the bottom point 250, and the equation of the straight line through the vanishing point 230 and the bottom point 240.
Finally, the plane determination unit 150 may determine, as the 2D plane, the triangle bounded by the straight line through the vanishing point 230 and the bottom point 250, the straight line through the vanishing point 230 and the bottom point 240, and the bottom of the image 200.
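In homogeneous image coordinates this construction reduces to a few cross products. The sketch below, with made-up endpoints for the two lane lines, computes the vanishing point, the two bottom points, and hence the triangle that serves as the 2D plane:

    import numpy as np

    def line_through(p, q):
        """Homogeneous line through two image points."""
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    def intersect(l1, l2):
        """Intersection of two homogeneous lines as an (x, y) image point."""
        x, y, w = np.cross(l1, l2)
        return np.array([x / w, y / w])

    H = 720                                        # image height; bottom row is y = H - 1
    left  = line_through((500, 700), (600, 400))   # hypothetical left lane line
    right = line_through((900, 700), (800, 400))   # hypothetical right lane line
    bottom = np.array([0.0, 1.0, -(H - 1)])        # the line y = H - 1

    vanishing_point = intersect(left, right)       # lanes extended until they cross
    bottom_left     = intersect(left, bottom)      # bottom point of the left lane
    bottom_right    = intersect(right, bottom)     # bottom point of the right lane
    triangle_2d = [vanishing_point, bottom_left, bottom_right]   # the 2D plane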
Because the apparatus 100 according to the present invention extracts lane information from an image photographed by a camera installed in the vehicle and generates the 2D plane from the extracted lane information, the generation of the 2D plane is not affected by changes in the elevation of the road.
FIG. 3 is an example of an augmented reality image provided according to an embodiment of the present invention.
The apparatus 100 for providing augmented reality according to an embodiment of the present invention may provide augmented reality by synthesizing an information object with the image photographed by the camera. For example, as shown in FIG. 3, direction information 310 for the route and information 320 indicating that 500 m remain to a specific place may be composited onto the image 300 captured by the camera.
As shown in FIG. 3, the apparatus 100 synthesizes information related to route guidance or road guidance onto the real-time image photographed by the camera, thereby improving the user's recognition of the corresponding place.
FIG. 4 is a flowchart illustrating a method of providing augmented reality according to an embodiment of the present invention.
In operation S410, the calibration unit 120 may estimate the camera parameters of the camera from an image photographed by the camera, using calibration.
In operation S420, the 3D space generator 130 may generate a virtual 3D space based on the camera parameters estimated in operation S410. Specifically, the 3D space generator 130 may obtain depth information from the image photographed by the camera based on the camera parameters, and generate the virtual 3D space based on the acquired depth information and the image.
In operation S430, the lane information extractor 140 may capture a frame of the video photographed by the camera to obtain a still image.
In operation S440, the lane information extractor 140 may check whether the still image acquired in operation S430 is one from which lane information can be extracted. Specifically, the lane information extractor 140 may attempt to extract lane information from the still image acquired in operation S430; if multiple pieces of lane information are not extracted, it may determine that the still image is not one from which lane information can be extracted. In that case, the lane information extractor 140 returns to operation S430 and captures a different scene from the video to obtain another still image.
In operation S450, the plane determination unit 150 may obtain a plurality of reference points based on the lane information extracted by the lane information extractor 140 in operation S440. The reference points acquired by the plane determination unit 150 may include bottom points, where each lane line meets the bottom of the image, and a vanishing point.
In operation S460, the plane determination unit 150 may determine the 2D plane using the reference points obtained in operation S450 and the equations of the straight lines connecting them.
In operation S470, the mapping unit 160 may map the 2D plane determined in operation S460 into the virtual 3D space generated in operation S420. Specifically, the mapping unit 160 may project the 2D plane determined in operation S460 into the virtual 3D space generated in operation S420 so that it forms the ground surface of the 3D space.
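The patent does not detail the projection mechanics. One plausible realization (a sketch only, assuming a flat road and a hypothetical camera height of 1.4 m) casts each image point along its viewing ray until it meets the ground plane; note that the vanishing point itself maps to infinity on that plane, so only the triangle's bottom corners are projected here:

    import numpy as np

    def pixel_to_ground(K, u, v, cam_height):
        """Intersect the viewing ray of pixel (u, v) with the ground plane Y = cam_height."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # Y grows downward in image convention
        if ray[1] <= 0:
            raise ValueError("ray does not hit the ground (at or above the horizon)")
        t = cam_height / ray[1]                          # scale so that Y == cam_height
        return t * ray                                   # 3D point on the road surface

    K = np.array([[800.0, 0.0, 640.0],                   # hypothetical intrinsic matrix
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    # Project the triangle's bottom corners (coordinates from the previous sketch).
    floor_left  = pixel_to_ground(K, 493.7, 719.0, 1.4)
    floor_right = pixel_to_ground(K, 906.3, 719.0, 1.4)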
In operation S480, the augmented reality providing unit 170 may provide augmented reality by synthesizing an information object with the image based on the 2D plane mapped in operation S470. For example, the augmented reality providing unit 170 may set the type and position of the information object to be displayed on the 2D plane based on map data, and map the position of the information object set on the 2D plane to the ground surface of the 3D space. Next, the augmented reality providing unit 170 may select, according to the position mapped onto the ground surface, the position at which the information object is to be displayed in the image photographed by the camera. Finally, the augmented reality providing unit 170 may composite the information object at the selected position in the image and display the result. Here, the information object may be an object that displays information related to the route the vehicle will travel or the road on which the vehicle is driving.
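A hedged sketch of this final compositing chain (the coordinates, the distance label, and the drawing style are all illustrative assumptions, not taken from the patent): a ground position is projected back through the intrinsic matrix to a pixel location, and the information object is drawn there on the camera frame:

    import cv2
    import numpy as np

    def ground_to_pixel(K, point_3d):
        """Project a 3D point on the road surface back into the image."""
        u, v, w = K @ point_3d
        return int(u / w), int(v / w)

    K = np.array([[800.0, 0.0, 640.0],                 # hypothetical intrinsic matrix
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # stands in for the camera frame

    # Hypothetical information object: "500M" remaining, anchored 20 m ahead on the road.
    u, v = ground_to_pixel(K, np.array([0.0, 1.4, 20.0]))
    cv2.putText(frame, "500M", (u, v), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2)               # draw the label into the image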
The present invention can improve a user's recognition of a place by displaying route-guidance information or road-related information on an image of the real space photographed by a camera.
In addition, because the 2D plane is mapped into the 3D space on the basis of the image photographed by the camera, mapping accuracy can be improved compared with the conventional method of mapping a 2D plane into a 3D space based on the location where a sensor is installed.
Although the present invention has been described above with reference to limited embodiments and drawings, the present invention is not limited to those embodiments, and those of ordinary skill in the art to which the present invention pertains can make various modifications and variations from this description.
Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the claims below and their equivalents.

Claims (16)

  1. An apparatus for providing augmented reality, comprising:
    a 3D space generator configured to generate a virtual 3D space based on an image captured by a camera;
    a plane determination unit configured to determine a 2D plane based on lane information included in the image;
    a mapping unit configured to map the 2D plane into the virtual 3D space; and
    an augmented reality providing unit configured to provide augmented reality by synthesizing an information object with the image based on the mapped 2D plane.
  2. The apparatus of claim 1, wherein the plane determination unit obtains a plurality of reference points based on the lane information and determines the 2D plane by connecting the reference points.
  3. The apparatus of claim 2, further comprising:
    a lane information extractor configured to capture the image to obtain a still image and to extract a plurality of pieces of lane information from the still image.
  4. The apparatus of claim 3, wherein, when the plurality of pieces of lane information cannot be extracted from the still image, the lane information extractor recaptures the image to obtain another still image and extracts the plurality of pieces of lane information from the other still image.
  5. The apparatus of claim 2, wherein the reference points include bottom points, where each piece of lane information meets the bottom of the image, and a vanishing point, where the pieces of lane information meet one another.
  6. The apparatus of claim 1, wherein the information object is an object that displays information related to a route the vehicle will travel or a road on which the vehicle is driving.
  7. The apparatus of claim 1, further comprising:
    a calibration unit configured to estimate camera parameters of the camera from the image using calibration.
  8. The apparatus of claim 7, wherein the 3D space generator obtains depth information from the image based on the camera parameters and generates the virtual 3D space based on the acquired depth information and the image.
  9. A method of providing augmented reality, comprising:
    generating a virtual 3D space based on an image captured by a camera;
    determining a 2D plane based on lane information included in the image;
    mapping the 2D plane into the virtual 3D space; and
    providing augmented reality by synthesizing an information object with the image based on the mapped 2D plane.
  10. The method of claim 9, wherein determining the 2D plane comprises:
    obtaining a plurality of reference points based on the lane information; and
    determining the 2D plane by connecting the reference points.
  11. The method of claim 10, further comprising:
    capturing the image to obtain a still image; and
    extracting a plurality of pieces of lane information from the still image.
  12. The method of claim 11, further comprising:
    recapturing the image to obtain another still image when the plurality of pieces of lane information cannot be extracted from the still image,
    wherein the extracting of the lane information extracts the plurality of pieces of lane information from the other still image.
  13. The method of claim 10, wherein the reference points include bottom points, where each piece of lane information meets the bottom of the image, and a vanishing point, where the pieces of lane information meet one another.
  14. The method of claim 9, wherein the information object is an object that displays information related to a route the vehicle will travel or a road on which the vehicle is driving.
  15. The method of claim 9, further comprising:
    estimating camera parameters of the camera from the image using calibration.
  16. The method of claim 15, wherein generating the 3D space comprises:
    obtaining depth information from the image based on the camera parameters; and
    generating the virtual 3D space based on the acquired depth information and the image.
PCT/KR2012/001006 2011-02-11 2012-02-10 Device and method for providing augmented reality using image information WO2012108721A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2011-0012353 2011-02-11
KR1020110012353A KR101188105B1 (en) 2011-02-11 2011-02-11 Apparatus and method for providing argumented reality using image information

Publications (2)

Publication Number Publication Date
WO2012108721A2 true WO2012108721A2 (en) 2012-08-16
WO2012108721A3 WO2012108721A3 (en) 2012-12-20

Family

ID=46639081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/001006 WO2012108721A2 (en) 2011-02-11 2012-02-10 Device and method for providing augmented reality using image information

Country Status (2)

Country Link
KR (1) KR101188105B1 (en)
WO (1) WO2012108721A2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014151366A1 (en) * 2013-03-15 2014-09-25 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems
DE102013210789A1 (en) 2013-06-10 2014-12-11 Robert Bosch Gmbh Augmented reality system and method for generating and displaying augmented reality object representations for a vehicle
US8928695B2 (en) 2012-10-05 2015-01-06 Elwha Llc Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors
US9077647B2 (en) 2012-10-05 2015-07-07 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US9105126B2 (en) 2012-10-05 2015-08-11 Elwha Llc Systems and methods for sharing augmentation data
US9111384B2 (en) 2012-10-05 2015-08-18 Elwha Llc Systems and methods for obtaining and using augmentation data and for sharing usage data
US9141188B2 (en) 2012-10-05 2015-09-22 Elwha Llc Presenting an augmented view in response to acquisition of data inferring user activity
US9639964B2 (en) 2013-03-15 2017-05-02 Elwha Llc Dynamically preserving scene elements in augmented reality systems
US9671863B2 (en) 2012-10-05 2017-06-06 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US10109075B2 (en) 2013-03-15 2018-10-23 Elwha Llc Temporal element restoration in augmented reality systems
US10269179B2 (en) 2012-10-05 2019-04-23 Elwha Llc Displaying second augmentations that are based on registered first augmentations

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101285075B1 (en) * 2011-11-24 2013-07-17 팅크웨어(주) Method and apparatus for providing augmented reality view mode using sensor data and lane information
KR102406490B1 (en) * 2014-12-01 2022-06-10 현대자동차주식회사 Electronic apparatus, control method of electronic apparatus, computer program and computer readable recording medium
KR102411046B1 (en) * 2015-06-05 2022-06-20 엘지디스플레이 주식회사 Bar type display apparatus and vehicle comprising the same
KR102407296B1 (en) * 2015-07-30 2022-06-10 현대오토에버 주식회사 Apparatus and method of displaying point of interest
KR102434406B1 (en) * 2016-01-05 2022-08-22 한국전자통신연구원 Augmented Reality device based on recognition spacial structure and method thereof
KR101988128B1 (en) 2017-10-11 2019-06-11 이태홍 A railway running device of electric wheel using augmented reality
US10417829B2 (en) 2017-11-27 2019-09-17 Electronics And Telecommunications Research Institute Method and apparatus for providing realistic 2D/3D AR experience service based on video image
KR102067823B1 (en) * 2017-11-27 2020-01-17 한국전자통신연구원 Method and apparatus for operating 2d/3d augument reality technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080077391A (en) * 2005-12-02 2008-08-22 코닌클리케 필립스 일렉트로닉스 엔.브이. Stereoscopic image display method and apparatus, method for generating 3d image data from a 2d image data input and an apparatus for generating 3d image data from a 2d image data input
KR20090065223A (en) * 2007-12-17 2009-06-22 한국전자통신연구원 Apparatus for displaying guidance object and method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100472823B1 (en) 2002-10-21 2005-03-08 학교법인 한양학원 Method for detecting lane and system therefor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080077391A (en) * 2005-12-02 2008-08-22 코닌클리케 필립스 일렉트로닉스 엔.브이. Stereoscopic image display method and apparatus, method for generating 3d image data from a 2d image data input and an apparatus for generating 3d image data from a 2d image data input
KR20090065223A (en) * 2007-12-17 2009-06-22 한국전자통신연구원 Apparatus for displaying guidance object and method thereof

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9448623B2 (en) 2012-10-05 2016-09-20 Elwha Llc Presenting an augmented view in response to acquisition of data inferring user activity
US10180715B2 (en) 2012-10-05 2019-01-15 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US10665017B2 (en) 2012-10-05 2020-05-26 Elwha Llc Displaying in response to detecting one or more user behaviors one or more second augmentations that are based on one or more registered first augmentations
US8928695B2 (en) 2012-10-05 2015-01-06 Elwha Llc Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors
US8941689B2 (en) 2012-10-05 2015-01-27 Elwha Llc Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors
US9077647B2 (en) 2012-10-05 2015-07-07 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US9105126B2 (en) 2012-10-05 2015-08-11 Elwha Llc Systems and methods for sharing augmentation data
US9111384B2 (en) 2012-10-05 2015-08-18 Elwha Llc Systems and methods for obtaining and using augmentation data and for sharing usage data
US9674047B2 (en) 2012-10-05 2017-06-06 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US9141188B2 (en) 2012-10-05 2015-09-22 Elwha Llc Presenting an augmented view in response to acquisition of data inferring user activity
US10713846B2 (en) 2012-10-05 2020-07-14 Elwha Llc Systems and methods for sharing augmentation data
US10269179B2 (en) 2012-10-05 2019-04-23 Elwha Llc Displaying second augmentations that are based on registered first augmentations
US9111383B2 (en) 2012-10-05 2015-08-18 Elwha Llc Systems and methods for obtaining and using augmentation data and for sharing usage data
US9671863B2 (en) 2012-10-05 2017-06-06 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US10254830B2 (en) 2012-10-05 2019-04-09 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US10109075B2 (en) 2013-03-15 2018-10-23 Elwha Llc Temporal element restoration in augmented reality systems
US10025486B2 (en) 2013-03-15 2018-07-17 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems
US10628969B2 (en) 2013-03-15 2020-04-21 Elwha Llc Dynamically preserving scene elements in augmented reality systems
US9639964B2 (en) 2013-03-15 2017-05-02 Elwha Llc Dynamically preserving scene elements in augmented reality systems
WO2014151366A1 (en) * 2013-03-15 2014-09-25 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems
DE102013210789A1 (en) 2013-06-10 2014-12-11 Robert Bosch Gmbh Augmented reality system and method for generating and displaying augmented reality object representations for a vehicle
EP2813999A2 (en) 2013-06-10 2014-12-17 Robert Bosch Gmbh Augmented reality system and method of generating and displaying augmented reality object representations for a vehicle

Also Published As

Publication number Publication date
WO2012108721A3 (en) 2012-12-20
KR20120092352A (en) 2012-08-21
KR101188105B1 (en) 2012-10-09

Similar Documents

Publication Publication Date Title
WO2012108721A2 (en) Device and method for providing augmented reality using image information
JP3437555B2 (en) Specific point detection method and device
KR101269981B1 (en) Bird's-eye image forming device, bird's-eye image forming method, and recording medium
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
JP6082802B2 (en) Object detection device
WO2005088971A1 (en) Image generation device, image generation method, and image generation program
JP6820561B2 (en) Image processing device, display device, navigation system, image processing method and program
JP2008128827A (en) Navigation device, navigation method, and program thereof
KR101285075B1 (en) Method and apparatus for providing augmented reality view mode using sensor data and lane information
JP2011170599A (en) Outdoor structure measuring instrument and outdoor structure measuring method
KR101996241B1 (en) Device and method for providing 3d map representing positon of interest in real time
KR20180120456A (en) Apparatus for providing virtual reality contents based on panoramic image and method for the same
JP6345381B2 (en) Augmented reality system
JP5825713B2 (en) Dangerous scene reproduction device for vehicles
KR20090072523A (en) Method for distance estimation and apparatus for the same
CN112484743B (en) Vehicle-mounted HUD fusion live-action navigation display method and system thereof
WO2018101746A2 (en) Apparatus and method for reconstructing road surface blocked area
JP2003319388A (en) Image processing method and apparatus
JP4696925B2 (en) Image processing device
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
WO2009151220A2 (en) User-view output system and method
JPWO2020039897A1 (en) Station monitoring system and station monitoring method
JP5196426B2 (en) Navigation device
CN113807282A (en) Data processing method and device and readable storage medium
KR101351611B1 (en) Navigator using real time image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12744612

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/12/2013)

122 Ep: pct application non-entry in european phase

Ref document number: 12744612

Country of ref document: EP

Kind code of ref document: A2