WO2011083929A2 - Method, system, and computer-readable recording medium for providing information about an object using a viewing frustum
- Publication number
- WO2011083929A2 (PCT/KR2010/009278)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- viewing frustum
- interest
- viewing
- information
- user terminal
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/403—Edge-driven scaling; Edge-based scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Definitions
- The present invention relates to a method, system, and computer-readable recording medium for providing information about an object using a viewing frustum. More specifically, it relates to calculating a degree of interest in a real-world object by referring to a viewing frustum that is specified with a user terminal device as its viewpoint when an image is captured by the user terminal device or input through it in a preview state, and to providing information about the object accordingly.
- Geographic information, which used to be provided in the form of printed booklets, is increasingly converted into digital data.
- Typical forms of digitized geographic information include electronic map services provided online and navigation systems installed in automobiles.
- Digitized geographic information of this kind can offer users various convenience functions by being combined with user interfaces such as search functions, and when the geographic information changes it can easily be kept up to date through remote updates and the like, so it is superior to conventional printed matter in terms of the currency of the information.
- Recently, Augmented Reality (AR) technology has been introduced, which synthesizes and displays additional information such as computer graphics (CG) and text on input images captured by a user terminal device in real time. According to augmented reality technology, a better environment can be provided to the user because the additional information can be visually superimposed on a screen containing the real world that the user actually sees.
- the object of the present invention is to solve all the above-mentioned problems.
- Another object of the present invention is to automatically calculate a degree of interest in a real-world object by referring to viewing frustums specified with user terminal devices as viewpoints, and to differentially provide additional information about the object according to the calculated degree of interest.
- It is another object of the present invention to provide, on the user terminal device and in augmented reality form, additional information about objects on the map that is suited to the user's situation, by referring to context information such as the user's demographic attributes and the time zone.
- According to one aspect of the present invention, there is provided a method for providing information about an object using a viewing frustum, comprising: (a) specifying at least one viewing frustum having a user terminal device as its viewpoint; and (b) calculating a degree of interest in an object with reference to the object commonly included in a first viewing frustum having a first user terminal device as its viewpoint and a second viewing frustum having a second user terminal device as its viewpoint.
- According to another aspect of the present invention, there is provided a system for providing information about an object using a viewing frustum, comprising: a viewing frustum determination unit that specifies at least one viewing frustum having a user terminal device as its viewpoint; and a viewing frustum analysis unit that calculates a degree of interest in an object with reference to the object commonly included in a first viewing frustum having a first user terminal device as its viewpoint and a second viewing frustum having a second user terminal device as its viewpoint.
- According to the present invention, a user's degree of interest in a real-world object is automatically calculated with reference to viewing frustums, and additional information about the object can be provided differentially according to the calculated degree of interest, so additional information about objects that have attracted much attention from users can be provided effectively to other users.
- According to the present invention, additional information about objects suited to a user's situation (for example, additional information about a building of high interest to men in their thirties, or about a store of high interest during the daytime) can be provided, and the type of user can be subdivided to achieve the effect of providing additional information about objects of interest to that user.
- FIG. 1 is a diagram schematically illustrating a configuration of an entire system for providing information on an object with reference to a viewing frustum according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an internal configuration of the object information providing system 200 according to an embodiment of the present invention.
- FIG. 3 is a diagram exemplarily illustrating a shape of a viewing frustum according to an exemplary embodiment of the present invention.
- FIG. 4 is a diagram illustrating a positional relationship between a viewing frustum and an object according to an embodiment of the present invention.
- 260: control unit
- A viewing frustum refers to the three-dimensional region included in the field of view of a photographing apparatus, such as a camera, when an image is photographed by the apparatus or input in a preview state. It is specified with the photographing apparatus as its viewpoint and, depending on the type of photographing lens, may be an infinite region in the form of a cone or polygonal pyramid, or a finite region obtained by cutting such a cone or pyramid with a near plane and/or a far plane perpendicular to the line of sight.
- A viewing frustum referred to in this specification may be formed so as to pass through at least a part of an object (a building, etc.) existing in the real world, and viewing frustums specified using different user terminal devices as viewpoints may have a common region in which they overlap one another.
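To make the geometry concrete, here is a minimal sketch (my own illustration, not part of the patent) of a finite viewing frustum represented by a viewpoint, a line-of-sight direction, a field-of-view half-angle, and near/far distances. For simplicity the pyramid is approximated by a cone, and all names and default values are assumptions.

```python
import math

class ViewingFrustum:
    """A finite viewing frustum: viewpoint + line of sight + half-angle + near/far.

    Illustrative sketch only; the patent does not prescribe a data layout,
    and the pyramid is approximated here by a cone about the line of sight.
    """
    def __init__(self, viewpoint, direction, half_angle_deg, near=10.0, far=5000.0):
        self.viewpoint = viewpoint            # (x, y, z) position of the terminal
        norm = math.sqrt(sum(c * c for c in direction))
        self.direction = tuple(c / norm for c in direction)  # unit line of sight
        self.cos_half = math.cos(math.radians(half_angle_deg))
        self.near, self.far = near, far       # near/far plane distances

    def contains(self, point):
        """True if `point` lies inside the (cone-approximated) frustum."""
        v = tuple(p - q for p, q in zip(point, self.viewpoint))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0 or not (self.near <= dist <= self.far):
            return False
        # cosine of the angle between the line of sight and the vector to the point
        cos_angle = sum(a * b for a, b in zip(self.direction, v)) / dist
        return cos_angle >= self.cos_half

f = ViewingFrustum(viewpoint=(0, 0, 0), direction=(1, 0, 0), half_angle_deg=30)
print(f.contains((100, 10, 0)))   # point well inside the field of view -> True
print(f.contains((0, 100, 0)))    # point perpendicular to the line of sight -> False
```

A real implementation would test against the exact pyramid and take the viewpoint and direction from the terminal's position and orientation sensors.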
- FIG. 1 is a diagram schematically illustrating a configuration of an entire system for providing information on an object with reference to a viewing frustum according to an embodiment of the present invention.
- the entire system may include a communication network 100, an object information providing system 200, and a user terminal device 300.
- The communication network 100 may be configured in any communication mode, wired or wireless, and may include various networks such as a local area network (LAN), a metropolitan area network (MAN), and a wide area network (WAN).
- Preferably, the communication network 100 according to the present invention may be the known World Wide Web (WWW).
- The object information providing system 200 may perform a function of automatically calculating a degree of interest in an object appearing in an image input through a user terminal device, by referring to the viewing frustum specified with that terminal device as its viewpoint, and of differentially providing additional information about the object according to the calculated degree of interest.
- The user terminal device 300 is a digital device with a function enabling it to connect to and communicate with the object information providing system 200; any digital device equipped with memory means and a microprocessor, such as a personal computer (e.g., a desktop or laptop computer), a workstation, a PDA, a web pad, or a mobile phone, may be adopted as the user terminal device 300 according to the present invention.
- FIG. 2 is a diagram illustrating an internal configuration of the object information providing system 200 according to an embodiment of the present invention.
- The object information providing system 200 according to an embodiment of the present invention may include a viewing frustum determination unit 210, a viewing frustum analysis unit 220, an object information providing unit 230, a database 240, a communication unit 250, and a control unit 260.
- At least some of the viewing frustum determination unit 210, the viewing frustum analysis unit 220, the object information providing unit 230, the database 240, the communication unit 250, and the control unit 260 may be program modules that communicate with the user terminal device 300. Such program modules may be included in the object information providing system 200 in the form of an operating system, application modules, or other program modules, and may be physically stored on various known storage devices.
- The viewing frustum determination unit 210 specifies and stores the viewing frustum having the user terminal device 300 as its viewpoint when an image is captured by the user terminal device 300 or input in the preview state.
- The user may use the user terminal device 300 to perform a meaningful action, such as image capture or image preview, on an object of interest, so the viewing frustum specified with the user terminal device 300 as its viewpoint may include all or part of an object, such as a building or a store, that the user is interested in.
- The viewing frustum may be a three-dimensional region defined in three-dimensional space, for example an infinite region in the form of a cone or polygonal pyramid having the viewpoint as its apex and extending to infinite height.
- the shape of the viewing frustum may vary depending on the type of the photographing lens.
- the viewing frustum may be defined as a polygonal pyramid such as a square pyramid.
- However, the shape of the viewing frustum according to the present invention is not limited to the embodiments listed above; the viewing frustum may also be a finite region, such as a trapezoidal cylinder or trapezoidal polyhedron, obtained by cutting the cone or pyramid with a near plane or a far plane perpendicular to the line of sight.
- FIG. 3 is a diagram exemplarily illustrating a shape of a viewing frustum according to an exemplary embodiment of the present invention.
- Referring to FIG. 3, the viewing frustum 310 may have the user terminal device 300 as its viewpoint and, in perspective projection, be defined as a finite region bounded by the near plane 320 and the far plane 330.
- Here, the distances from the viewpoint to the near plane 320 and the far plane 330 may be set uniformly for all viewing frustums, or adaptively according to the object to be photographed. For example, when the object to be photographed is a building, the distances from the viewpoint to the near plane 320 and the far plane 330 may be set to 10 m and 5000 m, respectively, so that the building can be completely included in the viewing frustum.
- Alternatively, the distance to the far plane 330 may be left unbounded, so that the viewing frustum is defined as an infinite region.
- the position and direction of the viewing frustum may be determined by referring to the position and direction of the user terminal device 300. That is, the viewing frustum of the user terminal device 300 as a viewpoint may be specified in the three-dimensional space on the map by the location and the direction of the user terminal device 300.
- the information about the position and the direction of the user terminal device 300 may be obtained by a known Global Positioning System (GPS) module and acceleration sensing means, respectively.
- the position and direction of the user terminal device 300 may be changed at any time due to the movement or manipulation of the user.
- the position and the direction of the viewing frustum may also be changed at any time.
- When a video is input, a viewing frustum may be specified from each of a plurality of frames constituting the video.
- the viewing frustum determiner 210 may perform a function of acquiring the situation information at the time when the viewing frustum is specified.
- The context information on the viewing frustum may include the time zone in which the viewing frustum was specified and demographic information, such as gender and age, of the user of the user terminal device serving as the viewpoint of the viewing frustum.
- the contextual information may be utilized when the object information providing unit 230 to be described later provides object information targeted to the user.
- The viewing frustum analysis unit 220 performs a function of analyzing a plurality of viewing frustums specified from a plurality of user terminal devices 300 and calculating a degree of interest in an object existing in the real world.
- FIG. 4 is a diagram illustrating a positional relationship between a viewing frustum and an object according to an embodiment of the present invention.
- Referring to FIG. 4, assume that the first and second viewing frustums 410 and 420 are infinite regions in the form of square pyramids having different points 415 and 425, respectively, as their viewpoints.
- A common region 430, in which the frustums overlap each other, may exist between the first and second viewing frustums 410 and 420 specified in the three-dimensional space on the map. This common region may be evaluated as a region of interest to both the first and second users who specified the first and second viewing frustums 410 and 420, respectively, by capturing images with their user terminal devices.
- Accordingly, when object A 440 is included in the common region 430 and object B 450 is included only in the region of the first viewing frustum 410, the viewing frustum analysis unit 220 according to an embodiment of the present invention may determine that object A 440 is of high interest to both the first and second users while object B 450 is of high interest only to the first user, and therefore that the degree of interest in object A 440 is higher than the degree of interest in object B 450.
- That is, the viewing frustum analysis unit 220 may determine that an object located in a common region crossed by a larger number of different viewing frustums has a higher degree of interest.
- In addition, the viewing frustum analysis unit 220 may determine the degree of interest in an object to be higher the closer the object is to the viewpoint of the viewing frustum, and higher the larger the proportion of the common region occupied by the object.
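The interest-scoring rules just listed (more intersecting frustums, closeness to the viewpoint, share of the common region) can be sketched roughly as follows. This is a hypothetical reading, with frustums abstracted as containment predicates and objects as sample points, not the patent's actual formula.

```python
import math

def interest_score(object_points, frustums, viewpoints):
    """Score interest in an object from viewing-frustum coverage.

    object_points: sample points on/in the object
    frustums:      one containment predicate (point -> bool) per frustum
    viewpoints:    matching (x, y, z) viewpoint per frustum
    Illustrative weighting only: points lying in a common (overlap) region
    count more, and closeness to a viewpoint adds a small bonus.
    """
    score = 0.0
    for p in object_points:
        covering = [i for i, f in enumerate(frustums) if f(p)]
        if len(covering) >= 2:          # point lies in a common region
            score += len(covering)      # more intersecting frustums, more interest
        for i in covering:
            score += 1.0 / (1.0 + math.dist(p, viewpoints[i]))  # closeness bonus
    return score / max(len(object_points), 1)

# Two toy "frustums": slabs extending 100 m in front of each viewpoint.
vp1, vp2 = (0.0, 0.0, 0.0), (50.0, 0.0, 0.0)
in_f1 = lambda p: 0 < p[0] - vp1[0] < 100
in_f2 = lambda p: 0 < p[0] - vp2[0] < 100

object_a = [(60.0, 0.0, 0.0)]   # inside both frustums (like object A 440)
object_b = [(20.0, 0.0, 0.0)]   # inside only the first (like object B 450)
print(interest_score(object_a, [in_f1, in_f2], [vp1, vp2]) >
      interest_score(object_b, [in_f1, in_f2], [vp1, vp2]))   # True
```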
- Meanwhile, viewing frustums may fail to overlap one another even though they extend toward the same object.
- For example, a viewing frustum specified when photographing a high floor of the front of building B and a viewing frustum specified when photographing a low floor of the side of building B may have no common region in which they overlap, even though both extend toward building B.
- Even when there is no common region in which the corresponding viewing frustums overlap, the viewing frustum analysis unit 220 may first recognize which object is wholly or partially included in each viewing frustum, using information about the position and direction of the viewing frustum or object recognition techniques; when the same object is recognized as being included in different viewing frustums, it may perform a function of determining the degree of interest in that object in the same way as if the frustums were located in an overlapping common region.
- As a method of recognizing an object included in a viewing frustum, a method of locating the frustum in the three-dimensional space on the map by referring to its position and direction, obtained through a Global Positioning System (GPS) module and acceleration sensing means, may be applied, or a method of analyzing the appearance (contour, pattern, etc.) of the object displayed in the input image corresponding to the viewing frustum to recognize what the object is may be applied.
- The object information providing unit 230 performs a function of differentially providing additional information about at least one object with reference to the degree of interest calculated by the viewing frustum analysis unit 220.
- For example, when providing additional information about points of interest (POI) displayed in augmented reality, the object information providing unit 230 may provide additional information only for objects whose degree of interest is equal to or greater than a preset level, or may differentially adjust the size of a visual effect, such as an icon pointing to the object, according to the object's degree of interest.
- For example, suppose that user B, located near the station, receives additional information about objects in augmented reality form through a user terminal device equipped with a photographing device.
- User B may then selectively receive additional information only for the "Gangnam Finance Tower", determined to be of high interest to a plurality of users, while additional information for the "Sacred Heights Building", determined to be of low interest, is withheld, so that only useful information is intensively provided according to preference.
- In addition, the object information providing unit 230 may perform a function of providing additional information only for objects that correspond to the situation of the user receiving the information, with reference to the context information on the viewing frustum.
- Here, the context information on the viewing frustum may include the time zone in which the viewing frustum was specified and demographic information, such as gender and age, of the user of the user terminal device serving as the viewpoint of the viewing frustum.
- For example, when user C is a woman in her twenties receiving information during the daytime, the object information providing unit 230 may provide additional information only for objects suited to user C's situation (e.g., department stores, coffee shops, etc.), with reference to degrees of interest calculated by referring only to viewing frustums specified during the daytime by other users who are women in their twenties.
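Context-targeted provision of this kind could look something like the following sketch, where interest counts are computed only from frustums whose stored context (time of day, gender, age band) matches the receiving user. All field names and the daytime range are my own assumptions, not the patent's schema.

```python
from dataclasses import dataclass

@dataclass
class FrustumRecord:
    """A stored viewing frustum plus the context captured with it.

    Field names are illustrative assumptions, not the patent's schema.
    """
    object_id: str
    hour: int        # local hour when the frustum was specified
    gender: str
    age_band: str    # e.g. "20s", "30s"

def targeted_interest(records, viewer_gender, viewer_age_band, daytime=range(9, 18)):
    """Count interest per object using only frustums whose context matches the viewer."""
    counts = {}
    for r in records:
        if (r.gender == viewer_gender and r.age_band == viewer_age_band
                and r.hour in daytime):
            counts[r.object_id] = counts.get(r.object_id, 0) + 1
    return counts

records = [
    FrustumRecord("department_store", 11, "F", "20s"),
    FrustumRecord("department_store", 14, "F", "20s"),
    FrustumRecord("hardware_store", 13, "M", "30s"),
]
print(targeted_interest(records, "F", "20s"))   # {'department_store': 2}
```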
- In the above description, viewing frustums defined in the three-dimensional space on the map have mainly been discussed, but the viewing frustum according to the present invention is not necessarily limited to that embodiment.
- For example, the viewing frustum analysis unit 220 may project viewing frustums specified in three-dimensional space onto the two-dimensional space on the map and calculate the degree of interest with reference to a common region between the resulting projection regions.
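For the two-dimensional variant, each projected frustum becomes a convex polygon on the map, and the common region is their intersection. A standard way to compute it is Sutherland-Hodgman clipping; the sketch below (my illustration, not the patent's procedure) clips one projected triangle against another and measures the overlap area as a crude interest signal.

```python
def clip_polygon(subject, clipper):
    """Sutherland-Hodgman clipping of convex polygon `subject` by convex
    `clipper` (both lists of (x, y) vertices in counter-clockwise order).
    Returns the intersection polygon, i.e. the 'common region' of two
    projected viewing frustums on the 2-D map."""
    def inside(p, a, b):
        # point p is on the left of directed edge a->b (the CCW interior side)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def line_intersect(p1, p2, a, b):
        # intersection of segment p1-p2 with the infinite line through a-b
        x1, y1, x2, y2 = *p1, *p2
        x3, y3, x4, y4 = *a, *b
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = subject
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        if not output:
            break
        input_, output = output, []
        for p, q in zip(input_, input_[1:] + input_[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(line_intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(line_intersect(p, q, a, b))
    return output

def polygon_area(poly):
    """Shoelace area, usable as a simple overlap-based interest measure."""
    return 0.5 * abs(sum(p[0] * q[1] - q[0] * p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

# Two projected frustums (triangles fanning out from their viewpoints):
f1 = [(0.0, 0.0), (10.0, -3.0), (10.0, 3.0)]
f2 = [(5.0, -5.0), (8.0, 5.0), (2.0, 5.0)]
common = clip_polygon(f1, f2)
print(polygon_area(common) > 0)   # the two projections do overlap
```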
- The database 240 stores information about the shape, near plane, far plane, position, direction, and the like of each viewing frustum specified with at least one user terminal device as its viewpoint.
- Context information may be stored in association with each viewing frustum.
- The context information stored in association with a viewing frustum may include the time zone in which the viewing frustum was specified and demographic information, such as gender and age, of the user of the user terminal device serving as the viewpoint of the viewing frustum.
- In this specification, a database is a concept that includes not only a database in the narrow sense but also a database in the broad sense, including data records based on a computer file system; it should be understood that even a simple set of operation-processing logs can be included in the database referred to in the present invention, as long as predetermined data can be extracted from it by retrieval.
- Although the database 240 is illustrated in FIG. 2 as being included in the object information providing system 200, the database 240 may be configured separately from the object information providing system 200 according to the needs of those skilled in the art implementing the present invention.
- the communication unit 250 performs a function of allowing the object information providing system 200 to communicate with an external device such as the user terminal device 300.
- The control unit 260 performs a function of controlling the flow of data among the viewing frustum determination unit 210, the viewing frustum analysis unit 220, the object information providing unit 230, the database 240, and the communication unit 250.
- That is, the control unit 260 controls the flow of data from the outside or between the components of the object information providing system 200 so that the viewing frustum determination unit 210, the viewing frustum analysis unit 220, the object information providing unit 230, the database 240, and the communication unit 250 each perform their unique functions.
- Embodiments according to the present invention described above may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- Program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.
Abstract
Description
Claims (33)
- A method for providing information about an object using a viewing frustum, the method comprising: (a) specifying at least two viewing frustums, each having a user terminal device as its viewpoint; and (b) calculating a degree of interest in an object with reference to the object commonly included in a first viewing frustum having a first user terminal device as its viewpoint and a second viewing frustum having a second user terminal device as its viewpoint.
- The method of claim 1, wherein the viewing frustum is a region included in the field of view of the user terminal device when an image is captured by the user terminal device or an image is input through the user terminal device in a preview state.
- The method of claim 1, wherein the viewing frustum is an infinite cone or polygonal pyramid having the viewpoint as its apex, or has the form of a trapezoidal cylinder or polyhedron obtained by sectioning the infinite cone or polygonal pyramid with one or two planes perpendicular to its height direction.
- The method of claim 1, wherein the position and direction of the viewing frustum are determined with reference to the position and direction of the user terminal device.
- The method of claim 1, wherein in step (b), the degree of interest in the object is calculated with reference to a common region in which the first and second viewing frustums overlap each other.
- The method of claim 5, wherein in step (b), the degree of interest of an object at least partially included in the common region is determined to be higher than that of an object not included in the common region at all.
- The method of claim 5, wherein in step (b), the degree of interest of an object is determined to be higher as the proportion of the common region occupied by the object is larger.
- The method of claim 1, wherein in step (b), the degree of interest in the object is determined to be higher as the number of viewing frustums commonly including the object is larger.
- The method of claim 1, wherein in step (b), the degree of interest of an object is determined to be higher as the object is closer to the viewpoint of the viewing frustum.
- The method of claim 1, wherein in step (b), first and second objects at least partially included in the first and second viewing frustums, respectively, are recognized, and the degree of interest in the object is calculated by determining whether the first and second objects are identical to each other.
- The method of claim 1, wherein step (b) comprises: (b1) projecting the first and second viewing frustums onto a two-dimensional space on a map to obtain first and second projection regions, respectively; and (b2) calculating, in the two-dimensional space on the map, the degree of interest in the object with reference to an object commonly included in the first and second projection regions.
- The method of claim 1, further comprising: (c) differentially providing additional information about the object according to the calculated degree of interest.
- The method of claim 12, wherein in step (c), additional information is provided only for objects whose calculated degree of interest is equal to or greater than a preset threshold.
- The method of claim 12, wherein in step (c), the additional information is provided in the form of augmented reality (AR).
- The method of claim 12, wherein in step (c), additional information about an object targeted to the situation of a third user who receives the additional information is provided with reference to context information about the viewing frustum and context information about the third user.
- The method of claim 15, wherein the context information about the viewing frustum includes at least one of the time zone in which the viewing frustum was specified and demographic information of the user of the user terminal device serving as the viewpoint of the viewing frustum, and the context information about the third user includes at least one of the place where the third user receives the additional information, the time zone, and demographic information of the third user.
- A system for providing information about an object using a viewing frustum, the system comprising: a viewing frustum determination unit that specifies at least two viewing frustums, each having a user terminal device as its viewpoint; and a viewing frustum analysis unit that calculates a degree of interest in an object with reference to the object commonly included in a first viewing frustum having a first user terminal device as its viewpoint and a second viewing frustum having a second user terminal device as its viewpoint.
- The system of claim 17, wherein the viewing frustum is a region included in the field of view of the user terminal device when an image is captured by the user terminal device or an image is input through the user terminal device in a preview state.
- The system of claim 17, wherein the viewing frustum is an infinite cone or polygonal pyramid having the viewpoint as its apex, or has the form of a trapezoidal cylinder or polyhedron obtained by sectioning the infinite cone or polygonal pyramid with one or two planes perpendicular to its height direction.
- The system of claim 17, wherein the position and direction of the viewing frustum are determined with reference to the position and direction of the user terminal device.
- The system of claim 17, wherein the viewing frustum analysis unit calculates the degree of interest in the object with reference to a common region in which the first and second viewing frustums overlap each other.
- The system of claim 21, wherein the viewing frustum analysis unit determines the degree of interest of an object at least partially included in the common region to be higher than that of an object not included in the common region at all.
- The system of claim 21, wherein the viewing frustum analysis unit determines the degree of interest of an object to be higher as the proportion of the common region occupied by the object is larger.
- The system of claim 17, wherein the viewing frustum analysis unit determines the degree of interest in the object to be higher as the number of viewing frustums commonly including the object is larger.
- The system of claim 17, wherein the viewing frustum analysis unit determines the degree of interest of an object to be higher as the object is closer to the viewpoint of the viewing frustum.
- The system of claim 17, wherein the viewing frustum analysis unit recognizes first and second objects at least partially included in the first and second viewing frustums, respectively, and calculates the degree of interest in the object by determining whether the first and second objects are identical to each other.
- The system of claim 17, wherein the viewing frustum analysis unit projects the first and second viewing frustums onto a two-dimensional space on a map to obtain first and second projection regions, respectively, and calculates, in the two-dimensional space on the map, the degree of interest in the object with reference to an object commonly included in the first and second projection regions.
- The system of claim 17, further comprising an object information providing unit that differentially provides additional information about the object according to the calculated degree of interest.
- The system of claim 28, wherein the object information providing unit provides additional information only for objects whose calculated degree of interest is equal to or greater than a preset threshold.
- The system of claim 28, wherein the additional information is provided in the form of augmented reality (AR).
- The system of claim 28, wherein the object information providing unit provides additional information about an object targeted to the situation of a third user who receives the additional information, with reference to context information about the viewing frustum and context information about the third user.
- The system of claim 31, wherein the context information about the viewing frustum includes at least one of the time zone in which the viewing frustum was specified and demographic information of the user of the user terminal device serving as the viewpoint of the viewing frustum, and the context information about the third user includes at least one of the place where the third user receives the additional information, the time zone, and demographic information of the third user.
- A computer-readable recording medium on which a computer program for executing the method according to any one of claims 1 to 16 is recorded.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP10842309.6A EP2525288A4 (en) | 2010-01-11 | 2010-12-23 | METHOD, SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM FOR PROVIDING INFORMATION ON AN OBJECT USING A VISUALIZATION CONE TRUNK |
JP2012547946A JP5263748B2 (ja) | 2010-01-11 | 2010-12-23 | ビューイングフラスタムを用いて客体に関する情報を提供するための方法、システム及びコンピュータ読み取り可能な記録媒体 |
US13/378,400 US8587615B2 (en) | 2010-01-11 | 2010-12-23 | Method, system, and computer-readable recording medium for providing information on an object using viewing frustums |
AU2010340461A AU2010340461B2 (en) | 2010-01-11 | 2010-12-23 | Method, system, and computer-readable recording medium for providing information on an object using a viewing frustum |
CN201080061211.6A CN102792266B (zh) | 2010-01-11 | 2010-12-23 | 使用视见平截头体提供关于目标的信息的方法、系统和计算机可读记录介质 |
US14/019,824 US8842134B2 (en) | 2010-01-11 | 2013-09-06 | Method, system, and computer-readable recording medium for providing information on an object using viewing frustums |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20100002340A KR100975128B1 (ko) | 2010-01-11 | 2010-01-11 | Method, system, and computer-readable recording medium for providing information on an object using a viewing frustum |
KR10-2010-0002340 | 2010-01-11 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/378,400 A-371-Of-International US8587615B2 (en) | 2010-01-11 | 2010-12-23 | Method, system, and computer-readable recording medium for providing information on an object using viewing frustums |
US14/019,824 Continuation US8842134B2 (en) | 2010-01-11 | 2013-09-06 | Method, system, and computer-readable recording medium for providing information on an object using viewing frustums |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011083929A2 true WO2011083929A2 (ko) | 2011-07-14 |
WO2011083929A3 WO2011083929A3 (ko) | 2011-11-03 |
Family
ID=42759487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2010/009278 WO2011083929A2 (ko) | 2010-01-11 | 2010-12-23 | Method, system, and computer-readable recording medium for providing information on an object using a viewing frustum |
Country Status (7)
Country | Link |
---|---|
US (2) | US8587615B2 (ko) |
EP (1) | EP2525288A4 (ko) |
JP (1) | JP5263748B2 (ko) |
KR (1) | KR100975128B1 (ko) |
CN (1) | CN102792266B (ko) |
AU (1) | AU2010340461B2 (ko) |
WO (1) | WO2011083929A2 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101666561B1 (ko) * | 2015-07-13 | 2016-10-24 | 한국과학기술원 | System and method for acquiring partial space in augmented space |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100989663B1 (ko) | 2010-01-29 | 2010-10-26 | (주)올라웍스 | Method, terminal device, and computer-readable recording medium for providing information on an object not included in the field of view of a terminal device |
US8798926B2 (en) | 2012-11-14 | 2014-08-05 | Navteq B.V. | Automatic image capture |
CN105205127B (zh) * | 2015-09-14 | 2019-06-04 | 北京航空航天大学 | Adaptive step-size database-building method and system for a liquid mass/volume characteristics database |
EP3506214A1 (en) * | 2017-12-28 | 2019-07-03 | Dassault Systèmes | Method for defining drawing planes for the design of a 3d object |
US20200082576A1 (en) | 2018-09-11 | 2020-03-12 | Apple Inc. | Method, Device, and System for Delivering Recommendations |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6018348A (en) * | 1997-12-31 | 2000-01-25 | Intel Corporation | Method for visibility culling |
US7883415B2 (en) * | 2003-09-15 | 2011-02-08 | Sony Computer Entertainment Inc. | Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion |
CA2406131A1 (en) * | 2002-09-30 | 2004-03-30 | Idelix Software Inc. | A graphical user interface using detail-in-context folding |
WO2004042662A1 (en) * | 2002-10-15 | 2004-05-21 | University Of Southern California | Augmented virtual environments |
JP4008333B2 (ja) * | 2002-10-25 | 2007-11-14 | 株式会社リアルビズ | Multi-image projection method using a plurality of projectors, projector apparatus for using the method, program, and recording medium |
JP4488233B2 (ja) * | 2003-04-21 | 2010-06-23 | 日本電気株式会社 | Video object recognition device, video object recognition method, and video object recognition program |
US7978887B2 (en) * | 2003-06-17 | 2011-07-12 | Brown University | Methods and apparatus for identifying subject matter in view data |
JP4262011B2 (ja) * | 2003-07-30 | 2009-05-13 | キヤノン株式会社 | Image presentation method and apparatus |
US20060195858A1 (en) * | 2004-04-15 | 2006-08-31 | Yusuke Takahashi | Video object recognition device and recognition method, video annotation giving device and giving method, and program |
FR2875320A1 (fr) * | 2004-09-15 | 2006-03-17 | France Telecom | Method and system for identifying an object in a photo, and program, recording medium, terminal, and server for implementing the system |
US7773832B2 (en) * | 2005-02-28 | 2010-08-10 | Fujifilm Corporation | Image outputting apparatus, image outputting method and program |
US7599894B2 (en) * | 2005-03-04 | 2009-10-06 | Hrl Laboratories, Llc | Object recognition using a cognitive swarm vision framework with attention mechanisms |
US7573489B2 (en) * | 2006-06-01 | 2009-08-11 | Industrial Light & Magic | Infilling for 2D to 3D image conversion |
US8269822B2 (en) * | 2007-04-03 | 2012-09-18 | Sony Computer Entertainment America, LLC | Display viewing system and methods for optimizing display view based on active tracking |
US8326048B2 (en) * | 2007-10-04 | 2012-12-04 | Microsoft Corporation | Geo-relevance for images |
US8390618B2 (en) * | 2008-03-03 | 2013-03-05 | Intel Corporation | Technique for improving ray tracing performance |
US8595218B2 (en) * | 2008-06-12 | 2013-11-26 | Intellectual Ventures Holding 67 Llc | Interactive display management systems and methods |
US8745090B2 (en) | 2008-12-22 | 2014-06-03 | IPointer, Inc. | System and method for exploring 3D scenes by pointing at a reference object |
- 2010
- 2010-01-11 KR KR20100002340A patent/KR100975128B1/ko not_active IP Right Cessation
- 2010-12-23 WO PCT/KR2010/009278 patent/WO2011083929A2/ko active Application Filing
- 2010-12-23 EP EP10842309.6A patent/EP2525288A4/en not_active Withdrawn
- 2010-12-23 CN CN201080061211.6A patent/CN102792266B/zh not_active Expired - Fee Related
- 2010-12-23 US US13/378,400 patent/US8587615B2/en active Active
- 2010-12-23 JP JP2012547946A patent/JP5263748B2/ja not_active Expired - Fee Related
- 2010-12-23 AU AU2010340461A patent/AU2010340461B2/en not_active Ceased
- 2013
- 2013-09-06 US US14/019,824 patent/US8842134B2/en active Active
Non-Patent Citations (1)
Title |
---|
See references of EP2525288A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101666561B1 (ko) * | 2015-07-13 | 2016-10-24 | 한국과학기술원 | System and method for acquiring partial space in augmented space |
WO2017010614A1 (ko) * | 2015-07-13 | 2017-01-19 | 한국과학기술원 | System and method for acquiring partial space in augmented space |
US10409447B2 (en) | 2015-07-13 | 2019-09-10 | Korea Advanced Institute Of Science And Technology | System and method for acquiring partial space in augmented space |
Also Published As
Publication number | Publication date |
---|---|
EP2525288A4 (en) | 2015-05-13 |
EP2525288A2 (en) | 2012-11-21 |
US8842134B2 (en) | 2014-09-23 |
KR100975128B1 (ko) | 2010-08-11 |
US8587615B2 (en) | 2013-11-19 |
US20120162258A1 (en) | 2012-06-28 |
JP5263748B2 (ja) | 2013-08-14 |
WO2011083929A3 (ko) | 2011-11-03 |
JP2013516687A (ja) | 2013-05-13 |
AU2010340461B2 (en) | 2013-09-12 |
AU2010340461A1 (en) | 2012-07-26 |
CN102792266A (zh) | 2012-11-21 |
US20140002499A1 (en) | 2014-01-02 |
CN102792266B (zh) | 2016-01-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011136608A9 (ko) | Method, terminal device, and computer-readable recording medium for providing augmented reality using an input image input to a terminal device and information related to the input image | |
WO2011139115A2 (ko) | Method, server, and computer-readable recording medium for accessing information on a person using augmented reality | |
WO2011083929A2 (ko) | Method, system, and computer-readable recording medium for providing information on an object using a viewing frustum | |
WO2015174729A1 (ko) | Augmented-reality provision method and system for providing spatial information, and recording medium and file distribution system | |
WO2011034308A2 (ko) | Method, system, and computer-readable recording medium for performing image matching on panoramic images using a graph structure | |
KR100985737B1 (ko) | Method, terminal device, and computer-readable recording medium for providing information on an object included in the field of view of a terminal device | |
WO2012093811A1 (ko) | Method, terminal device, and computer-readable recording medium for supporting collection of an object included in an input image | |
CN110619314A (zh) | Safety-helmet detection method and apparatus, and electronic device | |
EP2444942A1 (en) | Apparatus and method for providing augmented reality (AR) information | |
CN106165386A (zh) | Automated techniques for photo upload and selection | |
CN116134405A (zh) | Private control interfaces for extended reality | |
CN111062255A (zh) | Method, apparatus, device, and storage medium for labeling three-dimensional point clouds | |
CN110555876B (zh) | Method and apparatus for determining position | |
CN108712644A (zh) | TW_AR intelligent guide system and guide method | |
CN111967664A (zh) | Tour route planning method, apparatus, and device | |
CN107084740A (zh) | Navigation method and apparatus | |
KR20180120456A (ko) | Apparatus for providing virtual-reality content based on panoramic images, and method therefor | |
WO2011078596A2 (ko) | Method, system, and computer-readable recording medium for adaptively performing image matching according to circumstances | |
WO2011034306A2 (ko) | Method, system, and computer-readable recording medium for removing redundancy between panoramic images | |
KR20180133052A (ko) | Method for authoring augmented-reality content based on 360-degree images and video | |
WO2021210725A1 (ko) | Apparatus and method for processing point cloud information | |
CN111858987A (zh) | Method for viewing problems in CAD images, electronic device, and related products | |
CN110580275A (zh) | Map display method and apparatus | |
CN115460388B (zh) | Projection method of extended-reality device and related devices | |
JP7490743B2 (ja) | Information processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201080061211.6; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10842309; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 13378400; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2010340461; Country of ref document: AU |
| WWE | Wipo information: entry into national phase | Ref document number: 2010842309; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2012547946; Country of ref document: JP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2010340461; Country of ref document: AU; Date of ref document: 20101223; Kind code of ref document: A |
Ref document number: 2010340461 Country of ref document: AU Date of ref document: 20101223 Kind code of ref document: A |