WO2008059086A1 - System and method for displaying an enhanced image by applying enhanced-reality techniques - Google Patents

System and method for displaying an enhanced image by applying enhanced-reality techniques Download PDF

Info

Publication number
WO2008059086A1
WO2008059086A1 PCT/ES2007/000645 ES2007000645W WO2008059086A1 WO 2008059086 A1 WO2008059086 A1 WO 2008059086A1 ES 2007000645 W ES2007000645 W ES 2007000645W WO 2008059086 A1 WO2008059086 A1 WO 2008059086A1
Authority
WO
WIPO (PCT)
Prior art keywords
real
image
user
dimensional
zoom
Prior art date
Application number
PCT/ES2007/000645
Other languages
Spanish (es)
French (fr)
Inventor
Jose Ignacio Torres Sancho
Maria Teresa LINAZA Saldaña
Original Assignee
The Movie Virtual, S.L.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Movie Virtual, S.L.
Publication of WO2008059086A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/40Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0127Head-up displays characterised by optical features comprising devices increasing the depth of field
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • This invention relates to a method and a system for viewing an enlarged scene, which makes it possible to augment, with interesting content, a real scene or image that lies within the user's field of view.
  • the invention is mainly aimed at apparatus for the tourism and leisure sector, of the type of binoculars used for observing a view, although it is also conceived for other environments such as cultural institutions, fairs or nature tourism.
  • binoculars placed on a hill, which let tourists observe the surrounding area for a few minutes after inserting a coin, offer a panoramic view of the buildings and streets of an urban area and of its natural and cultural resources. Additionally, in some cases, they allow the user to zoom in on certain tourist attractions to see them more closely. This view can arouse in tourists the interest of visiting these attractions later, helping them identify new points of interest for a subsequent visit. However, it is often difficult to find anything in the field of view of the binoculars other than nearby forests or the sky itself. Even when a potentially interesting tourist attraction has been found, that interest is often lost due to the lack of images and information about that attraction. People are used to receiving information in a simple and entertaining way through different channels such as the Internet or television, using hyperlinks to obtain multimedia content that responds to their request.
  • the present invention satisfies the previously mentioned needs by means of a system and a method for displaying an enlarged image with additional multimedia content.
  • the invention detects the position of a display device comprised in the system, augments the user's view with graphic representations of graphic objects retrieved from a database and allows the user to interact with the multimedia information provided.
  • Another aspect of the invention provides a system and method for zooming in on the enlarged image, detecting the position of the display device and fixing the relative position of the graphic objects when zooming on the view.
  • FIG. 1 shows a system according to an embodiment of the invention.
  • Figure 2 shows the functional scheme of the invention.
  • Figure 3 shows the operation of the system by an example of use of the invention.
  • FIG. 5 shows an example of the method of the invention for zooming.
  • AR Augmented Reality
  • AV Augmented Virtuality
  • the invention is based on the application of Augmented Reality (AR) technologies to the traditional concept of binoculars in tourist environments; that is, the invention relates to a system that allows users to see a scene (for example, the view from a nearby hill) augmented by superimposing, using AR technologies, virtual information relating to the scene the user is observing.
  • augmenting the scene enhances the user's entertainment experience, since the virtual information provides additional content to the scene (for example, arrows that point at buildings and show their names).
  • the system also allows the user to interact with virtual information and obtain additional content (for example, access to text files, images or videos related to a specific tourist resource).
  • the system allows you to customize the contents based on different user profiles.
  • FIG 1 shows a typical system (1) according to an embodiment of the invention.
  • the system (1) basically comprises a real camera (2) that registers a real image (6), or real-time image, from the point of view of the user (13), and sends this real image (6) to a processing unit (3).
  • the processing unit (3) includes a database (4) that stores three-dimensional graphic objects (5) -not represented- that will be superimposed on the real image (6) using AR methods.
  • the AR methods convert the three-dimensional graphic objects (5) into two-dimensional virtual representations (5'), and then the two-dimensional virtual representations (5') of said three-dimensional graphic objects (5) are superimposed on the real image (6), forming the enlarged image (6').
  • Three-dimensional graphic objects (5) may include additional multimedia information (7) -not represented- capable of providing additional and more complex information about three-dimensional graphic objects (5).
  • the system (1) also includes a display device (8) through which the user (13) can observe the enlarged image (6'), that is, the real image (6) to which the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) have been added.
  • the system (1) includes a tracking system (9) to detect the current position of the display device (8), a fundamental element in the augmentation process. As shown in Figure 1, the display device (8) and the real camera (2) are preferably mounted on the same mechanical axis so that they move together as a single unit. In this way, the tracking system (9) calculates the position of the display device (8) and, therefore, the position of the real camera (2).
  • the system (1) also includes interaction means (11) that fulfill the user interface function for the system (1).
  • the system (1) may include a payment system (12) based on the insertion of coins or other means of payment.
  • the real camera (2) can be any type of camera that includes "autofocus" lenses for optical zoom, so that the user (13) can increase or decrease the zoom to observe small or distant objects more clearly.
  • the real camera (2) must be able to be controlled from an external control device capable of accessing and modifying the zoom and other settings or control parameters of the camera, for example, by an RS-232C serial cable connection. There are many cameras of this type available commercially.
  • the tracking system (9) is preferably based on inertial sensors. However, there is a large number of alternative positioning systems that can be used effectively in other embodiments. Since the system (1) is supposed to be used primarily in two directions (right / left, up / down), the simplest tracking techniques provide sufficient accuracy. However, the invention can use any tracking system that is able to determine the position of the display device (8) robustly and send the coordinates to the processing unit (3).
  • the display device (8) by which the user (13) perceives the enlarged image (6 ') that includes the real image (6) and the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) together with the additional multimedia content (7) is preferably a metaphor for conventional binoculars.
  • the display device (8) preferably comprises a display that is basically a non-transparent video display system, which can be non-stereoscopic, stereoscopic or autostereoscopic.
  • the display device (8) can be one of the virtual binoculars or telescopes available in the market with stereoscopic capability.
  • semi-transparent and autostereoscopic devices may be used to visualize the enlarged image (6 ').
  • the preferred embodiment of the invention includes seven buttons for interacting with the system (1) in a simple and ergonomic manner.
  • the buttons (11) have been distributed between the left side (19) and the right side (20) of the system (1).
  • On the left side (19) of the system (1) there are two buttons that allow you to interact with the enlarged image (6 ').
  • the user (13) can zoom to enlarge the image with one of the buttons and zoom to reduce it with the other.
  • On the right side (20) of the system (1) there are five buttons: a central button (preferably one color) and four buttons around it (preferably a color different from the central button).
  • the middle button is the "enter" button, which is used to click on the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) and also to choose between the menus of additional multimedia content (7).
  • the buttons that surround it serve to move through the menus that allow the selection of additional multimedia content (7).
  • the system (1) preferably includes two different databases (4): a database for augmentation and a database of self-managed content.
  • the augmentation database includes the three-dimensional graphic objects (5) associated with the points of interest for the user (13), including the names of the main tourist attractions and other graphic objects.
  • the self-managed content database stores additional multimedia content (7), including videos, movie clips, interactive 3D panoramas or even three-dimensional models of existing and non-existent tourist attractions.
  • the preferred embodiment of the invention also includes an authoring tool to simplify the manipulation of three-dimensional graphic objects (5), their position in the enlarged image (6 ') and the additional multimedia content (7) that they can display.
  • the program can work with XML files to change them or create new ones suitable for the main program.
  • the preferred embodiment of the invention includes up to ten points of interest with a maximum of five sub-objects to choose from within the menu.
  • These three-dimensional graphic objects (5) include the points of tourist interest and their corresponding markers in the enlarged image (6 '), while the sub-objects represent different types of additional multimedia content (7) that are shown when choosing the two-dimensional virtual representations ( 5 ') of the three-dimensional graphic objects (5).
  • Figure 2 shows a functional scheme of the system (1) of Figure 1.
  • the real camera (2) captures the user's point of view (13), that is, records a video image in real time called real image (6).
  • the tracking system (9) calculates and sends a position information (14) to the processing unit (3), informing about the current location and orientation of the display device (8).
  • the processing unit (3) then performs a graphics adaptation process (15), which converts the stored three-dimensional graphic objects (5) into the two-dimensional virtual representations (5') based on an orientation vector obtained from the position information (14).
  • the processing unit (3) adapts a view of the three-dimensional graphic objects (5) to obtain a virtual scene, controlling a "virtual camera" that uses the same orientation vector as the real camera (2); that is, the virtual camera is oriented exactly like the real camera (2). Therefore, the real scene (6) and the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) are synchronized with the real world. This synchronization makes it possible to compose the real and virtual scenes into the enlarged image (6').
  • the processing unit (3) performs an augmentation process (16) in which the real scene (6) is augmented with the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) to obtain the enlarged image (6'), which is sent back to the display device (8).
  • when the user (13) looks through the display device (8), he can see the enlarged image (6'). If the user (13) rotates the system (1) or makes any movement that changes the position of the display device (8), the tracking system (9) reports the change to the processing unit (3).
  • the graphics adaptation process (15) then updates the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) so that they coincide with the new real image (6). Therefore, when the user (13) turns the system (1) or changes its position, the enlarged image (6') changes in its entirety; that is, the real image (6) and the two-dimensional virtual representations (5') adapt to that change.
  • Figure 3 shows the operation of the invention using an example of using the system (1).
  • the graphic at the top shows an example of an enlarged image (6') viewed by the user (13).
  • This enlarged image (6 ') is composed of the real image (6) of a town (17) and some mountains (18), in which one of the buildings and one of the mountains are indicated by the corresponding two-dimensional virtual representations ( 5').
  • when the user (13) discovers an interesting object about which he wishes to obtain more information (for example, the high-rise building in the center of the screen marked with the text "objl"), he operates the interaction means (11) until the two-dimensional virtual representation (5') of that object is selected.
  • the system (1) displays the additional multimedia content (7) related to said two-dimensional virtual representation (5 ').
  • the system (1) can show a screen in which some interesting text and a video of a person explaining certain characteristics of the building are displayed, and / or offer new navigation options.
  • the system (1) can also offer personalized information, so that different contents can be displayed to different user profiles, including aspects such as multilingualism. For example, an English-speaking tourist who has a cultural profile can be a type of user profile. This user will receive more complete additional information in English about the history of the selected building than the information received by other profiles.
  • Figure 4 shows the method according to the invention whereby three-dimensional graphic objects (5) are placed in the virtual world using a spherical environment.
  • the registration information (24) that determines the placement of the three-dimensional graphic objects (5) in the enlarged image (6 ') is defined using the authoring tool.
  • the processing unit (3) receives the registration information (24) of the different three-dimensional graphic objects (5) and the position information (14) of the display device (8) in order to carry out the graphics adaptation process (15).
  • a virtual world (21) is constructed, that is, a three-dimensional model in which the three-dimensional graphic objects (5) are placed in their previously defined spatial position.
  • a virtual camera (22) is placed with respect to the three-dimensional graphic objects (5) based on the position information (14) of the real camera (2).
  • three-dimensional graphic objects (5) are placed in virtual points that simulate the distance of real objects from the user's point of view (13).
  • the graphics adaptation process (15), which converts the three-dimensional graphic objects (5) into two-dimensional virtual representations (5'), is based on the angle by which each three-dimensional graphic object (5) deviates from a zero-reference view vector, which represents the center point of the coordinate system.
  • the graphics adaptation process (15) converts the three-dimensional graphic objects (5) into their respective two-dimensional virtual representations (5 ').
  • the result is the placement and adaptation of two-dimensional virtual representations (5 ') so that they provide a feeling of depth to the user (13).
  • the augmentation process (16) combines the two-dimensional virtual representations (5') with the real image (6) recorded by the real camera (2) to obtain the enlarged image (6').
  • the user (13) can thus enjoy the landscape and the information provided by the two-dimensional virtual representations (5').
  • the process of transforming the spatial coordinates defined by the three-dimensional graphic objects (5) into a real environment in two-dimensional coordinates that define the pixels that are displayed in a display device (8) is a key element in the present invention.
  • this process is carried out by applying an innovative approach, building a "virtual world” and capturing an image of said "virtual world” in the same way that the real camera (2) records the real image (6).
  • the augmented image (6 ') superimposes the real world and the virtual world to achieve the effect of magnification, bringing both worlds into a new "augmented world.”
  • the invention allows zooming in on the enlarged image (6 ').
  • the two-dimensional virtual representations (5') are not only rescaled but also repositioned, so that they remain placed over the objects they augment. This means that, when zooming in, the invention guarantees that the views of the real camera (2) and the virtual camera (22) remain aligned at all times.
  • the user (13) activates the zoom process by interacting with the two buttons on the left side (19) of the system (1).
  • the buttons are connected to the processing unit (3), so that when the user (13) presses them, the processing unit (3) modifies the zoom of the virtual camera (22) and sends a command to the real camera (2) to change its zoom control parameters synchronously.
  • the adjustment of the control parameters of the real camera (2) and the virtual camera (22) must be done dynamically in real time, while the system (1) is operating, in order to correctly align the real image (6) and two-dimensional virtual representations (5 '). Due to the mechanical zoom system of the real camera (2), there is a delay between the adjustment of the virtual camera zoom value (22) and the mechanical update of the real camera zoom (2). This delay in the change between two zoom positions is perceived by the user (13). Another mechanical limitation of the real camera system (2) is that the zoom speed of said real camera (2) is not linear, due to the acceleration and deceleration processes at the beginning and the end.
  • the invention proposes a method for updating and adjusting the zoom values of the real (2) and virtual (22) cameras, an example of which is shown in Figure 5, so that both values are coincident at all times despite the mechanical limitations and delays of the real camera (2).
  • the initial viewing angle of both real (2) and virtual (22) cameras is set at a value of 48 °.
  • when the user (13) presses one of the buttons on the left side (19) of the system (1) to zoom in and enlarge the image, the virtual camera (22) and the real camera (2) begin to zoom in.
  • there are some delays in the real camera (2) when following the zoom process; that is, the real camera (2) zooms more slowly than the virtual camera (22). A simplified sketch of this synchronization loop is given after this list.
  • the processing unit (3) receives an adjustment command (25) from the virtual camera (22) indicating its instant zoom value.
  • the processing unit (3) sets the zoom of the real camera (2) with the zoom value of the virtual camera (22).
  • the zoom value of the virtual camera (22) continues to change. Therefore, this process is repeated until the user (13) stops the zoom (until a time t according to Figure 5 elapses).
  • the adjustment between the zooms of the real camera (2) and the virtual camera (22) is performed in a discrete iterative manner, so that the relatively slower real camera (2) follows the instantaneous zoom of the virtual camera (22).
  • the zoom of the real camera (2) can usually only adopt a set of discrete values due to its mechanical configuration.
  • the processing unit (3) receives information of the specific zoom value of the virtual camera (22), for example, 45 °
  • the processing unit (3) tries to fix the zoom of the real camera (2) at the value of 45 °.
  • the zoom of the real camera (2) reaches this specific value with an error "e" (positive or negative). Due to this error, at the end of the process the processing unit (3) must read the final zoom value of the real camera (2), which will be slightly different from the final zoom value of the virtual camera (22) because of the error "e".
  • the processing unit (3) sends a tuning command (26) to the virtual camera (22), so that the final zoom value of the virtual camera (22) is updated with the final zoom value of the real camera (2).
  • the zoom control parameter of some real cameras (2) used in practice is encoded in hexadecimal format. Therefore, when the processing unit (3) receives an adjustment command (25) from the virtual camera (22) indicating the current zoom value of the virtual camera (22), the processing unit (3) transforms this value into a hexadecimal code before sending it to the real camera (2), and vice versa.
  • the specifications of the real cameras (2) available on the market usually include a table that relates a set of view angles to the corresponding hexadecimal values of the control parameters. For example, in the preferred embodiment, the hexadecimal values of the zoom control parameter range from 0 to 4000, representing viewing angles between 4.8° (maximum zoom) and 48° (no zoom).
  • the invention proposes the use of interpolation algorithms based on the specifications table of the real camera (2). For example, for the real camera (2) of the preferred embodiment of the invention, the following interpolation equations are proposed (a numerical sketch of this conversion is also given after this list):
  • y = -0.2274x³ + 22.593x² - 949.94x + 18690, where y is the value of the zoom control parameter in hexadecimal and x is the current angle of view of the real camera (2).
  • the processing unit (3) uses this equation to calculate the corresponding hexadecimal code that is sent to the real camera (2).
  • x is the angle of view of the real camera (2) and y is the hexadecimal value of the zoom control parameter.
  • the processing unit (3) uses this equation to calculate the corresponding view angle that will be sent to the virtual camera (22) as tuning command (26).
  • the invention applies to the cultural tourism sector, which is one of the key future areas for the creation and strengthening of cultural industries.
  • the Augmented Reality techniques proposed for accessing and understanding tourist and cultural content are highly visual and interactive forms of presentation. These technological approaches therefore provide added value, allowing visitors to experience the history associated with a real environment in a personalized way.
  • the reader can imagine the view of a city from a nearby hill.
  • the invention is placed at the top of the hill to allow a "visit" to some of the tourist attractions and elements of the environment, and to receive information about each of these attractions. For example, you can choose a cathedral or an island in the middle of the bay to receive additional multimedia information.
  • the embodiments of the invention are not limited to tourist environments, but can be extended to a wide variety of recreational and cultural experiences. Similar scenarios can be described in situations such as cultural institutions, exhibition halls, nature tourism, fairs or any other scenario in which objects and resources appear that can be augmented by Computer Graphics or additional multimedia content.
  • a third example may be the augmentation of objects shown at trade fairs.
  • machine tools, for example, are usually shown at large trade fairs.
  • One of the main problems of the manufacturers is that they cannot show all the functionalities of the machines at the fair.
  • the invention can provide more information about the components and the actual operation of the machines.
  • the reader can consider the case of hikers who want to know more about landscapes, flora and fauna while walking through the countryside.
  • the invention can provide additional information at the top of the mountains, such as the name of the different peaks of the environment, the way of access or the diversity of fauna and flora.
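
The iterative zoom adjustment between the real camera (2) and the virtual camera (22) described above and illustrated in Figure 5 can be sketched as follows. This is a minimal simulation, not the patent's implementation: the camera classes, step sizes, timing values and the residual error "e" are invented stand-ins chosen only to illustrate the discrete follow-and-tune loop.

```python
import time

class VirtualCamera:
    """Software camera whose zoom (view angle, in degrees) changes instantly."""
    def __init__(self, view_angle=48.0):
        self.view_angle = view_angle

class RealCamera:
    """Stand-in for the motorized camera: its zoom moves slowly and settles with a small error."""
    def __init__(self, view_angle=48.0, max_step=0.05, error=0.2):
        self.view_angle = view_angle
        self.max_step = max_step      # degrees per iteration (mechanical speed limit)
        self.error = error            # residual error "e" when a move finishes
    def set_zoom(self, target_angle):
        # Move at most max_step degrees towards the requested angle.
        delta = target_angle - self.view_angle
        self.view_angle += max(-self.max_step, min(self.max_step, delta))
    def read_zoom(self):
        # The value actually reached differs from the request by the small error "e".
        return self.view_angle + self.error

def zoom_loop(virtual_cam, real_cam, user_zooming, zoom_rate=2.0, dt=0.05):
    """Discrete iterative adjustment: the slower real camera (2) follows the virtual camera (22)."""
    while user_zooming():
        # Adjustment command (25): the virtual camera reports its instantaneous zoom value...
        virtual_cam.view_angle = max(4.8, virtual_cam.view_angle - zoom_rate * dt)
        # ...and the processing unit (3) sets the real camera to that same value.
        real_cam.set_zoom(virtual_cam.view_angle)
        time.sleep(dt)
    # Tuning command (26): once the user stops zooming, the virtual camera adopts
    # the final zoom value read back from the real camera, error "e" included.
    virtual_cam.view_angle = real_cam.read_zoom()

# Example: simulate holding the zoom-in button for 20 iterations.
presses = iter(range(20))
vc, rc = VirtualCamera(), RealCamera()
zoom_loop(vc, rc, user_zooming=lambda: next(presses, None) is not None)
print(vc.view_angle, rc.view_angle)  # virtual camera ends on the value read from the real camera
```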
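The conversion between view angles and hexadecimal zoom control parameters can be sketched numerically with the cubic interpolation equation quoted above. The clamping range and the bisection-based inverse below are assumptions: the patent mentions a second interpolation equation for the reverse direction, but since it is not reproduced in this text the sketch simply inverts the same polynomial numerically.

```python
def angle_to_hex(x):
    """Cubic interpolation from the patent: view angle x (degrees) -> hexadecimal zoom parameter y."""
    y = -0.2274 * x**3 + 22.593 * x**2 - 949.94 * x + 18690
    return max(0x0000, min(0x4000, int(round(y))))   # clamp to the stated 0-4000 (hex) range

def hex_to_angle(code, lo=4.8, hi=48.0, iters=40):
    """Inverse conversion by bisection; angle_to_hex decreases monotonically as the angle grows."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if angle_to_hex(mid) > code:
            lo = mid          # angle too small -> hex value too large, so search larger angles
        else:
            hi = mid
    return (lo + hi) / 2.0

# 48 deg (no zoom) maps near 0x0000; 4.8 deg (maximum zoom) maps towards the upper end of the range.
print(hex(angle_to_hex(48.0)), hex(angle_to_hex(4.8)))
print(round(hex_to_angle(angle_to_hex(30.0)), 1))    # round-trip, roughly 30 degrees
```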

Abstract

Interactive display system (1) based on enhanced-reality technologies for tourism and leisure applications, which enables a user (13) to view a real image (6) that is enhanced with a certain amount of interesting information. The system (1) includes a real camera (2), a processing unit (3), a database (4), a display device (8) and a tracking system (9). The processing unit (3) converts three-dimensional graphic objects (5) stored in the database (4) into two-dimensional virtual representations (5') as a function of the position of the real camera (2) calculated by the tracking system (9). It then aligns the two-dimensional virtual representations (5') and the real image (6), generating the enhanced image (6'). The system (1) allows zooming onto the enhanced image (6'), with simultaneous and adjusted zooming being performed onto the real image (6) and the two-dimensional virtual representations (5').

Description

SYSTEM AND METHOD FOR DISPLAYING AN AUGMENTED IMAGE BY APPLYING AUGMENTED REALITY TECHNIQUES
DESCRIPTION
Technical Sector
This invention relates to a method and a system for viewing an enlarged scene, which makes it possible to augment, with interesting content, a real scene or image that lies within the user's field of view. The invention is mainly aimed at apparatus for the tourism and leisure sector, of the type of binoculars used for observing a view, although it is also conceived for other environments such as cultural institutions, fairs or nature tourism.
State of the Art
Since the annals of modern science, visual observation has preceded knowledge. Visual observation, contemplation and knowledge are closely linked in Western society. It can even be said that modern society is a society based on the visual. The sense of sight has always been the dominant sense and the one that has played a leading role in communication between people.
The concept of binoculars placed on a hill so that tourists can observe the surrounding area for a few minutes after inserting a coin is very widespread. These binoculars offer a panoramic view of the buildings and streets of an urban area and of its natural and cultural resources. Additionally, in some cases, they allow the user to zoom in on certain tourist attractions to see them more closely. This view can arouse in tourists the interest of visiting these attractions later, helping them identify new points of interest for a subsequent visit. However, it is often difficult to find anything in the field of view of the binoculars other than nearby forests or the sky itself. Even when a potentially interesting tourist attraction has been found, that interest is often lost due to the lack of images and information about that attraction. People are used to receiving information in a simple and entertaining way through different channels such as the Internet or television, using hyperlinks to obtain multimedia content that responds to their request.
Most tourists learn about the world through different processes, which include features such as representation, imagination and observation. Increasingly, visual technologies (photographs, cinema, video, television, digital images, etc.) form part of the daily experience of many people and also have a direct influence on the way in which people experience new environments. These socially generated "visions" or "interpretations" that capture the world and its heritage in visual terms not only bring new research challenges, but also generate numerous opportunities to critically examine the role and function of tourism as a vehicle in the communication of observation, being and knowledge.
Additionally, it can be said that there is a large amount of information in digital format, such as audiovisual content, electronic texts, multimedia applications or geographic information systems. Until now, this information has been used only occasionally in electronic guides, remaining inaccessible to the visitor. In addition, existing multimedia presentations are physically removed from the real environments, which means that the tourist must leave the tourist attraction in order to obtain additional information.
If tourism organizations want to reach a wider audience, they must design multimedia content interesting enough to attract tourists. Therefore, new systems that enable these innovative applications and provide added-value content are necessary.
Brief Description of the Invention
The present invention satisfies the previously mentioned needs by means of a system and a method for displaying an enlarged image with additional multimedia content. For this purpose, the invention detects the position of a display device comprised in the system, augments the user's view with graphic representations of graphic objects retrieved from a database, and allows the user to interact with the multimedia information provided.
Another aspect of the invention provides a system and a method for zooming in on the enlarged image, detecting the position of the display device and fixing the relative position of the graphic objects when zooming on the view.
Brief Description of the Figures
The details of the invention can be seen in the accompanying figures, which are not intended to limit the scope of the invention:
- Figure 1 shows a system according to an embodiment of the invention.
- Figure 2 shows the functional scheme of the invention.
- Figure 3 shows the operation of the system by means of an example of use of the invention.
- Figure 4 shows the preferred method according to the invention.
- Figure 5 shows an example of the method of the invention for zooming.
Detailed Description of the Invention
In this document, Augmented Reality (AR) technologies are defined as those technologies that augment a real scenario by adding virtual information, whereas an environment consisting primarily of virtual information augmented with some real objects is known as Augmented Virtuality (AV). In other words, AR technologies can generate an augmented environment by adding certain virtual information to an initially entirely real environment.
The invention is based on the application of Augmented Reality (AR) technologies to the traditional concept of binoculars in tourist environments; that is, the invention relates to a system that allows users to see a scene (for example, the view from a nearby hill) augmented by superimposing, using AR technologies, virtual information relating to the scene the user is observing. Augmenting the scene enhances the user's entertainment experience, since the virtual information provides additional content to the scene (for example, arrows that point at buildings and show their names). The system also allows the user to interact with the virtual information and obtain additional content (for example, access to text files, images or videos related to a specific tourist resource). The system makes it possible to customize the contents according to different user profiles.
Figure 1 shows a typical system (1) according to an embodiment of the invention. The system (1) basically comprises a real camera (2) that registers a real image (6), or real-time image, from the point of view of the user (13), and sends this real image (6) to a processing unit (3). The processing unit (3) includes a database (4) that stores three-dimensional graphic objects (5) -not represented- that will be superimposed on the real image (6) using AR methods. The AR methods convert the three-dimensional graphic objects (5) into two-dimensional virtual representations (5'), and then the two-dimensional virtual representations (5') of said three-dimensional graphic objects (5) are superimposed on the real image (6), forming the enlarged image (6'). The three-dimensional graphic objects (5) may include additional multimedia information (7) -not represented- capable of providing additional and more complex information about the three-dimensional graphic objects (5). The system (1) also includes a display device (8) through which the user (13) can observe the enlarged image (6'), that is, the real image (6) to which the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) have been added. The system (1) includes a tracking system (9) to detect the current position of the display device (8), a fundamental element in the augmentation process. As shown in Figure 1, the display device (8) and the real camera (2) are preferably mounted on the same mechanical axis so that they move together as a single unit. In this way, the tracking system (9) calculates the position of the display device (8) and, therefore, the position of the real camera (2).
Additionally, speakers (10) are included so that the user (13) can listen to presentations or other audio content included in the additional multimedia content (7). The system (1) also includes interaction means (11) that fulfill the function of user interface for the system (1). Finally, the system (1) may include a payment system (12) based on the insertion of coins or other means of payment.
The real camera (2) can be any type of camera that includes "autofocus" lenses for optical zoom, so that the user (13) can increase or decrease the zoom to observe small or distant objects more clearly. The real camera (2) must be controllable from an external control device capable of accessing and modifying the zoom and other settings or control parameters of the camera, for example through an RS-232C serial cable connection. There are many cameras of this type available commercially.
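
The text above only states that the camera must accept zoom commands over an RS-232C serial link. A minimal sketch of such a command, assuming the pyserial library and a purely hypothetical 5-byte frame format (no particular vendor protocol is implied), could look like this:

```python
import serial  # pyserial, assumed available

def send_zoom_parameter(port, hex_value):
    """Send a zoom control parameter to the real camera (2) over an RS-232C serial link.
    The frame layout used here (header, 16-bit value, terminator) is hypothetical."""
    frame = bytes([0x81, 0x01]) + hex_value.to_bytes(2, "big") + bytes([0xFF])
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(frame)
        return link.read(3)  # whatever acknowledgement the camera returns, if any

# Example call (hypothetical port name and value):
# send_zoom_parameter("/dev/ttyS0", 0x2000)
```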
The tracking system (9) is preferably based on inertial sensors. However, there is a large number of alternative positioning systems that can be used effectively in other embodiments. Since the system (1) is intended to be used primarily in two directions (right/left, up/down), the simplest tracking techniques provide sufficient accuracy. However, the invention can use any tracking system that is able to determine the position of the display device (8) robustly and send the coordinates to the processing unit (3).
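
A minimal sketch of the position information (14) that such a tracking system might deliver, assuming an inertial tracker that reports only the two rotation angles (pan and tilt) mentioned above; the sensor API used here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PositionInfo:
    """Position information (14): orientation of the display device (8) / real camera (2).
    Only two degrees of freedom are assumed, matching the right/left and up/down use above."""
    yaw_deg: float    # right/left rotation
    pitch_deg: float  # up/down rotation

def read_tracker(sensor):
    """Poll a hypothetical inertial sensor object and package its reading."""
    yaw, pitch = sensor.read_orientation()   # hypothetical sensor API
    return PositionInfo(yaw_deg=yaw, pitch_deg=pitch)
```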
The display device (8), by which the user (13) perceives the enlarged image (6') that includes the real image (6) and the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) together with the additional multimedia content (7), is preferably a metaphor for conventional binoculars. The display device (8) preferably comprises a display that is basically a non-transparent video display system, which can be non-stereoscopic, stereoscopic or autostereoscopic. The display device (8) can be one of the virtual binoculars or telescopes available on the market with stereoscopic capability. In future embodiments of the invention, semi-transparent and autostereoscopic devices may be used to display the enlarged image (6').
With regard to the interaction means (11), the preferred embodiment of the invention includes seven buttons for interacting with the system (1) in a simple and ergonomic manner. As Figure 1 shows, the buttons (11) are distributed between the left side (19) and the right side (20) of the system (1). On the left side (19) of the system (1) there are two buttons that allow the user to interact with the enlarged image (6'). The user (13) can zoom in to enlarge the image with one of the buttons and zoom out to reduce it with the other. On the right side (20) of the system (1) there are five buttons: a central button (preferably of one color) and four buttons around it (preferably of a color different from that of the central button). The central button is the "enter" button, which is used to click on the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) and also to choose between the menus of additional multimedia content (7). The buttons that surround it are used to move through the menus that allow the selection of additional multimedia content (7).
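
For illustration only, the seven-button interface described above could be mapped to actions roughly as follows; the button identifiers and the `ui` object are invented for this sketch and are not part of the patent:

```python
# Hypothetical identifiers for the seven physical buttons described above.
BUTTON_ACTIONS = {
    "left_upper":   "zoom_in",    # left side (19): enlarge the image
    "left_lower":   "zoom_out",   # left side (19): reduce the image
    "right_center": "enter",      # click a representation / confirm a menu entry
    "right_up":     "menu_up",
    "right_down":   "menu_down",
    "right_left":   "menu_left",
    "right_right":  "menu_right",
}

def handle_button(button_id, ui):
    """Dispatch a button press to a hypothetical user-interface object."""
    action = BUTTON_ACTIONS.get(button_id)
    if action in ("zoom_in", "zoom_out"):
        ui.adjust_zoom(direction=+1 if action == "zoom_in" else -1)
    elif action == "enter":
        ui.select_current_item()
    elif action is not None:
        ui.move_menu_cursor(action.removeprefix("menu_"))
```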
The system (1) preferably includes two different databases (4): an augmentation database and a self-managed content database. The augmentation database includes the three-dimensional graphic objects (5) associated with the points of interest for the user (13), including the names of the main tourist attractions and other graphic objects. The self-managed content database stores the additional multimedia content (7), including videos, movie clips, interactive 3D panoramas or even three-dimensional models of existing and non-existent tourist attractions.
The preferred embodiment of the invention also includes an authoring tool to simplify the manipulation of the three-dimensional graphic objects (5), their position in the enlarged image (6') and the additional multimedia content (7) that they can display. The program can work with XML files, modifying them or creating new ones suitable for the main program. The preferred embodiment of the invention includes up to ten points of interest with a maximum of five sub-objects to choose from within the menu. These three-dimensional graphic objects (5) include the points of tourist interest and their corresponding markers in the enlarged image (6'), while the sub-objects represent different types of additional multimedia content (7) that are shown when the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) are chosen.
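
The patent only says that the authoring tool works with XML files describing up to ten points of interest, each with up to five sub-objects of additional multimedia content (7). A possible way of reading such a file is sketched below; the element and attribute names are invented for illustration and are not the actual file format:

```python
import xml.etree.ElementTree as ET

# Hypothetical authoring file: element and attribute names are invented for illustration.
POI_XML = """
<augmentation>
  <poi id="obj1" name="Cathedral" yaw="12.5" pitch="1.3">
    <subobject type="video" src="cathedral_history.mpg"/>
    <subobject type="text"  src="cathedral_info.txt"/>
  </poi>
  <poi id="obj2" name="Mountain" yaw="-30.0" pitch="4.0"/>
</augmentation>
"""

def load_points_of_interest(xml_text):
    """Parse the authoring XML into a list of points of interest with their sub-objects."""
    root = ET.fromstring(xml_text)
    pois = []
    for poi in root.findall("poi"):
        pois.append({
            "id": poi.get("id"),
            "name": poi.get("name"),
            "yaw": float(poi.get("yaw")),
            "pitch": float(poi.get("pitch")),
            "subobjects": [(s.get("type"), s.get("src")) for s in poi.findall("subobject")],
        })
    return pois

print(load_points_of_interest(POI_XML)[0]["name"])   # -> Cathedral
```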
Figure 2 shows a functional scheme of the system (1) of Figure 1. As explained, the real camera (2) captures the point of view of the user (13), that is, it records a real-time video image called the real image (6). The tracking system (9) calculates and sends position information (14) to the processing unit (3), reporting the current location and orientation of the display device (8). The processing unit (3) then performs a graphics adaptation process (15), which converts the stored three-dimensional graphic objects (5) into the two-dimensional virtual representations (5') based on an orientation vector obtained from the position information (14). That is, the processing unit (3) adapts a view of the three-dimensional graphic objects (5) to obtain a virtual scene, controlling a "virtual camera" that uses the same orientation vector as the real camera (2); in other words, the virtual camera is oriented exactly like the real camera (2). Therefore, the real scene (6) and the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) are synchronized with the real world. This synchronization makes it possible to compose the real and virtual scenes into the enlarged image (6').
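
A minimal sketch of the graphics adaptation process (15) under simplifying assumptions: the virtual camera is given the same pan/tilt orientation and view angle as the real camera (2), and each point of interest (reusing the PositionInfo and point-of-interest structures from the earlier sketches) is projected to screen coordinates with a simple pinhole model. The projection math below is one plausible implementation, not the algorithm specified by the patent.

```python
import math

def adapt_graphics(pois, position, view_angle_deg, screen_w=800, screen_h=600):
    """Graphics adaptation process (15), sketched with a pinhole model: convert each 3-D point
    of interest into a 2-D screen position (its virtual representation), using the same
    orientation and view angle as the real camera (2)."""
    f = (screen_w / 2) / math.tan(math.radians(view_angle_deg) / 2)  # focal length in pixels
    representations = []
    for poi in pois:
        # Angular offset of the object relative to where the camera is currently pointing.
        dyaw = math.radians(poi["yaw"] - position.yaw_deg)
        dpitch = math.radians(poi["pitch"] - position.pitch_deg)
        if abs(dyaw) >= math.pi / 2 or abs(dpitch) >= math.pi / 2:
            continue                                   # behind or far outside the view
        x = screen_w / 2 + f * math.tan(dyaw)
        y = screen_h / 2 - f * math.tan(dpitch)
        if 0 <= x < screen_w and 0 <= y < screen_h:
            representations.append({"id": poi["id"], "name": poi["name"], "x": x, "y": y})
    return representations
```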
Next, the processing unit (3) performs an augmentation process (16) in which the real scene (6) is augmented with the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) to obtain the augmented image (6'), which is sent back to the display device (8). When the user (13) looks through the display device (8), he or she sees the augmented image (6'). If the user (13) turns the system (1) or makes any movement that changes the position of the display device (8), the tracking system (9) reports the change to the processing unit (3). The graphics adaptation process (15) then updates the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5) so that they match the new real image (6). Consequently, when the user (13) turns the system (1) or changes position, the augmented image (6') changes as a whole, that is, both the real image (6) and the two-dimensional virtual representations (5') adapt to that change.
Figure 3 shows the operation of the invention by way of a usage example of the system (1). The picture at the top shows an example of an augmented image (6') as seen by the user (13). This augmented image (6') is composed of the real image (6) of a town (17) and some mountains (18), in which one of the buildings and one of the mountains are highlighted by their corresponding two-dimensional virtual representations (5'). When the user (13) spots an interesting object about which he or she wishes to obtain more information - for example, the tall building in the center of the screen labeled "objl" - the user operates the interaction means (11) until the two-dimensional virtual representation (5') of that object is selected. The system (1) then displays the additional multimedia content (7) associated with that two-dimensional virtual representation (5'). For example, as shown in Figure 3, the system (1) can display a screen with some relevant text and a video of a person explaining certain features of the building, and/or offer new navigation options. The system (1) can also offer personalized information, so that different content is shown to different user profiles, covering aspects such as multilingualism. For example, an English-speaking tourist with a cultural profile could be one type of user profile; this user would receive more complete additional information in English about the history of the selected building than the information received by other profiles.
Figure 4 shows the method according to the invention, whereby the three-dimensional graphic objects (5) are placed in the virtual world using a spherical environment. The registration information (24), which determines the placement of the three-dimensional graphic objects (5) in the augmented image (6'), is defined using the authoring tool. The processing unit (3) receives the registration information (24) of the different three-dimensional graphic objects (5) and the position information (14) of the display device (8) in order to carry out the graphics adaptation process (15). A virtual world (21) is built, that is, a three-dimensional model in which the three-dimensional graphic objects (5) are placed at their previously defined spatial positions. Within this virtual world (21), a virtual camera (22) is positioned with respect to the three-dimensional graphic objects (5) according to the position information (14) of the real camera (2). In other words, the three-dimensional graphic objects (5) are placed at virtual points that simulate the distance of the real objects from the user's (13) point of view.
The graphics adaptation process (15), which turns the three-dimensional virtual objects (5) into two-dimensional virtual representations (5'), is based on the angle by which the three-dimensional graphic objects (5) deviate from a Zero-View-Vector reference, which represents the central point of a coordinate system. The graphics adaptation process (15) then converts the three-dimensional graphic objects (5) into their respective two-dimensional virtual representations (5'). The result is the placement and scaling of the two-dimensional virtual representations (5') in such a way that they convey a sense of depth to the user (13).
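A minimal sketch of this angular placement, assuming a simple spherical model: each point of interest is described by an azimuth and elevation, and its offset from the current view direction (the role played by the Zero-View-Vector) is mapped linearly onto the screen. The names and the linear mapping are assumptions made for the example.

```python
# Assumed spherical placement: angular offset from the view direction -> pixel.
def spherical_to_screen(azimuth_deg, elevation_deg, view_az_deg, view_el_deg,
                        fov_h_deg, fov_v_deg, width_px, height_px):
    """Return the pixel for a point of interest, or None if it is out of view."""
    d_az = azimuth_deg - view_az_deg            # horizontal offset from view direction
    d_el = elevation_deg - view_el_deg          # vertical offset from view direction
    if abs(d_az) > fov_h_deg / 2 or abs(d_el) > fov_v_deg / 2:
        return None
    x = width_px / 2 + (d_az / fov_h_deg) * width_px
    y = height_px / 2 - (d_el / fov_v_deg) * height_px
    return round(x), round(y)
```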
Next, the augmentation process (16) combines the two-dimensional virtual representations (5') with the real image (6) recorded by the real camera (2) to obtain the augmented image (6'). By looking at this augmented image (6'), the user (13) can enjoy the landscape together with the information provided by the two-dimensional virtual representations (5'). This information is both useful (since it contains the names of the tourist attractions and any other relevant information) and clear (since it is presented in perspective, mimicking the objects in the environment).
The process of transforming the spatial coordinates defined by the three-dimensional graphic objects (5) in the real environment into the two-dimensional coordinates that define the pixels shown on the display device (8) is a key element of the present invention. As explained above, this process follows an innovative approach: a "virtual world" is built, and an image of that "virtual world" is captured in the same way that the real camera (2) records the real image (6). The augmented image (6') then superimposes the real world and the virtual world to achieve the augmentation effect, merging both worlds into a new "augmented world".
In addition, and as shown in Figure 4, the invention allows zooming in on the augmented image (6'). When zooming, the two-dimensional virtual representations (5') are not only rescaled but also relocated, so that they remain positioned over the objects they augment. This means that, while zooming, the invention guarantees that the viewpoints of the real camera (2) and the virtual camera (22) stay matched at all times.
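Continuing the assumed spherical model sketched above, the fragment below illustrates why zooming relocates as well as rescales the markers: narrowing the field of view makes the same angular offset correspond to a larger pixel offset. This is only an illustration of the principle, reusing the hypothetical spherical_to_screen() helper, not the patent's code.

```python
# Zooming with the assumed spherical model: a narrower field of view both
# moves and enlarges the 2D representation of a point of interest.
def zoomed_marker(azimuth_deg, elevation_deg, view_az_deg, view_el_deg,
                  base_fov_h_deg, base_fov_v_deg, zoom_factor,
                  width_px, height_px):
    fov_h = base_fov_h_deg / zoom_factor        # e.g. 48 deg / 10 = 4.8 deg
    fov_v = base_fov_v_deg / zoom_factor
    pos = spherical_to_screen(azimuth_deg, elevation_deg, view_az_deg,
                              view_el_deg, fov_h, fov_v, width_px, height_px)
    scale = zoom_factor                         # marker grows with the zoom
    return pos, scale
```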
The user (13) activates the zoom process by interacting with the two buttons located on the left-hand side (19) of the system (1). The buttons are connected to the processing unit (3), so that when the user (13) presses them, the processing unit (3) modifies the zoom of the virtual camera (22) and sends a command to the real camera (2) so that it changes its zoom control parameters in a synchronized manner.
The control parameters of the real camera (2) and the virtual camera (22) must be adjusted dynamically, in real time, while the system (1) is running, in order to keep the real image (6) and the two-dimensional virtual representations (5') correctly aligned. Because of the mechanical zoom system of the real camera (2), there is a delay between setting the zoom value of the virtual camera (22) and the mechanical update of the zoom of the real camera (2). This delay when moving between two zoom positions is perceptible to the user (13). Another mechanical limitation of the real camera (2) is that its zoom speed is not linear, owing to the acceleration and deceleration phases at the beginning and end of the movement.
In theory, a continuous connection and data exchange between the virtual camera (22) and the real camera (2) would be needed for the parameters of both cameras to correspond at every instant. In practice, however, a continuous data exchange cannot be achieved because of the mechanical limitations of the real camera (2). Moreover, there is a time delay between the moment the processing unit (3) sends a command to query the state of the real camera (2) and the moment the real camera (2) returns its state to the processing unit (3). This delay is increased by the low update rate of the zoom parameter values of the real camera (2).
The invention proposes a method for updating and adjusting the zoom values of the real (2) and virtual (22) cameras, an example of which is shown in Figure 5, so that both values coincide at all times despite the mechanical limitations and delays of the real camera (2). In Figure 5, the initial viewing angle of both the real (2) and virtual (22) cameras is set to 48°. According to the preferred embodiment of the invention, when the user (13) interacts with one of the buttons on the left-hand side (19) of the system (1) to zoom in on the image, the virtual camera (22) and the real camera (2) both start zooming. As mentioned above, the real camera (2) lags behind during the zoom process, that is, the real camera (2) zooms more slowly than the virtual camera (22). To synchronize the zoom value of the real camera (2) with the instantaneous zoom value of the virtual camera (22), the processing unit (3) receives an adjustment command (25) from the virtual camera (22) indicating its instantaneous zoom value. The processing unit (3) then sets the zoom of the real camera (2) to the zoom value of the virtual camera (22). However, if the user (13) keeps zooming, the zoom value of the virtual camera (22) continues to change. This process is therefore repeated until the user (13) stops zooming (until a time t has elapsed, as shown in Figure 5). In other words, the adjustment between the zooms of the real camera (2) and the virtual camera (22) is performed in a discrete, iterative manner, so that the comparatively slower real camera (2) follows the instantaneous zoom of the virtual camera (22).
It should be noted that, because of its mechanical configuration, the zoom of the real camera (2) can usually only take a set of discrete values. As shown in Figure 5, when the processing unit (3) receives the specific zoom value of the virtual camera (22), for example 45°, the processing unit (3) tries to set the zoom of the real camera (2) to 45°. However, the zoom of the real camera (2) reaches this specific value with an error "e" (positive or negative). Because of this error, at the end of the process the processing unit (3) must read the final zoom value of the real camera (2), which will be slightly different from the final zoom value of the virtual camera (22) owing to the error "e". The processing unit (3) then sends a tuning command (26) to the virtual camera (22), so that the final value of the virtual camera (22) is updated with the final zoom value of the real camera (2).
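The following sketch summarizes, with hypothetical camera API calls, the discrete iterative adjustment and the final fine-tuning described above: while the user keeps zooming, the real camera is repeatedly commanded to the virtual camera's instantaneous value (adjustment command 25); when the user stops, the real camera's actual final value is read back and used to update the virtual camera (tuning command 26). All object and method names are assumptions.

```python
# Hypothetical zoom synchronization loop (camera API names are assumptions).
import time

def synchronize_zoom(virtual_cam, real_cam, user_button, poll_s=0.1):
    # Discrete iterative following while the user keeps the zoom button pressed.
    while user_button.is_pressed():
        target_angle = virtual_cam.view_angle_deg   # instantaneous virtual zoom
        real_cam.set_view_angle(target_angle)       # slower mechanical zoom follows
        time.sleep(poll_s)                          # discrete adjustment interval
    # Final fine-tuning: the real camera only reaches the requested angle within
    # some error e, so read its actual value and update the virtual camera.
    final_angle = real_cam.get_view_angle()
    virtual_cam.set_view_angle(final_angle)
    return final_angle
```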
The zoom control parameter of some real cameras (2) used in practice is encoded in hexadecimal format. Therefore, when the processing unit (3) receives an adjustment command (25) from the virtual camera (22) indicating the current zoom value of the virtual camera (22), the processing unit (3) converts this value into a hexadecimal code before sending it to the real camera (2), and vice versa. The specifications of the real cameras (2) available on the market usually include a table relating a set of viewing angles to the corresponding hexadecimal values of the control parameters. For example, in the preferred embodiment, the hexadecimal values of the zoom control parameters range from 0 to 4000, representing viewing angles between 4.8° (maximum zoom) and 48° (no zoom). This means that the viewing angles are known for these specific hexadecimal values. However, most of the hexadecimal values and their corresponding angles are not included in the table. In order to compute the hexadecimal code for any viewing angle, and vice versa, the invention proposes the use of interpolation algorithms based on the specification table of the real camera (2). For example, for the real camera (2) of the preferred embodiment of the invention, the following interpolation equations are proposed:
y = -0.2274x^3 + 22.593x^2 - 949.94x + 18690, where y is the value of the zoom control parameter in hexadecimal and x is the current viewing angle of the real camera (2). When it receives an adjustment command (25) containing the viewing angle of the virtual camera (22), the processing unit (3) uses this equation to compute the corresponding hexadecimal code, which is sent to the real camera (2).
x = -2.87*10^-12 y^3 + 1.99*10^-7 y^2 - 0.00525077y + 48, where x is the viewing angle of the real camera (2) and y is the hexadecimal value of the zoom control parameter. After reading the value of the zoom control parameter of the real camera (2) in hexadecimal format, the processing unit (3) uses this equation to compute the corresponding viewing angle, which is sent to the virtual camera (22) as a tuning command (26).
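A small sketch of these two interpolation equations written as conversion helpers; the coefficients are taken from the text above, while the function names and the rounding to an integer control code are assumptions made for the example.

```python
# Interpolation between viewing angle and the camera's zoom control parameter
# (coefficients from the text; helper names are assumptions).
def angle_to_zoom_code(x_deg):
    """Viewing angle in degrees -> zoom control parameter (integer, sent as hex)."""
    y = -0.2274 * x_deg**3 + 22.593 * x_deg**2 - 949.94 * x_deg + 18690.0
    return max(0, round(y))

def zoom_code_to_angle(y_code):
    """Zoom control parameter read from the camera -> viewing angle in degrees."""
    return (-2.87e-12 * y_code**3 + 1.99e-7 * y_code**2
            - 0.00525077 * y_code + 48.0)

code = angle_to_zoom_code(10.0)                 # a fairly deep zoom
print(hex(code), zoom_code_to_angle(code))      # back-converts to roughly 10 degrees
```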
As explained above, the invention is applied to the cultural tourism sector, which is one of the key future areas for the creation and consolidation of the cultural industries. The Augmented Reality techniques proposed for accessing and understanding tourist and cultural content are highly visual and interactive forms of presentation. These technological approaches therefore provide added value, allowing visitors to experience the history associated with a real environment in a personalized way.
As a first example of application of the invention, the reader may imagine the view of a city from a nearby hill. The invention is installed at the top of the hill to allow a "visit" to some of the tourist attractions and elements of the surroundings, and to receive information about each of these attractions. For example, a cathedral or an island in the middle of the bay can be selected in order to receive additional multimedia information.
The embodiments of the invention are not, however, limited to tourist environments; they can be extended to a wide variety of leisure and cultural experiences. Similar scenarios can be described for settings such as cultural institutions, exhibition halls, nature tourism, trade fairs or any other scenario featuring objects and resources that can be augmented with Computer Graphics or additional multimedia content.
As a second example, the invention can be applied to cultural and artistic works, monuments and unique pieces. The use of current restoration techniques such as infrared imaging is becoming widespread among cultural institutions, and information hidden behind the first layer of paintings has thus been uncovered. A museum visitor can use the invention in front of a painting to obtain additional, otherwise invisible information and to interact with the different parts of the canvas. Museums and cultural institutions can set a price for the content displayed using the invention.
A third example is the augmentation of objects displayed at trade fairs. Machine tools are usually exhibited at large fairs, and one of the main problems faced by manufacturers is that they cannot demonstrate all the functionalities of the machines on site. The invention can provide more information about the components and the actual operation of the machines. As a final example of the invention, the reader may consider hikers who want to learn more about the landscape, flora and fauna while walking in the countryside. The invention can provide additional information at the top of the mountains, such as the names of the surrounding peaks, the access routes or the diversity of fauna and flora.
Although this invention has been presented and described in relation to its preferred embodiments, it is evident that changes and modifications beyond those previously mentioned may be made to the basic features of the invention. In addition, there is a great variety of software and hardware that can be used to implement the invention, and the invention is not limited to the examples described above. Therefore, the applicants claim protection for all variations and modifications within the scope of the present invention. The invention is defined by the following claims, including all equivalents:

Claims

CLAIMS
1. System (1) for displaying an augmented image (6') comprising a real image (6), or image within the field of view of a user (13), augmented with certain content of interest, characterized in that it comprises:
- a real camera (2) for recording the real image (6) from the point of view of the user (13),
- a database (4) with a set of three-dimensional graphic objects (5) that provide graphic information capable of augmenting the real image (6),
- a tracking system (9) that computes and records the position of the display device (8),
- a processing unit (3), which converts the three-dimensional graphic objects (5) into two-dimensional virtual representations (5') according to the position of the display device (8) computed by the tracking system (9), and which builds the augmented image (6') by composing the two-dimensional virtual representations (5') of the graphic objects (5) with the real image (6),
- a display device (8) for displaying the augmented image (6'),
- interaction means (11) that provide a user interface.
2. System (1) according to claim 1, characterized in that the real camera (2) comprises a zoom lens, in that the interaction means (11) provide a user interface for varying the zoom of the augmented image (6'), and in that, when the user (13) operates the interaction means (11), the processing unit (3) adjusts the zoom of the real camera (2) and correspondingly re-converts the three-dimensional graphic objects (5) into two-dimensional virtual representations (5').
3. System (1) according to claim 1, characterized in that the system (1) includes at least one loudspeaker (10), and in that the database (4) includes a database for the augmentation with the three-dimensional graphic objects (5) and a self-manageable content database for the additional multimedia information (7), which can be displayed on the display device (8) and heard through the loudspeaker (10).
4. System (1) according to claim 1, characterized in that the tracking system (9) comprises an inertial sensor.
5. System (1) according to claim 1, characterized in that the display device (8) is a device that allows images to be viewed close to the eyes of the user (13), in a manner similar to conventional binoculars.
6. System (1) according to claim 1, characterized in that it comprises a payment system (12).
7. System (1) according to claim 1, characterized in that the system (1) is an apparatus oriented towards the tourism and leisure sectors.
8. Method for generating an augmented image (6') comprising a real image (6), or image from the point of view of a user (13), augmented with certain content of interest, characterized in that it comprises the following steps:
- providing a processing unit (3) with a database (4) that includes a set of three-dimensional graphic objects (5) providing content of interest capable of augmenting the real image (6),
- recording the real image (6) from the point of view of the user (13) using a real camera (2),
- tracking the position of a display device (8) by means of a tracking system (9),
- transforming the three-dimensional graphic objects (5) into two-dimensional virtual representations (5') according to the position of the display device (8) computed by the tracking system (9), and generating the augmented image (6') by aligning the two-dimensional virtual representations (5') and the real image (6),
- displaying the augmented image (6') by means of the display device (8),
- adapting the augmented image (6') according to changes in the position of the display device (8) or according to user commands given through the interaction means (11), allowing the user (13) to interact with areas of interest of the real image (6).
9. Method according to claim 8, characterized in that it includes the additional step of displaying, on the display device (8), additional multimedia information (7) stored in the database (4).
10. Method according to claim 8, characterized in that it further comprises the following steps:
- the user (13) interacts with the interaction means (11) to change the zoom level of the augmented image (6'),
- the processing unit (3) changes the zoom value of the real camera (2) and correspondingly changes the two-dimensional virtual representations (5') of the three-dimensional graphic objects (5),
- the processing unit (3) recomposes the augmented image (6') according to the new zoom position.
11. Method according to claim 10, characterized in that, when the user (13) operates the interaction means (11) for a time t, the process of varying the zoom of the real camera (2) and re-adapting the three-dimensional graphic objects (5) into their corresponding two-dimensional virtual representations (5') is carried out a certain number of times, at discrete intervals, during the time t.
12. Method according to claim 10, characterized in that it comprises the step of reading the final zoom value of the real camera (2) and setting the zoom value of the virtual camera (22) to said final zoom value of the real camera (2).
13. Method according to claim 10, characterized in that the processing unit (3) reads the zoom value of the real camera (2) as a hexadecimal value y, and re-converts the three-dimensional graphic objects (5) into their corresponding two-dimensional virtual representations (5') according to a viewing angle x given by x = -2.87*10^-12 y^3 + 1.99*10^-7 y^2 - 0.00525077y + 48.
14. Method according to claim 10, characterized in that the processing unit (3) re-converts the three-dimensional graphic objects (5) into their corresponding two-dimensional virtual representations (5') according to a viewing angle x, and then changes the real camera (2) according to a zoom value encoded as a hexadecimal value y given by y = -0.2274x^3 + 22.593x^2 - 949.94x + 18690.
PCT/ES2007/000645 2006-11-16 2007-11-13 System and method for displaying an enhanced image by applying enhanced-reality techniques WO2008059086A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ES2006002922 2006-11-16
ES200602922A ES2300204B1 (en) 2006-11-16 2006-11-16 SYSTEM AND METHOD FOR THE DISPLAY OF AN INCREASED IMAGE APPLYING INCREASED REALITY TECHNIQUES.

Publications (1)

Publication Number Publication Date
WO2008059086A1 true WO2008059086A1 (en) 2008-05-22

Family

ID=39015743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2007/000645 WO2008059086A1 (en) 2006-11-16 2007-11-13 System and method for displaying an enhanced image by applying enhanced-reality techniques

Country Status (2)

Country Link
ES (1) ES2300204B1 (en)
WO (1) WO2008059086A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3104290B1 (en) 2019-12-05 2022-01-07 Airbus Defence & Space Sas SIMULATION BINOCULARS, AND SIMULATION SYSTEM AND METHODS

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064398A (en) * 1993-09-10 2000-05-16 Geovector Corporation Electro-optic vision systems
US7050078B2 (en) * 2002-12-19 2006-05-23 Accenture Global Services Gmbh Arbitrary object tracking augmented reality applications
JP2008510566A (en) * 2004-08-23 2008-04-10 ゲームキャスター インコーポレイテッド Apparatus, method, and system for viewing and operating virtual environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020163521A1 (en) * 1993-09-10 2002-11-07 John Ellenby Electro-optic vision systems
US20020082498A1 (en) * 2000-10-05 2002-06-27 Siemens Corporate Research, Inc. Intra-operative image-guided neurosurgery with augmented reality visualization
EP1435737A1 (en) * 2002-12-30 2004-07-07 Abb Research Ltd. An augmented reality system and method
DE102004044718A1 (en) * 2004-09-10 2006-03-16 Volkswagen Ag Augmented reality help instruction generating system for e.g. aircraft, has control unit producing help instruction signal, representing help instruction in virtual space of three-dimensional object model, as function of interaction signal
DE102004046144A1 (en) * 2004-09-23 2006-03-30 Volkswagen Ag Augmented reality system used for planning a production plant layout has camera system to enter units into a central controller where they are combined and then stored in data base

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2998680A1 (en) * 2012-11-26 2014-05-30 Laurent Desombre Method for navigation associated with interactive virtual reality periscope in real environment, involves viewing selected content through periscope when selected content falls within visual field of virtual topography during navigation
EP3055833A4 (en) * 2013-10-10 2017-06-14 Selverston, Aaron Outdoor, interactive 3d viewing apparatus
US20180250589A1 (en) * 2017-03-06 2018-09-06 Universal City Studios Llc Mixed reality viewer system and method
WO2018165041A1 (en) * 2017-03-06 2018-09-13 Universal City Studios Llc Mixed reality viewer system and method
US10289194B2 (en) 2017-03-06 2019-05-14 Universal City Studios Llc Gameplay ride vehicle systems and methods
CN110382066A (en) * 2017-03-06 2019-10-25 环球城市电影有限责任公司 Mixed reality observer system and method
KR20190124766A (en) * 2017-03-06 2019-11-05 유니버셜 시티 스튜디오스 엘엘씨 Mixed Reality Viewer System and Methods
US10528123B2 (en) 2017-03-06 2020-01-07 Universal City Studios Llc Augmented ride system and method
US10572000B2 (en) 2017-03-06 2020-02-25 Universal City Studios Llc Mixed reality viewer system and method
KR102145140B1 (en) 2017-03-06 2020-08-18 유니버셜 시티 스튜디오스 엘엘씨 Mixed reality viewer system and method
CN110382066B (en) * 2017-03-06 2023-10-13 环球城市电影有限责任公司 Mixed reality observer system and method

Also Published As

Publication number Publication date
ES2300204A1 (en) 2008-06-01
ES2300204B1 (en) 2009-05-01

Similar Documents

Publication Publication Date Title
Schmalstieg et al. Augmented reality: principles and practice
US11854149B2 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
US11024088B2 (en) Augmented and virtual reality
US10225545B2 (en) Automated 3D photo booth
ES2688643T3 (en) Apparatus and augmented reality method
CN109584295A (en) The method, apparatus and system of automatic marking are carried out to target object in image
US20080246759A1 (en) Automatic Scene Modeling for the 3D Camera and 3D Video
JP2008520052A5 (en)
US20060114251A1 (en) Methods for simulating movement of a computer user through a remote environment
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
CA2669409A1 (en) Method for scripting inter-scene transitions
JP2003264740A (en) Observation scope
US20130249792A1 (en) System and method for presenting images
US20210312887A1 (en) Systems, methods, and media for displaying interactive augmented reality presentations
Hoberman et al. Immersive training games for smartphone-based head mounted displays
US11532138B2 (en) Augmented reality (AR) imprinting methods and systems
ES2300204B1 (en) SYSTEM AND METHOD FOR THE DISPLAY OF AN INCREASED IMAGE APPLYING INCREASED REALITY TECHNIQUES.
US20030090487A1 (en) System and method for providing a virtual tour
Woletz Interfaces of immersive media
Cohen et al. A multiuser multiperspective stereographic QTVR browser complemented by java3D visualizer and emulator
Kim Lim et al. A low-cost method for generating panoramic views for a mobile virtual heritage application
DeHart Directing audience attention: cinematic composition in 360 natural history films
Tatzgern et al. Embedded virtual views for augmented reality navigation
Bolhassan et al. VR_4U_2C: A Multiuser Multiperspective Panorama and Turnorama Browser Using QuickTime VR and Java Featuring Multimonitor and Stereographic Display
Beckwith et al. Parallax: Dancing the Digital Space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07823050

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07823050

Country of ref document: EP

Kind code of ref document: A1