CN104969264A - Method and apparatus for adding annotations to a plenoptic light field - Google Patents

Method and apparatus for adding annotations to a plenoptic light field

Info

Publication number
CN104969264A
CN104969264A (application CN201280077894.3A)
Authority
CN
China
Prior art keywords
annotation
view
light
data
annotate
Prior art date
Legal status
Pending
Application number
CN201280077894.3A
Other languages
Chinese (zh)
Inventor
M·蒙尼
L·莱姆
S·艾尔
Current Assignee
Vidinoti SA
Original Assignee
Vidinoti SA
Application filed by Vidinoti SA
Publication of CN104969264A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/21 Indexing scheme for image data processing or generation, in general involving computational photography

Abstract

A method comprising the steps of: retrieving data (100) representing a light field with a plenoptic capture device (4); executing program code for matching the retrieved data with corresponding reference data (101); executing program code for retrieving at least one annotation (61, 63, 64) in a plenoptic format associated with an element of said reference data (102); executing program code for generating annotated data in a plenoptic format from said retrieved data and said annotation (103).

Description

Method and apparatus for adding annotations to a plenoptic light field
Background
The present invention relates to augmented reality methods and apparatus, and in particular to methods and devices for adding annotations to data corresponding to a scene.
The rapid development of handheld portable devices such as smartphones, palmtop computers, portable media players, personal digital assistant (PDA) devices, etc. has led to novel features and applications involving image processing. In augmented reality applications, which are known, a user points a portable device at a scene, for example a landscape, a building, a poster, or a painting in a museum, and the display shows the image together with superimposed information about the scene. Such information can include, for example, names of mountains and places, names of people, historical information for buildings, and commercial information such as advertising or a restaurant menu. Examples of such systems are described in EP1246080 and EP2207113.
It is known to supply annotating information to portable devices from a server over a wireless communication network. Annotation systems and annotation methods comprising a communication network with a server and portable devices are also known.
Many annotation methods comprise the step of comparing an image, such as a 2D image produced by a standard pinhole camera with a standard CCD or CMOS sensor, or a computer-generated image, with a set of reference images stored in a database. Since the actual viewing angle and the lighting conditions can differ from those of the images stored in the database, the aim of the comparison algorithms is to remove the influence of these parameters.
For example, WO2008134901 describes a method in which a first image is taken with a digital camera associated with a communication terminal. Query data related to the first image are transmitted via a communication network to a remote identification server, where a matching reference image is identified. An enhanced image is generated and displayed at the communication terminal by replacing at least part of the first image with an annotated image part. The enhancement of the first image taken with the camera occurs in planar space, and only two-dimensional images and objects are processed.
Light information, such as the direction of the light rays at each point in space, is discarded in traditional image annotation systems. The absence of light information makes a realistic view of the annotated scene more difficult. For example, capturing or displaying texture on the surface of an object requires light information. Although each object has a different texture on its surface, adding texture information is impossible in current annotation systems. As a result, the attached annotations are not realistically integrated into the scene.
Moreover, the rapid growth of augmented reality applications may lead to a proliferation of annotations in the future. Some scenes, for example in a city, contain many elements associated with different annotations, resulting in annotated images in which a very large number of annotations cover most of the background image. In many cases, the user is only interested in a limited number of those annotations, and the other annotations are merely distracting. It would therefore often be desirable to limit the number of annotations and to provide a way of selecting the annotations to be displayed.
In addition, computational cost is a major issue when viewing annotated scenes. There is a need to reduce this computational cost.
It is therefore an aim of the present invention to solve, or at least mitigate, the above-mentioned problems of existing augmented reality systems.
Summary of the invention
According to the invention, these aims are achieved by a method comprising the steps of:
retrieving data representing a light field with a plenoptic capture device;
executing program code for matching the retrieved data with corresponding reference data;
executing program code for retrieving an annotation in a plenoptic format associated with an element of said reference data;
executing program code for generating annotated data in a plenoptic format from said retrieved data and said annotation.
The invention is also achieved by an apparatus for capturing and annotating data corresponding to a scene, said apparatus comprising:
a plenoptic capture device for capturing data representing a light field;
a processor;
a display;
program code which, when executed, causes said processor to retrieve at least one annotation in a plenoptic format associated with an element of the data captured with said plenoptic capture device, and to render on the display a view generated from the captured data and including said at least one annotation.
The invention also provides an apparatus for determining annotations, said apparatus comprising:
a processor;
a store;
program code which, when executed, causes said processor to receive data representing a light field, to match said data with reference data, to determine from said store an annotation in a plenoptic format associated with said reference data, and to send said annotation in a plenoptic format, or data corresponding to an image annotated with the annotation in a plenoptic format, to a remote device.
The claimed addition of annotations in a plenoptic format allows a more realistic integration of the annotations into the image; an annotation appears to be an element of the captured scene, rather than merely text superimposed on an image. An annotation in a plenoptic format (also referred to in this application as a "plenoptic annotation") contains a more complete description of the light field than a traditional annotation, including information on how the light is modified.
Depending on the focal distance and/or the viewpoint selected by the user during rendering of the image, or selected automatically, for example based on his interests, annotations in a plenoptic format also make it possible to select which annotations are displayed.
Since the annotations are in the same space as the captured data, namely the plenoptic space, the computational cost of the annotation process is reduced.
In particular, the computational cost of rendering the plenoptic data in a form intelligible to humans is reduced. Indeed, since the image in a plenoptic format and the plenoptic annotations are in the same space, the same rendering process applies to both. In one embodiment, a single rendering process can be used to render the image and the associated annotations. In this case, the projection parameters selected for the plenoptic rendering process (such as a change of viewpoint, a selection of depth or focus, etc.) are also applied to the plenoptic annotations. For example, when the focus or the viewpoint of the plenoptic image is changed, the same transformation can be used to display plenoptic annotations at various distances. In another embodiment, the effect of the annotation is applied to the captured plenoptic image, and the modified plenoptic image is rendered.
Plenoptic annotations, i.e. annotations in a plenoptic format, therefore provide a realistic way of displaying annotations, allow more types of annotations, including textured annotations, and improve computational efficiency.
Unlike traditional annotations, a plenoptic annotation can contain as much information about the light as the image captured by the plenoptic capture device. It is thus possible to synthesize the annotation directly in the captured light field, without losing the light information that a projection onto a 2D image would discard. For example, an annotation can preserve the features of the light reflections on the surface of the annotated object, which is impossible with traditional annotation systems. In this sense, the annotated view appears more realistic.
Directly modifying the light rays can also ease computations, for example when generating the annotated scene from multiple viewpoints simultaneously. In this example of annotated scene generation, the annotation of the scene and other additional processing, such as blurring or sharpening, are applied directly and only once in the plenoptic format, instead of attaching annotations and applying the additional processing to the 2D image generated for each viewpoint. Synthesizing the plenoptic image and the plenoptic annotations directly in the plenoptic format can therefore reduce the computational cost.
The present invention also relates to a method for attaching an annotation to a reference image in a plenoptic format, said method comprising:
presenting said reference image in a plenoptic format on a viewer;
selecting an annotation;
selecting with said viewer a position for said annotation and one or more directions from which said annotation can be seen;
associating in a memory said position and said directions with said reference image in a plenoptic format and with said annotation.
This method can be carried out with a suitable authoring system, for example a suitable software application or a website.
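As a concrete illustration of the association created by this method, the following is a minimal sketch of the data structures an authoring system might keep in memory. All names (AnnotationPlacement, visible_directions, etc.) are assumptions for illustration, not structures prescribed by the patent.

```python
# Sketch: recording an annotation, its position in the reference plenoptic
# image, and the viewing directions from which it should be visible.
from dataclasses import dataclass, field

@dataclass
class AnnotationPlacement:
    annotation_id: str          # key into a library of plenoptic annotations
    position: tuple             # (x, y, z) in the reference model's coordinates
    visible_directions: list    # list of ((phi_min, phi_max), (theta_min, theta_max)) in degrees
    light_field_params: dict = field(default_factory=dict)  # e.g. {"wavelength": 620.0}

@dataclass
class AnnotatedReference:
    reference_image_id: str
    placements: list = field(default_factory=list)

    def attach(self, placement: AnnotationPlacement) -> None:
        """Store the position/direction association in memory."""
        self.placements.append(placement)

# Usage: attach a text annotation visible only from near-frontal directions.
ref = AnnotatedReference("building_42")
ref.attach(AnnotationPlacement("text_hello", (1.0, 0.5, 2.0),
                               [((-15.0, 15.0), (-10.0, 10.0))]))
```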
Brief description of the drawings
The invention will be better understood with the aid of the description of embodiments illustrated by the accompanying figures, in which:
Fig. 1 schematically illustrates a plenoptic capture device capturing data representing the light field of a scene with an object at a first distance.
Fig. 2 schematically illustrates a plenoptic capture device capturing data representing the light field of a scene with an object at a second distance.
Fig. 3 schematically illustrates a plenoptic capture device capturing data representing the light field of a scene with an object at a third distance.
Fig. 4 schematically illustrates a system comprising various items of equipment that together embody the invention.
Figs. 5A to 5B illustrate annotated views rendered from the same plenoptic data, where the viewpoint selected by the user at rendering time changes between the two views, causing the same annotation to be rendered differently.
Figs. 6A to 6B illustrate annotated views rendered from the same plenoptic data, where the viewpoint selected by the user at rendering time changes between the two views, causing a first annotation to be visible in the first view and a second annotation to be visible in the second view.
Figs. 7A to 7B illustrate annotated views rendered from the same plenoptic data, where the focal distance selected by the user at rendering time changes between the two views, causing a first annotation to be visible in the first view and a second annotation to be visible in the second view.
Fig. 8 is a block diagram of a method for generating and rendering a view with annotations in a plenoptic format.
Fig. 9 is a block diagram of a method for modifying the rendering of annotations when the viewer selects a different viewing direction and/or a different focal distance for a view.
Fig. 10 is a block diagram of a method for associating an annotation in a plenoptic format with reference data.
Fig. 11 is a block diagram of a method for continuously annotating a series of plenoptic images, such as a plenoptic video captured by a moving user.
Detailed description of embodiments
A traditional camera captures a 2D projection of the scene onto its sensor and generates, for each pixel, data indicating the intensity of the light, with or without colour. A plenoptic capture device, on the other hand, captures data representing the light field, i.e. a matrix indicating not only the intensity of the light but also more complete information about the light field, including the direction of the light rays.
A complete light field may comprise up to 7 parameters for describing each light ray (or the light rays at a given position): 3 for the position, 2 for the direction, 1 for the wavelength, and (in the case of video) 1 for the time. Some current plenoptic cameras deliver plenoptic data comprising 2 parameters for the position, 2 parameters for the direction and 1 parameter for the wavelength. The sensor generates plenoptic data representing a so-called plenoptic light field, i.e. a matrix indicating at least the position and the direction of the light rays. The plenoptic data generated by a plenoptic capture device thus contains more information about the light field than the traditional 2D image data generated by a traditional 2D camera.
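The two parameterizations just described can be made concrete with a small sketch. This assumes a simple two-plane representation; the field names are illustrative, not taken from the patent.

```python
# Sketch of the 5-parameter samples delivered by some plenoptic cameras
# and the fuller 7-parameter description of a complete light field.
from dataclasses import dataclass

@dataclass
class PlenopticSample:
    """5 parameters: 2 for position, 2 for direction, 1 for wavelength."""
    u: float; v: float          # position on the sensor/lens plane
    phi: float; theta: float    # direction of the ray (degrees)
    wavelength: float           # wavelength (nm)

@dataclass
class FullLightFieldSample(PlenopticSample):
    """Up to 7 parameters: adds a third position coordinate and time."""
    z: float = 0.0              # 3rd position parameter
    t: float = 0.0              # time (video)
```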
To date, at least two companies, Lytro and Raytrix, have proposed plenoptic sensors able to record such a plenoptic light field. Their two cameras differ slightly in design, but the main idea is to decompose the different directions of light that would fall on a single photosite (or pixel) of a standard camera sensor. To that end, as illustrated in Fig. 1, an array of microlenses 20 is placed behind the main lens 1, replacing the sensor of a traditional camera.
In this way, a microlens 20 redirects the light rays according to their angle of incidence, and the redirected rays reach different pixels 210 of the sensor 21. The quantity of light measured by each of the N x M pixels 210 forming a subimage depends on the direction of the light beam that hit the microlens 20 in front of that subimage.
Figs. 1 to 3 illustrate a simple one-dimensional sensor comprising n=9 subimages, each subimage having one row of N x M pixels (or photosites) 210, where N equals 3 and M equals 1 in this example. Many plenoptic sensors have a higher number of subimages and a higher number of pixels per subimage, for example 9 x 9 pixels, thus allowing N x M = 81 different orientations of the light rays on one microlens 20 to be distinguished. Assuming that all objects of the scene are in focus, each subimage comprises a patch of brightness values indicating the quantity of light reaching that subimage from various directions.
In this construction, the array of microlenses 20 is located on the image plane formed by the main lens 1 of the plenoptic capture device, and the sensor 21 is located at a distance f from the microlenses, where f is the focal length of the microlenses. This design allows a high angular resolution but suffers from a relatively poor spatial resolution (the effective number of pixels per rendered image equals the number of microlenses). This problem is addressed by other plenoptic capture devices in which the microlenses focus on the image plane of the main lens, thus creating a gap between the microlenses and the image plane. The price to pay in such a design is a poorer angular resolution.
As can be observed in Figs. 1 to 3, the plenoptic light field corresponding, in this example, to a scene with a single point 3 depends on the distance from the point 3 to the main lens 1. In Fig. 1, all light beams coming from this object reach the same microlens 20, resulting in a plenoptic light field where all the pixels of the subimage corresponding to this microlens record a first, positive light intensity, while all the other pixels corresponding to the other lenses record a different, null light intensity. In Fig. 2, where the object 3 is closer to the lens 1, some light beams coming from the point 3 reach pixels of other subimages, namely the subimages associated with the two microlenses adjacent to the microlens hit in front. In Fig. 3, where the object 3 is at a greater distance from the lens 1, some light beams coming from the point 3 reach different pixels associated with the two adjacent microlenses. The digital data 22 delivered by the sensor 21 therefore depends on the distance of the object 3.
The plenoptic sensor 21 thus delivers plenoptic data 22 containing, for each subimage corresponding to a microlens 20, a set of (N x M) values indicating the quantity of light reaching the lens above this subimage from various directions. For an object point in focus, each pixel of a subimage corresponds to a measure of the intensity of the light rays hitting the sensor with a particular angle of incidence phi (in the plane of the page) and theta (orthogonal to the plane of the page).
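To make the subimage structure tangible, the following numpy sketch regroups the (N x M) values per microlens into "sub-aperture" views, one per incidence angle (phi, theta). It assumes an idealized, axis-aligned microlens grid; real devices need calibration first.

```python
import numpy as np

def subaperture_views(raw: np.ndarray, n: int, m: int) -> np.ndarray:
    """raw: (rows*m, cols*n) sensor image; returns (m, n, rows, cols) views."""
    rows, cols = raw.shape[0] // m, raw.shape[1] // n
    # Split each microlens subimage into its m x n directional samples.
    lenslets = raw.reshape(rows, m, cols, n)
    # views[j, i] collects pixel (j, i) of every subimage: one viewing angle.
    return lenslets.transpose(1, 3, 0, 2)

# Example: the 1D sensor of Fig. 1 has n=3, m=1 pixels per microlens,
# and 9 subimages in one row.
raw = np.arange(9 * 3, dtype=float).reshape(1, 27)
views = subaperture_views(raw, n=3, m=1)   # shape (1, 3, 1, 9)
```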
Fig. 4 schematically illustrates a block diagram of an annotation system embodying the invention. The system comprises a user device 4, such as a handheld device, a smartphone, a tablet, a camera, glasses, goggles, contact lenses, etc. The device 4 comprises: a plenoptic capture device 41, such as the camera illustrated in Figs. 1 to 3, for capturing data representing the light field of a scene; a processor, such as a microprocessor 400 with suitable program code; and a communication module 401, such as a WiFi and/or cellular interface, for connecting the device 4 over a network such as the Internet 6 to a remote server 5, for example a cloud server. The server 5 comprises: a store 50, with for example an SQL database, a set of XML documents, a set of images in a plenoptic format, etc., for storing a collection of reference plenoptic data representing images and/or one or more world models; and a processor 51 including a microprocessor with computer code for causing the microprocessor to carry out the operations needed in the annotation method. Annotations with their corresponding positions can also be stored in the store 50 together with the reference plenoptic data.
The program code executed by the user device 4 can include, for example, application software (an app) downloaded by the user and installed in the user device 4. The program code can also form part of the operating code of the user device 4. The program code can also include code embedded in a web page or executed in a browser, for example Java, Javascript or HTML5 code. The program code can be stored as a computer program product in a tangible, computer-readable medium, such as flash memory, a hard disk, or any type of permanent or semi-permanent storage.
The microprocessor 400 in the user device 4 executes program code for causing the microprocessor to send the captured data set representing the light field, or features corresponding to at least some of those data sets, to the remote server 5. The program code is arranged to send the data in a "plenoptic format", i.e. without losing the information about the direction of the light rays. The program code can also cause the microprocessor 400 to receive from the server 5 annotated data in a plenoptic format, or an annotated image, or annotations related to previously sent plenoptic data, and to render a view corresponding to the captured data with the annotations.
The plenoptic annotation method can comprise two parts: an offline process and an online process. Typically, the main purpose of the offline process is to associate annotations with a reference image in a plenoptic format, or with another 2D, stereoscopic or 3D reference image.
Offline phase
In the case of a reference image in a plenoptic format, the offline process can comprise steps such as:
1. receiving from a device 4 reference data in a plenoptic format representing a light field;
2. presenting a rendered view of the plenoptic reference image, for example with a plenoptic viewer;
3. selecting a plenoptic annotation;
4. selecting the position and orientation of the annotation in the rendered view;
5. selecting one or more light field parameters of the annotation;
6. (optionally) assigning an action to the annotation;
7. associating in a memory the annotation rays with the reference image rays, based on the annotation's position and orientation.
This offline process can be executed on the server 5, in the user device 4, or in another piece of equipment such as a personal computer, a tablet, etc. Typically, this offline process is executed only once for each annotation associated with a reference image. If the selected annotation is not initially available in a plenoptic format, it can be converted into a plenoptic format.
The main purpose of the online process is to annotate a plenoptic image with plenoptic annotations. The online process can comprise two stages. The first stage can be executed by program code executed by a microprocessor in the server 5, which can comprise executable programs or other code for causing the server 5 to perform at least some of the following tasks (a sketch of this stage follows the list):
1. receiving from the device 4 data in a plenoptic format representing a light field;
2. retrieving previously stored models (reference images) and/or a plurality of reference data from the database 50;
3. matching the data received from the user device, or parts of them, with one among the plurality of reference images;
4. determining the annotations associated with the matching reference image;
5. sending the annotations in a plenoptic format, or an image annotated in a plenoptic format, to the device 4.
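A minimal sketch of this server-side stage is given below. The matcher and the data layout are illustrative assumptions (a toy nearest-feature match); the patent does not prescribe a specific matching algorithm or API.

```python
def find_best_match(data, references):
    """Toy matcher: picks the reference whose feature vector is closest."""
    best, best_dist = None, float("inf")
    for ref in references:
        dist = sum((a - b) ** 2 for a, b in zip(data["features"], ref["features"]))
        if dist < best_dist:
            best, best_dist = ref, dist
    return best

def annotate_on_server(data, references, annotations_by_ref):
    ref = find_best_match(data, references)                 # steps 2-3
    if ref is None:
        return None
    notes = annotations_by_ref.get(ref["id"], [])           # step 4
    return {"reference": ref["id"], "annotations": notes}   # step 5: sent to device 4

# Usage with toy data:
refs = [{"id": "building_42", "features": [0.1, 0.9]},
        {"id": "poster_7", "features": [0.8, 0.2]}]
notes = {"building_42": ["text_hello"]}
print(annotate_on_server({"features": [0.15, 0.85]}, refs, notes))
```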
In various embodiments, instead of sending the captured data to the remote server 5 for matching with the reference images in the server, this matching can be done locally in the user's device, with a set of locally stored reference images or with a locally stored model. In this embodiment, the server 5 is embedded in the user device 4. The online process can be performed several times, according to the user's needs.
The second stage of the online process can be executed by program code executed by a microprocessor in the device 4, which can comprise executable programs or other code for causing the device 4 to perform at least some of the following tasks:
1. receiving from the server 5 the annotation data, possibly in a plenoptic format, together with the associated actions;
2. applying the received annotation data to the captured plenoptic light field;
3. rendering the annotated light field into a view that the user can watch;
4. interpreting user interactions and executing the associated annotation actions.
In various embodiments, instead of applying the received annotations to the captured plenoptic light field on the device 4, this step can be done on the server 5 side. In this case, either the final rendered view or the whole annotated light field is transmitted back to the device 4.
The user can therefore associate an annotation with a specific position and orientation in the rendered view of a plenoptic reference image, and indicate one or more light field parameters that the annotation uses in this particular view. During rendering of a view, the same annotation can be rendered differently depending on the viewpoint selected by the viewer. Since the light field parameters of the annotation can change, a first annotation can be replaced by a second annotation at the same position if the viewer selects a different viewpoint.
An example of a flowchart for the offline process is illustrated in Fig. 10. This flowchart illustrates a method allowing a user to select an annotation to be associated with a reference image, along with the position, orientation and light field parameters of the annotation, so that the annotation will be applied to the captured plenoptic images matching this plenoptic reference image.
This method can use an annotation authoring system that runs locally in the user's device 4. The annotation authoring system can also be hosted on a server 5, with some tools presented as a web platform at this server 5 for managing annotations and relating them to plenoptic reference images. Services such as augmented reality usage statistics can also be available from the web platform. The annotation authoring system can also run on a different server or equipment, including a user's personal computer, tablet, etc.
In step 150, the user selects a reference image, for example an image in a plenoptic format. This image is uploaded onto the plenoptic authoring system and used as the supporting image for the annotations.
As part of the plenoptic authoring system, a viewer renders the uploaded data to the user in such a way that the user can visualize them. If the data are in a plenoptic format, which cannot easily be understood by humans as such, this may involve using a plenoptic rendering module to render the plenoptic model in a space intelligible to the user. The viewer constitutes the tool that manipulates the plenoptic data and places the annotations at the desired position and orientation with respect to a given view, but all the processing combining the plenoptic annotations with the plenoptic data is done directly in the plenoptic space.
In one embodiment, the plenoptic model can be rendered as a 2D view, so that the user can visualize this 2D view from one viewpoint at a time and with one focal distance at a time, allowing him to understand and edit the plenoptic model. Controls are available for navigating from one 2D view to another, so that another 2D view can be displayed on request.
In another embodiment, the plenoptic model can be rendered as a partial 3D scene, in which the different directions of the light rays can be visualized. The main difference with a standard full 3D scene is that the exploration of the 3D scene is limited when it is rendered from a plenoptic model: for example, the viewing directions and the viewing positions are limited to those captured by the plenoptic capture device.
In step 151, the user selects the plenoptic annotation he wants to associate with a specific element or position of the plenoptic model. As already mentioned, a plenoptic annotation is defined in the plenoptic space and is thus described with light rays. Those rays can describe, for example, text, images, videos, or other elements acting directly on the rays of the plenoptic image. A plenoptic annotation can be retrieved, for example, from a library of plenoptic annotations in a database or in a file browser. A plenoptic annotation can also be created on the fly, for example by capturing it with a plenoptic capture device, entering text with a text editor, drawing an image, and/or recording sound or video.
In one embodiment, the plenoptic annotations can be presented as previews in a library or list on the authoring system. A plenoptic annotation preview corresponds to a rendering of the annotation for a default view. This default view can be chosen at random or, in a preferred embodiment, as the median view of the plenoptic range of positions and directions covered by the annotation. Previews allow the user to get a quick and clear idea of what a plenoptic annotation corresponds to. For the general class of annotations that do not act on the wavelengths of the model, and which are thus not visible as such, the preview illustrates the annotation applied to the centre of the current model view rendered by the authoring system. Therefore, if an annotation of this type only has the effect of rotating all the model rays by 10 degrees, the preview consists of the central part of the view rendered from the current model, in which each ray has been rotated by 10 degrees.
In step 152, the user selects, with the plenoptic annotation authoring system, the position in the coordinate system of the rendered view of the selected reference model at which he wants to add the plenoptic annotation. This can be done, for example, by dragging the annotation from the annotation preview list onto the desired position in the displayed view, and possibly translating, rotating, resizing, cropping and/or otherwise editing the annotation. Alternatively, the user can also enter the coordinates as values in a control panel.
In step 152', the user can adjust the parameters of the annotation rays to generate another view of the annotation. When the user uses, for example, a computer mouse pointer to change the orientation of the annotation, i.e. to change the parameters of the annotation, the annotation rays are combined with the rays of the plenoptic model, and a new 2D view is generated in the viewer for each new position or orientation. This is made possible by projecting the user's mouse pointer and its movements into the plenoptic space. The movements of the pointer are then applied to the annotation in a plane parallel to the virtual plane corresponding to the 2D rendered view.
Once the rays of the plenoptic model and of the annotation are combined, the effect of the annotation is applied to the rays of the reference image. The process of superimposing a plenoptic annotation can be seen as a process of modifying light rays. The captured plenoptic data can contain information about the direction and the wavelength (i.e. colour) of each light ray, so that an annotation can be regarded as a modification of those parameters. For example, text attached to the surface of an object can be seen as a modification of the wavelengths of the rays at a specific region of that surface.
The effect produced by an annotation is determined by the annotation itself. In one embodiment, a plenoptic annotation consists, for example, only of opaque text. In this case, the wavelengths of the model rays are replaced by the wavelengths of the annotation rays for the rays that are mapped. For other annotations, for instance annotations that change the texture of the model, the directions of the model rays can be changed by the annotation so as to reflect the new texture. In another example, the positions of the model rays can be changed by the annotation.
A plenoptic annotation can thus be seen as a filter that modifies light rays. This provides more possibilities for displaying annotated scenes. A further example of such processing is changing the direction of the light rays. As an embodiment, a lighting effect can be applied to the rays incident from a particular object in the captured plenoptic image by adding randomness to the directions of the rays, making the annotated object appear reflective. Another example is the modification of surface attributes, such as texture information. Since a plenoptic annotation allows the modification of ray variables such as direction and wavelength, it is possible to modify the surface of an object by jointly modifying these variables, as if a texture were applied onto it. For example, a plenoptic annotation can change a flat surface with a red colour into a rough surface with a yellow colour by modifying directions and wavelengths.
The information describing the effect that an annotation has on the model rays can be stored in a plenoptic annotation array, as described in step 154.
In step 153, the user selects one or more annotation light field parameters, for example the wavelengths of the annotation in order to change its colour. The user can also define different appearances for the same annotation viewed from different directions, or even different annotations associated with the same element viewed from different directions.
Alternatively, once the annotation has been successfully adjusted in the rendered plenoptic model, the user can select another view by navigating in the plenoptic viewer. The plenoptic annotation is automatically carried over onto the new view of the plenoptic model. The user can then decide to edit the annotation, change its light field parameters, or change its appearance for this particular view. He can proceed in the same way for all the available views of the plenoptic model.
An interpolation process can take place between a first and a second view of the plenoptic annotation, to prevent the user from having to navigate through all the views of the plenoptic model. These two views of the plenoptic annotation need not be contiguous. The user only has to specify the appearance of the annotation in the two views, and the plenoptic authoring system will automatically generate the intermediate views of the plenoptic annotation. Views of the plenoptic model that have not been associated with the annotation will not display it, so that the annotation may not be rendered for certain viewpoints or focal planes of the scene.
A plenoptic annotation can comprise data corresponding to light rays described with a set of parameters. When rendering the plenoptic annotation for a first particular view, the viewer sets some of the parameters and allows the user to modify the others. Navigating from this view to a second view, the user can modify the parameters that are not fixed by the viewer. The interpolation process then automatically computes the ray parameters of the plenoptic annotation between these two views.
In one embodiment, the parameters of each plenoptic annotation can be as follows: 3 (or possibly 2) parameters for the position of the ray in space, 2 parameters for its direction, 1 parameter for its wavelength, and possibly 1 parameter for the time. For a particular view rendered by the plenoptic viewer, the position, direction and time parameters can be set, for example, by the viewer. The user can then change the parameters not fixed by the viewer, in this example the wavelength of the rays. Let us assume the user sets it to a first value v1. For another view of the annotation, i.e. for different values of the position, direction and time parameters, let us assume the user changes the wavelength for this second view and sets it to, say, v2. The interpolation process aims at computing annotation values between v1 and v2 for views whose position, direction and time parameters lie between those associated with the first and second views. In other embodiments, the interpolation can also compute values of the position, direction, wavelength and/or time parameters of the plenoptic data.
Concrete examples of interpolation include, for example: a change of the colour of the plenoptic annotation, for instance from orange to a slightly redder colour; or a change of the visibility of the annotation, where the annotation is visible for one particular view and hidden for another.
Different interpolation methods are possible, including linear, quadratic or higher-order interpolation between the two views of the annotation. More advanced interpolation methods can also take further features of the scene, or of the annotation itself, into account to generate the new annotation rays.
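The linear case described above can be sketched as follows. View "positions" are reduced to a single scalar coordinate for clarity; the real parameter space has position, direction and time components, and the function name is an illustrative assumption.

```python
def interpolate_wavelength(view_a: float, v1: float,
                           view_b: float, v2: float,
                           view: float) -> float:
    """Linearly interpolate the annotation wavelength for an intermediate view."""
    if not min(view_a, view_b) <= view <= max(view_a, view_b):
        raise ValueError("view lies outside the two authored views")
    t = (view - view_a) / (view_b - view_a)
    return (1.0 - t) * v1 + t * v2

# Example: orange (~600 nm) at view 0, slightly redder (~640 nm) at view 1.
print(interpolate_wavelength(0.0, 600.0, 1.0, 640.0, view=0.25))  # 610.0
```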
In step 153', actions can also be associated with all or some of the annotations displayed on the captured image. These actions can be triggered by the user or executed automatically, for example with a timer. Actions include: launching a web browser with a specific URL; animating an annotation, for example making an annotation move, appear or disappear; playing a video; opening a menu presenting further possible actions; starting a slideshow; or playing an audio file. Actions that modify the view of the plenoptic data presented to the user are also possible, such as an action that focuses the view of the plenoptic data at a given focal distance.
In step 154, the plenoptic annotation is stored in a memory, for example in the database 50 or in the user's device, associated with its corresponding position and orientation and possibly with the reference plenoptic model. Knowing the required annotations, it is possible to store in a plenoptic format the annotations attached to each reference plenoptic model. Each annotation can be stored as a separate plenoptic file.
The annotated reference data are generated from the plenoptic reference data and the corresponding one or more plenoptic annotations. This augmented reality model takes the form of a file containing all the information required to render the plenoptic model with the annotations associated with it. It therefore describes the relations between the plenoptic reference data and its annotations. The annotated reference data can be rendered directly on the authoring system to pre-visualize the result of the plenoptic annotations, and rendered directly on the client side to produce a given plenoptic augmented reality.
The information describing the effect of an annotation on the model rays is stored in the plenoptic annotation data. The modifications defined by the annotation are applied to the model ray parameters. An annotation can therefore describe a modification of, for example, the direction, position, time or wavelength of the model rays. In other words, this information describes a function of the model rays.
When an annotation is created, each ray of the annotation is assigned a unique identifier. When the annotation is applied on the authoring system, the unique identifiers of the annotation rays are matched to their corresponding model rays. Each model ray is thus assigned an annotation ray identifier, which is then used by the system whenever the annotation must be applied to the model rays, as is essentially the case in the online phase.
The annotation information can be stored in a 2-dimensional array, where each ray contains the information about its effect on the model for each parameter. The unique identifier of an annotation ray is then used to define the corresponding ray effect in the array for each parameter. In other words, the first dimension of the array corresponds to the rays, referred to by their identifiers, and the second dimension corresponds to their parameters, i.e. the light field parameters. Any annotation can be fully expressed in this form, since any modification of the model rays for any parameter can be expressed in such an array.
In one embodiment, an annotation can, for example, modify the direction of all the model rays by an angle of 10 degrees. As illustrated in Table 1 below, the 2-dimensional array then contains 10 degrees in the column of the parameter corresponding to the orientation angle. This column reads 10 degrees for all the rays, since they are all assumed to act in the same way. When the annotation is to be applied to its corresponding model rays, the system first identifies the annotation ray/model ray pairs, extracts the corresponding annotation ray unique identifiers, and queries the annotation table to check what effect each annotation ray has, so as to finally apply this change to the model rays. In this example, the angles of all the model rays affected by the annotation will be rotated by 10 degrees.
Table 1. Annotation array.
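A sketch of this 2-dimensional annotation array is given below, with rows indexed by the annotation ray unique identifiers and columns for the light field parameters. The column layout (position, direction, wavelength offsets) is an illustrative assumption.

```python
import numpy as np

N_RAYS = 4
PARAMS = ("d_position", "d_direction_deg", "d_wavelength")

annotation_array = np.zeros((N_RAYS, len(PARAMS)))
annotation_array[:, PARAMS.index("d_direction_deg")] = 10.0  # rotate all rays by 10 deg

def apply_to_model_ray(model_ray: dict, ray_id: int) -> dict:
    """Look up the annotation effect for ray_id and apply it to a model ray."""
    offsets = annotation_array[ray_id]
    return {
        "position": model_ray["position"] + offsets[0],
        "direction_deg": model_ray["direction_deg"] + offsets[1],
        "wavelength": model_ray["wavelength"] + offsets[2],
    }

print(apply_to_model_ray({"position": 0.0, "direction_deg": 45.0,
                          "wavelength": 600.0}, ray_id=2))
# {'position': 0.0, 'direction_deg': 55.0, 'wavelength': 600.0}
```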
As an example of the offline phase, a user may want to add a text annotation to a scene containing a building. In addition, the colour of the text annotation needs to change from one viewpoint to another. The following steps are then carried out by the user:
1. a plenoptic capture of the building is uploaded to the plenoptic annotation authoring system
2. a 2D view is rendered from the captured plenoptic image and presented to the user
3. the user selects the text annotation type from the list of annotation types, enters his text and drags the text annotation onto the rendered 2D view
4. the user can move the viewpoint of the rendered 2D view, or the position and orientation of the annotation, until the annotation appears exactly as the user wants
5. the user sets the text colour for the currently rendered viewpoint
6. the user moves the viewpoint of the rendered plenoptic image to another position
7. the user sets the text colour to another value for this other viewpoint
8. the plenoptic annotation model is then saved and ready for the online phase of the annotation process.
Based on the user action steps described above for the text annotation, the plenoptic annotation authoring system performs the following tasks to generate the appropriate annotation model:
1. a 2D view is rendered to the user based on a viewpoint setting initially set to a default value
2. a plenoptic version of the text annotation is generated by tracing rays from the text object to a virtual viewpoint. This creates a set of rays, each described by a unique identifier. This set of rays describes the text. These rays are represented in memory by an array corresponding to the modifications that must be applied to the reference plenoptic image. In this case, the array contains the wavelength values that the rays matched with the annotation rays must take
3. the annotation is initially located at a default position pre-defined in the authoring tool. The annotation rays are combined with the rays of the reference plenoptic image. These relations between the annotation rays and the reference image rays are stored, using the ray unique identifiers, for future use
4. when the user uses, for example, a computer mouse pointer to move the annotation or to change its orientation, the different rays of the annotation are combined with other rays of the captured plenoptic image, and a new 2D view is generated for each position or orientation change. This is made possible by projecting the user's mouse pointer into the plenoptic space. The translations of the pointer are then applied to the annotation in a plane parallel to the virtual plane corresponding to the 2D rendered view. When the annotation is moved, the relations between the annotation rays and the reference image rays are updated according to the change of the annotation's position or orientation
5. when the user selects a colour for the text for the current viewpoint, the wavelength values of the annotation array are changed to match the selected colour
6. when a new viewpoint is selected and a new text colour is chosen, the wavelength values of the annotation array corresponding to the rays used to generate this newly rendered view are changed. Wavelength values between the first and the second viewpoint are interpolated using standard or custom interpolation methods
7. when the user saves the model, the plenoptic annotation array is saved with the uploaded plenoptic reference model, so that it can be used in the online phase.
Online phase
As explained before, the online phase of the whole annotation process takes place when a user who captures a plenoptic image wants this image to be annotated.
The online phase of the annotation process is applied to an input plenoptic image to obtain the final annotated plenoptic image. It consists of: matching the input image against some reference models, retrieving the annotations of the matched reference model, combining the annotations with the input plenoptic image, rendering a view of the annotations to the user in an intelligible form, and possibly processing user interactions to trigger the different actions defined in the annotations.
Since the annotation content, made of light rays, is in a plenoptic format and the captured image is also in a plenoptic format, those two data sets are in the same space. The annotations can thus be applied directly to the plenoptic image without any further projection. The annotated plenoptic space can then be projected, for example into a 2D view. This also means that the projection parameters selected for the plenoptic rendering process (such as a change of viewpoint, a selection of depth or focus, etc.) are implicitly applied to the plenoptic annotations as well. For example, when the focus or the viewpoint of the rendering process is changed, the annotations will have the corresponding effects applied to them.
The online plenoptic annotation process, illustrated in Fig. 8, comprises a first step 100 during which data representing a light field in a plenoptic format (plenoptic data) are retrieved. The plenoptic data can be retrieved by the device 4 that captures them with a plenoptic capture device, or by an equipment such as the server 5 that receives the plenoptic data from the device 4 over a communication link.
In step 101, the retrieved data are matched with reference data. This step can be executed in the device 4 and/or in the server 5. It may involve: determining a set of features in the captured data, finding matching reference data representing a reference image with matching features, and registering the captured data with the reference data, as described for example in US13645762. The reference data can represent an image in a plenoptic format, or another image, and can be stored in the store 50, for example a database accessible from a plurality of devices. The identification of the matching reference data can be based on the user's location, the time, the signals received from elements of the scene, indications provided by the user, and/or image similarity. The registration process aims at finding the geometric relation between the user's location and the reference data, so that the transformation between the rays of the captured plenoptic image and the rays of the matching plenoptic reference image can be deduced.
In step 102, the plenoptic annotations associated with the matching reference data are retrieved, for example from the store 50. These annotations are in a plenoptic format, i.e. described with light rays. Those annotation rays can represent, for example, text, still images, video images, logos, and/or other elements acting directly on the rays of the plenoptic image.
An annotation can also include sound in the plenoptic space, for example a sound attached to a particular group of rays of the plenoptic reference image, so that the sound is played only when the viewing direction and/or the focus selected in the plenoptic image makes those rays visible.
In step 103, the retrieved annotations in a plenoptic format and the captured plenoptic data are combined to generate annotated data representing an annotated image in a plenoptic format. This combination can take place in the server 5 or in the device 4. In the latter case, the server 5 can send the annotation data to the device 4, which then performs the combination. This combination is made possible because the transformation projecting the reference image rays onto the captured plenoptic image rays is known from the matching step (step 101). The annotations can therefore be applied to the captured plenoptic image.
A plenoptic annotation can be applied to the captured plenoptic image with the following method:
1. find the transformation that projects the reference plenoptic image rays onto the rays of the online plenoptic image retrieved in step 100 of Fig. 8;
2. for each retrieved annotation of the reference plenoptic image defined in the offline phase:
  1. identify which rays of the reference plenoptic image must be modified according to the annotation, by reading the annotation array defined in the offline phase
  2. project the rays identified in point (1) onto the online plenoptic image. This creates the correspondences between the selected rays of the reference plenoptic image and the rays of the captured plenoptic image
  3. for each ray of the captured plenoptic image selected at point (2), apply the transformation defined in the plenoptic annotation array. The array is used as a lookup table, where the rays identified by the selection process of steps (1) and (2) and the parameters of the transformation (such as wavelength, direction, etc.) are used as lookup keys.
As an example, if the annotation rays represent text, the annotation array will contain a single non-null light field parameter, namely the wavelength corresponding to the text colour. The rays of the captured plenoptic image are then modified by increasing/decreasing their wavelength by the factor stored in the annotation array. This factor is looked up in the array using the ray correspondences computed during the registration process.
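The text example just described can be sketched as follows. The registration transform is represented as a mapping from reference ray identifiers to captured ray identifiers; all the data structures are illustrative assumptions.

```python
def apply_text_annotation(captured_rays: dict,
                          ref_to_captured: dict,
                          wavelength_offsets: dict) -> None:
    """Modify the captured rays in place using the annotation wavelength offsets."""
    for ref_id, offset in wavelength_offsets.items():       # step (1): rays to modify
        cap_id = ref_to_captured.get(ref_id)                # step (2): project onto capture
        if cap_id is not None and cap_id in captured_rays:  # unmatched rays are skipped
            captured_rays[cap_id]["wavelength"] += offset   # step (3): apply the lookup

captured = {7: {"wavelength": 550.0}, 8: {"wavelength": 550.0}}
apply_text_annotation(captured, ref_to_captured={1: 7, 2: 8},
                      wavelength_offsets={1: 70.0, 2: 70.0})
print(captured[7]["wavelength"])  # 620.0: the ray now carries the text colour
```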
In step 104, a view, for example a 2D or stereoscopic view, is rendered from the annotated data and displayed to the user/viewer, for example on the display 40 or with another equipment. This view rendering process is described in more detail below together with Fig. 9.
In step 105, interactions with the annotations are made possible. The system can react to different events by executing the specific actions defined earlier in the offline part of the annotation process. Such an event can be an interaction of the user with an annotation. By means of a touch screen, a hand tracking sensor or any other input device, the user can point at a given annotation and interact with it. This interaction generates an interaction event, which can trigger a specific action defined in the offline phase of the annotation process.
Another possible type of event is an event triggered when a specific change in the scene is detected. As explained later in this section, the occlusion of an object of the reference model in the captured plenoptic image can be detected. This occlusion event can trigger an action previously defined in the offline phase of the annotation process. As another example of a possible event triggering an annotation action, a sound recognition module can be used to trigger specific actions based on the particular type of sound detected.
Fig. 9 illustrates the rendering of a view and the various possibilities for the viewer to subsequently modify the rendering. As previously indicated, an augmented reality view is rendered in step 104 from the captured view data and the annotation data in a plenoptic format, as described before with Fig. 8. The rendered view can be a standard 2D view as produced by a pinhole camera, a stereoscopic view, a video, a holographic projection of the plenoptic data, or preferably a dynamic image module that responds to commands for refocusing and/or changing the viewpoint of the presented image. The dynamic image module can be an HTML5/Javascript web page or a Flash object able to render the plenoptic image as a function of command values, or any other technology allowing a dynamic presentation of images. Examples of views that can be rendered during step 104 are illustrated in Figs. 5A, 6A and 7A. The views in Figs. 5A and 6A comprise an object 60 with an annotation 61. In the view of Fig. 7A, an additional object 62 at a different depth, and therefore out of focus, is also visible. Refocusing or changing the viewpoint can be triggered by the user manually (for example by selecting an object or a position on or around the image) or automatically (for example when the user moves).
In step 105, the user enters a command for modifying the viewpoint, in order to produce during step 107 a novel view from the same plenoptic data, corresponding to the same scene seen from a different viewpoint. Algorithms for generating from plenoptic data various 2D images of a scene seen from different viewpoints or viewing directions are known as such and are described, for example, in US6222937. An example of a modified 2D image produced by this command and executed by the viewpoint selection module 403 is illustrated in Fig. 5B. As can be seen, the perspective views of both the object 60 and the annotation 61 are modified by this command. Indeed, since the annotation is directly expressed in the plenoptic space represented by the input plenoptic data, when a view is generated from the plenoptic space, the annotation appears transformed in the same way as the plenoptic image. This produces more realistic annotations.
Some annotations may be visible only from a first set of viewing directions, but not from other directions. Therefore, as illustrated in Fig. 6B, a change of viewpoint during step 105 can result in a new view where the annotation 61 becomes invisible while a new annotation 64 associated with the same object appears. Multiple annotations can be associated with a single position of the reference image, but with different viewing directions. Since different annotation light field parameters can be set in the offline phase of the annotation process, the annotation itself can also appear differently when rendered from a second viewing direction different from the first one. The change of appearance can be defined by the annotation itself, but it can also be a function of the input plenoptic image.
In step 106 of Fig. 9, the user enters a command for refocusing the image and generating, from the data in a plenoptic format, a new image focused at a different distance. This command can be executed by the refocusing module 402. As can be seen in Figs. 7A and 7B, this can cause a first annotation 61, visible at a first focal distance, to disappear or become less sharp at the second focal distance illustrated in Fig. 7B, while a second annotation 63 appears only at this second focal distance.
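Refocusing from plenoptic data is classically done by shifting and averaging sub-aperture views; the sketch below shows this standard technique (it is not code from the patent). Because the annotation is synthesized in the plenoptic data, the same operation refocuses it along with the scene.

```python
import numpy as np

def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
    """views: (m, n, H, W) sub-aperture images; alpha: focus parameter."""
    m, n, H, W = views.shape
    out = np.zeros((H, W))
    for j in range(m):
        for i in range(n):
            dy = int(round(alpha * (j - (m - 1) / 2)))
            dx = int(round(alpha * (i - (n - 1) / 2)))
            # Shift each view towards the chosen focal plane and accumulate.
            out += np.roll(np.roll(views[j, i], dy, axis=0), dx, axis=1)
    return out / (m * n)

views = np.random.rand(3, 3, 32, 32)    # toy 3x3 set of sub-aperture views
image_near = refocus(views, alpha=2.0)  # focus on a nearer plane
image_far = refocus(views, alpha=-1.0)  # focus on a farther plane
```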
The different commands for changing the rendered view in steps 105 and 106 can also be issued automatically in response to the user's movements. In one embodiment, the user's movements are tracked with an inertial measurement unit (IMU) embedded in the plenoptic capture device. Using this module, the rendered view is automatically updated when the user moves. For example, when the user moves to the left, the viewing direction moves slightly to the left. The same principle applies when the user moves forward: the focusing range also moves forward, producing sharper objects in the background plane and softer objects in the foreground plane compared with the previously rendered view. The invention is not limited to using an IMU to track the user's movements; other means of tracking the user's movements, for example directly using the content of the plenoptic images, can also be used.
In another embodiment, the online plenoptic annotation process is continuously applied to a stream of plenoptic images generated by the plenoptic capture device of a moving user. This continuous process allows the user to move constantly, or to move his plenoptic capture device, while the plenoptic annotations are updated in real time. The plenoptic image stream and the view rendering (step 104 of Fig. 8) must be processed in real time, so that the user perceives the annotations as if they were part of the scene. In this embodiment, the fact that the viewing direction can be modified without a new plenoptic capture allows the same effect to be achieved while processing a much lower number of plenoptic images from the stream. Indeed, if we assume that a single plenoptic capture allows the rendering of particular views within a range of viewing directions, then as long as the user does not move out of this range, no plenoptic image from the stream needs to be processed, and only step 104 of Fig. 8 needs to be executed again. This opens a new possibility: a more computationally efficient real-time tracking where new plenoptic image frames are processed asynchronously, as the user approaches the border of the viewing range, so that the user perceives no delay when a new frame has to be processed.
An example of a method for annotating a live plenoptic image stream is illustrated in Figure 11:
Steps 200, 201, 202 and 203 of Fig. 11 are similar or equivalent to steps 100, 101, 102 and 103 of Fig. 8.
In step 204, a viewing direction parameter is computed as a result of the registration process of step 201.
In step 205, a view is rendered based on the viewing direction computed in the previous step.
In step 206, the inertial measurement unit (IMU) is used to determine how much the user has moved since step 200 was carried out. A decision is then taken: either go back to step 200 to process a new plenoptic image, or go directly to step 204 to update the viewing direction parameters based on the movement estimated by the IMU. The amount of movement is used to decide whether the previously captured plenoptic data can still be used to generate the novel view; this typically depends on the field of view of the plenoptic capture device. A minimal sketch of this decision loop is given below.
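The following Python sketch of the decision loop of Fig. 11 is illustrative only; the helper callables, the stub values and the angular reuse threshold are assumptions of this example, not part of the patent.

    def annotate_live(imu_readings, capture, register, render,
                      reuse_threshold_deg=5.0):
        """Process a new plenoptic frame only when the motion reported by the
        IMU exceeds what the previous capture can cover (decision of step 206)."""
        frame, view_dir = None, 0.0
        for moved_deg in imu_readings:
            if frame is None or abs(moved_deg) > reuse_threshold_deg:
                frame = capture()           # steps 200-203: capture, matching,
                view_dir = register(frame)  # annotation retrieval, annotation
            else:
                view_dir += moved_deg       # step 204: update viewing direction
            render(frame, view_dir)         # step 205: render the view

    # Stub callables keep the sketch self-contained and runnable.
    annotate_live([0.0, 2.0, 8.0],
                  capture=lambda: "frame",
                  register=lambda f: 0.0,
                  render=lambda f, d: print(f"render {f} at {d:+.1f} deg"))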
The rendering of plenoptic annotations may take possible occlusions into account. A plenoptic annotation may be occluded if the element it annotates is hidden, from the viewpoint of the capture device, by another object of the input plenoptic image.
In one embodiment, the rendering module exploits the plenoptic format of the captured data to visually hide annotations located behind unrelated objects. From the plenoptic reference data, the rendering module knows the properties that the captured rays should have for each element of the captured plenoptic image. If a captured ray has properties different from those expected for an element, this may indicate that an occluding object stands in front of the element, and the annotation for this element must therefore not be displayed.
In a similar way, if the rays corresponding to an element of the captured image have directions different from the corresponding directions in the reference image, this may indicate that the element is at a different depth. The rendering module can use this information to detect occlusions. Additionally, the colour information of the rays may also be used to determine whether a captured element is occluded. Colour information alone is however not sufficient, since an occluding object may have the same colour as the annotated element.
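A minimal Python sketch of such an occlusion test follows; the ray representation and the thresholds are assumptions made for this example.

    def element_occluded(captured_ray, reference_ray,
                         max_dir_diff=0.1, max_color_diff=0.2):
        """Flag an occlusion when the captured ray's direction deviates from
        the reference ray (suggesting a different depth); colour is used only
        as a supporting cue, since an occluder may share the element's colour."""
        dir_diff = sum((c - r) ** 2 for c, r in zip(captured_ray["direction"],
                                                    reference_ray["direction"])) ** 0.5
        if dir_diff > max_dir_diff:
            return True
        color_diff = sum(abs(c - r) for c, r in zip(captured_ray["color"],
                                                    reference_ray["color"])) / 3
        return color_diff > max_color_diff

    ref = {"direction": (0.0, 0.0, 1.0), "color": (0.8, 0.2, 0.2)}
    cap = {"direction": (0.3, 0.0, 0.95), "color": (0.1, 0.1, 0.1)}
    print(element_occluded(cap, ref))  # True: suppress the annotation here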
Applications
The process of annotating plenoptic images, with annotations provided in a plenoptic format, i.e. in the same space as the annotated image, brings new applications to augmented reality.
A first example of application is the use of the plenoptic annotation system in a social context. Users can capture plenoptic images of objects or scenes with their plenoptic capture devices. The captured plenoptic images can then be annotated by the user with various annotations, including previously captured plenoptic images used as annotations. The annotated scenes can subsequently be shared with the user's friends through social networks, so that those friends can experience the annotated scene when they capture it with their own plenoptic capture devices. The advantage of using the plenoptic annotation process in this case is leveraged by the fact that, since the annotations are plenoptic images, they lie in the plenoptic space. Carrying out the annotation process in the same plenoptic space is therefore more computationally efficient and produces more realistic annotated scenes.
A second example of application exploiting the distinctive information of the plenoptic space is the use of specially designed plenoptic annotations in the field of architectural design. As described earlier in the invention, a plenoptic annotation consists of rays of light which are combined with the rays of the plenoptic image during the online phase. The way these rays are combined is defined in the offline part of the annotation process. The combination can be such that rays of the plenoptic image are not replaced by rays of the annotation; instead, for example, only their directions are changed. By defining annotations which modify not only the wavelength of the rays of the plenoptic image but also, for example, their directions, it becomes possible to simulate a change of material or texture in the captured scene. In the case of architectural design, plenoptic annotations can advantageously be used to simulate, for example, how a particular room or building would look if a different material were applied to its walls. In another embodiment, a simulation of weather conditions can be applied to the captured plenoptic image. An annotation simulating rain can be applied to the scene. This produces an annotated image with a rain effect applied to it, so that the user can visually see how the scene would look under rain or other weather conditions, where, thanks to the plenoptic information, the different reflections and refractions of light are handled and computed in a realistic manner.
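As an illustration of an annotation that filters rays instead of replacing them, the following Python sketch perturbs ray directions while leaving the wavelength (here, a colour triple) untouched, a crude stand-in for a change of surface material; the scattering model and all names are assumptions of this example.

    import random

    def apply_material_annotation(rays, roughness=0.05, seed=0):
        """Randomly scatter ray directions to mimic a rougher surface,
        keeping each ray's colour (wavelength) unchanged."""
        rng = random.Random(seed)
        out = []
        for ray in rays:
            d = [c + rng.uniform(-roughness, roughness) for c in ray["direction"]]
            norm = sum(c * c for c in d) ** 0.5
            out.append({"direction": [c / norm for c in d],
                        "color": ray["color"]})  # wavelength left untouched
        return out

    rays = [{"direction": [0.0, 0.0, 1.0], "color": (0.7, 0.7, 0.6)}]
    print(apply_material_annotation(rays)[0]["direction"])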
As another example, treasure hunting is a popular application of traditional two-dimensional augmented reality solutions. It consists of attaching annotations to physical objects and letting friends or other people search for these annotations (called treasures) by giving them clues. In other words, when someone is close to a hidden object, he can scan the surrounding objects with his plenoptic capture device to determine whether they are associated with an annotation. Treasure hunting becomes more exciting with plenoptic annotations, since the visibility of an annotation can be restricted to certain viewing directions or focal distances. For example, a user may attach an annotation to a statue and decide that the annotation is revealed only when the future searcher stands in front of the statue and thus sees it from that angle. Similarly, the refocusing capabilities of the plenoptic space can be used to ensure that the searcher focuses on the statue itself, the annotation being displayed only in that case. This makes treasure hunting more engaging, since it prevents the user from finding the treasure by randomly scanning his surroundings and truly forces him to solve the riddle.
Another application concerns city guides in urban environments. For example, consider a user visiting a city and looking for tourist attractions such as historical monuments, places of interest, statues, museums, local restaurants, and so on. Using his augmented reality system, the user certainly does not want all the information to appear on his screen at once: he would only be confused by all this content visually overlapping on the screen. Instead, the plenoptic annotations can be made dependent on the user's viewpoint and focus. For example, elements of the captured image seen by the user under a certain viewing angle (or within a certain range of viewing angles) may be displayed with a lower importance than elements the user is facing. In one embodiment, annotations of low importance may be displayed merely as titles or points on the screen, which the user can expand by clicking on them, while more important points of interest are presented with more detail, at a larger size, or with greater emphasis on the image.
The ability to select viewing directions from which an annotation is not visible is attractive for vehicle drivers, who may for example want augmented reality images on a navigation display but do not want to be distracted by annotations attached to elements irrelevant to the traffic, such as advertisements, shops, etc. In this case, the distracting annotations can be associated with a selected range of orientations, so that they are not displayed on images captured from the road.
Terms and definitions
The various operations of the methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software component(s), circuits and/or module(s). Generally, any operation described in the application may be performed by corresponding functional means capable of performing the operation. The various means, logical blocks and modules may include various hardware and/or software component(s) and/or module(s), including but not limited to circuits, application-specific integrated circuits (ASICs), general-purpose processors, digital signal processors (DSPs), field-programmable gate arrays (FPGAs) or other programmable logic devices (PLDs), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A server may be implemented as a single machine, a set of machines, a virtual server or a cloud server.
As used herein, the expression "plenoptic data" designates any data, generated with a plenoptic capture device or computed from other types of data, that describes a light field image of a scene, i.e. an image storing not only the brightness and colour of the light but also the directions of the rays. A 2D or stereoscopic projection rendered from such plenoptic data is not regarded as a plenoptic image, since the directions of the rays are lost.
As used herein, the expression "plenoptic space" may designate a multidimensional space with which a light field, i.e. a function describing the amount of light travelling in every direction through every point in space, can be described. A plenoptic space can be described by at least two parameters for the position of a ray, two parameters for its orientation, one parameter for its wavelength and, possibly in the case of video, one parameter for time.
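In one common parameterization, given here purely for illustration since the patent fixes no particular notation, such a light field can be written as

    L = L(x, y, \theta, \phi, \lambda, t)

where (x, y) parameterize the position of a ray, (\theta, \phi) its orientation, \lambda its wavelength and t, in the case of video, time.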
As used herein, the term "annotation" encompasses a wide variety of possible elements, including for example text, still images, video images, logos, sounds and/or other elements that can be applied to, or otherwise integrated into, a plenoptic space represented by plenoptic data. More generally, the term annotation encompasses different ways of altering different parameters of the rays of the plenoptic space represented by the plenoptic data. Annotations may be dynamic and change their position and/or appearance over time. Furthermore, annotations may be user-interactive and react to the user's actions, for example moving or transforming upon user interaction.
As used herein, the term "pixel" may designate a single light-sensitive site, or a plurality of adjacent light-sensitive sites for detecting light of different colours. For example, three adjacent sites for detecting red, green and blue light can form a single pixel.
As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (for example looking up in a table, a database or another data structure), ascertaining, estimating and the like. Also, "determining" may include receiving (for example receiving information), accessing (for example accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like.
Capturing an image of a scene involves using a digital pinhole camera to measure the brightness of the light reaching the image sensor of the camera. Capturing plenoptic data may involve the use of a plenoptic capture device, or may involve generating light field data from a virtual 3D model or from another description of the scene and of the light sources. Retrieving an image may involve capturing it, or retrieving it from a different device over a communication link.
The expression "rendering a view", for example "rendering a 2D view from plenoptic data", encompasses the action of computing or generating an image, for example computing a 2D image or a holographic image from the information contained in the plenoptic data.
The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium known in the art. Some examples of storage media that may be used include random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs and across multiple storage media. A software module may consist of an executable program, a portion, routine or library used in a complete program, a plurality of interconnected programs, an "app" executed by many smartphones, tablets or computers, a widget, a Flash application, a portion of HTML code, etc. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A database may be implemented as any structured collection of data, including an SQL database, a collection of XML documents, a semantic database, a set of information made available over an IP network, or any other suitable structure.
Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.

Claims (28)

1. A method, comprising the steps of:
retrieving data (100) representing a light field with a plenoptic capture device (4);
executing program code for matching the retrieved data with corresponding reference data (101);
executing program code for retrieving at least one annotation (61, 63, 64) in a plenoptic format associated with an element of said reference data (102);
executing program code for generating annotated data in a plenoptic format from said retrieved data and said annotation (103).
2. The method of claim 1, further comprising:
selecting a viewing direction (105);
rendering from said viewing direction a view (107) corresponding to the annotated data,
wherein the representation of said annotation (61) depends on said viewing direction.
3. The method of claim 1, further comprising:
rendering from a first viewing direction a first view (104) corresponding to the annotated data;
selecting a second viewing direction (105);
rendering from said second viewing direction a second view (107) corresponding to the annotated data;
wherein the representation of said annotation (61, 61') changes between said first view and said second view.
4. The method of one of claims 1 to 3, further comprising:
associating a first annotation (61) with a first position and a first direction;
associating a second annotation (64) with said first position and a second direction;
rendering a view corresponding to the annotated data;
selecting (105) between a first and a second viewing direction;
wherein the rendered view includes said first annotation but not said second annotation if the first viewing direction is selected, or includes said second annotation but not said first annotation if the second viewing direction is selected.
5. The method of one of claims 1 to 4, further comprising:
rendering a first view corresponding to reference data in a plenoptic format and to a first viewing direction;
associating an annotation with an element in said first view;
rendering a second view corresponding to said reference data in a plenoptic format and to a second viewing direction;
associating an annotation with said element in said second view;
interpolating the annotation of said element in intermediate views between said first viewing direction and said second viewing direction.
6. The method of claim 5, further comprising the step of computing, from said first view and said second view, the annotation in a plenoptic format for the intermediate views.
7. The method of one of claims 1 to 6, further comprising:
rendering a first view (104) corresponding to the annotated data and to a first focal distance;
modifying the focal distance (106);
rendering a second view (107) corresponding to the annotated data and to the modified focal distance;
wherein the representation of said annotation (61) changes between said first view and said second view.
8. The method of claim 7, further comprising:
associating a first annotation (61) with a first position and a first depth;
associating a second annotation (63) with said first position and a second depth;
rendering a first view (104) corresponding to the annotated data;
selecting (106) between a first and a second focal distance;
rendering a second view (107) which includes said first annotation (61) but not said second annotation (63) if the first focal distance is selected, or which includes said second annotation but not said first annotation if the second focal distance is selected.
9. The method of one of claims 1 to 8, wherein at least one of said annotations is a sound attached to a position and associated with a particular direction.
10. The method of one of claims 1 to 9, wherein at least one of said annotations is a video.
11. The method of one of claims 1 to 10, wherein at least one annotation acts as a filter for changing the direction of rays at a given position in the plenoptic space.
12. The method of claim 11, wherein one of said annotations modifies the direction of rays.
13. The method of claim 12, wherein one of said annotations modifies the texture of an object or the properties of a surface.
14. The method of one of claims 1 to 13, wherein said annotation is defined by an array defining, at various points of the plenoptic space, directions of rays or modifications of directions of rays.
15. The method of one of claims 2 to 14, wherein rendering comprises determining, depending on the depth of an element determined from the directions of the rays corresponding to said element, whether the annotated element of the retrieved light field is occluded, or whether the annotation occludes an element of the retrieved light field.
16. The method of one of claims 2 to 15, wherein rendering comprises retrieving one annotation in a plenoptic format and applying this annotation to a plurality of successively retrieved light fields in a stream of retrieved light fields.
17. The method of one of claims 2 to 16, wherein rendering comprises merging rays of the annotation with rays corresponding to the retrieved data.
18. An apparatus (4) for capturing and annotating data corresponding to a scene, comprising:
a plenoptic capture device (41) for capturing data representing a light field;
a processor (400);
a display (40);
program code for causing said processor, when executed, to retrieve at least one annotation (61, 63, 64) in a plenoptic format associated with an element of the data captured with said plenoptic capture device (41), and to render on said display (40) a view generated from the captured data and including said at least one annotation.
19. The apparatus of claim 18, wherein said program code further comprises a refocus module (402) allowing a user to refocus said view, the presentation of said annotation being changed depending on the selected focal distance.
20. The apparatus of one of claims 18 or 19, wherein said program code further comprises a viewpoint selection module (403) allowing a user to change the viewpoint used for said rendering, the presentation of said annotation being changed depending on the selected viewpoint.
21. An apparatus (5) for determining annotations, comprising:
a processor (51);
a storage (50);
program code for causing said processor, when executed, to receive data representing a light field, to match said data with reference data, to determine from said storage an annotation (61, 63, 64) in a plenoptic format associated with said reference data, and to send to a remote device (4) either an image annotated with said annotation in a plenoptic format or the annotation in a plenoptic format.
22. The apparatus of claim 21, wherein said program code further comprises a module (510) for adding annotations in a plenoptic format and for associating them with positions and viewing angles in said reference data.
23. The apparatus of claim 22, further comprising a memory for storing annotations as arrays of ray directions, or of modifications of ray directions, at various points of the plenoptic space.
24. A method for attaching an annotation to a reference image in a plenoptic format, comprising:
presenting said reference image (150) in a plenoptic format with a viewer;
selecting an annotation (151);
selecting with said viewer a position (152) and one or more directions (153) for said annotation, said annotation being visible from said one or more directions;
associating (154) in a memory said position and said directions with said annotation in a plenoptic format and with said reference image.
25. The method of claim 24, comprising associating a plurality of annotations with a single position but with a plurality of different directions.
26. The method of one of claims 24 to 25, further comprising:
rendering a first view corresponding to reference data in a plenoptic format and to a first viewing direction;
associating a first annotation with an element in said first view;
rendering a second view corresponding to said reference data in a plenoptic format and to a second viewing direction;
associating a second annotation, different from said first annotation, with said element in said second view.
27. The method of one of claims 24 to 26, further comprising:
rendering a first view corresponding to reference data in a plenoptic format and to a first viewing direction;
associating an annotation with an element in said first view;
rendering a second view corresponding to said reference data in a plenoptic format and to a second viewing direction;
associating an annotation with said element in said second view;
interpolating the annotation of said element in intermediate views between said first viewing direction and said second viewing direction.
28. An apparatus for attaching an annotation to a reference image in a plenoptic format, comprising:
a processor;
program code for causing said processor to present said reference image (150) in a plenoptic format with a viewer, and to allow a user to select an annotation (151) as well as a position (152) and one or more directions (153) for said annotation, said annotation being visible from said one or more directions;
a memory for storing said annotation, said position and said directions.
CN201280077894.3A 2012-12-21 2012-12-21 Method and apparatus for adding annotations to a plenoptic light field Pending CN104969264A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/076643 WO2014094874A1 (en) 2012-12-21 2012-12-21 Method and apparatus for adding annotations to a plenoptic light field

Publications (1)

Publication Number Publication Date
CN104969264A true CN104969264A (en) 2015-10-07

Family

ID=47553021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280077894.3A Pending CN104969264A (en) 2012-12-21 2012-12-21 Method and apparatus for adding annotations to a plenoptic light field

Country Status (5)

Country Link
EP (1) EP2936442A1 (en)
JP (1) JP2016511850A (en)
KR (1) KR20150106879A (en)
CN (1) CN104969264A (en)
WO (1) WO2014094874A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10921896B2 (en) 2015-03-16 2021-02-16 Facebook Technologies, Llc Device interaction in augmented reality

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3059949A1 (en) 2015-02-23 2016-08-24 Thomson Licensing Method and apparatus for generating lens-related metadata
EP3099077B1 (en) * 2015-05-29 2020-07-15 InterDigital CE Patent Holdings Method for displaying a content from 4d light field data
EP3151534A1 (en) 2015-09-29 2017-04-05 Thomson Licensing Method of refocusing images captured by a plenoptic camera and audio based refocusing image system
JP7209474B2 (en) * 2018-03-30 2023-01-20 株式会社スクウェア・エニックス Information processing program, information processing method and information processing system
US11182872B2 (en) 2018-11-02 2021-11-23 Electronics And Telecommunications Research Institute Plenoptic data storage system and operating method thereof
KR102577447B1 (en) 2018-11-02 2023-09-13 한국전자통신연구원 Plenoptic data storage system and operating method thereof
US10565773B1 (en) 2019-01-15 2020-02-18 Nokia Technologies Oy Efficient light field video streaming
JP2022102041A (en) * 2020-12-25 2022-07-07 時男 後藤 Three-dimensional annotation drawing system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008134901A1 (en) * 2007-05-08 2008-11-13 Eidgenössische Technische Zürich Method and system for image-based information retrieval

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6009188A (en) 1996-02-16 1999-12-28 Microsoft Corporation Method and system for digital plenoptic imaging
US8432414B2 (en) 1997-09-05 2013-04-30 Ecole Polytechnique Federale De Lausanne Automated annotation of a view
JP2002098548A (en) * 2000-09-22 2002-04-05 Casio Comput Co Ltd Guide information transmitter and recording media
JP2006255021A (en) * 2005-03-15 2006-09-28 Toshiba Corp Image display device and method
US20120127203A1 (en) * 2010-11-18 2012-05-24 Canon Kabushiki Kaisha Mixed reality display

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008134901A1 (en) * 2007-05-08 2008-11-13 Eidgenössische Technische Zürich Method and system for image-based information retrieval

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HIROSHI KAWASAKI et al.: "Image-based rendering for mixed reality", Image Processing *
INA FRIED: "With New Features, Lytro Aims to Show Its Futuristic Camera Is No One-Trick Pony", http://allthingsd.com/20121115/with-new-features-lytro-aims-to-show-its-futuristic-camera-is-no-one-trick-pony/ *
MARC LEVOY et al.: "Light Field Rendering", Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10921896B2 (en) 2015-03-16 2021-02-16 Facebook Technologies, Llc Device interaction in augmented reality

Also Published As

Publication number Publication date
WO2014094874A1 (en) 2014-06-26
KR20150106879A (en) 2015-09-22
JP2016511850A (en) 2016-04-21
EP2936442A1 (en) 2015-10-28

Similar Documents

Publication Publication Date Title
US11250631B1 (en) Systems and methods for enhancing and developing accident scene visualizations
KR102417645B1 (en) AR scene image processing method, device, electronic device and storage medium
US20140181630A1 (en) Method and apparatus for adding annotations to an image
CN104969264A (en) Method and apparatus for adding annotations to a plenoptic light field
US9542778B1 (en) Systems and methods related to an interactive representative reality
CN107957774B (en) Interaction method and device in virtual reality space environment
CN107957775B (en) Data object interaction method and device in virtual reality space environment
KR101722177B1 (en) Method and apparatus for hybrid displaying of VR(virtual reality) contents
CN102473324B (en) Method for representing virtual information in real environment
KR101854402B1 (en) Method of construction projection based on virtual reality, computer readable storage media containing program for executing the same, and application stored in media for executing the same
US10084994B2 (en) Live streaming video over 3D
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
KR101697713B1 (en) Method and apparatus for generating intelligence panorama VR(virtual reality) contents
Kasapakis et al. Augmented reality in cultural heritage: Field of view awareness in an archaeological site mobile guide
CN104798128A (en) Annotation method and apparatus
US20190244431A1 (en) Methods, devices, and systems for producing augmented reality
US20180239514A1 (en) Interactive 3d map with vibrant street view
KR20130137076A (en) Device and method for providing 3d map representing positon of interest in real time
JP2023503247A (en) METHOD AND SYSTEM FOR SEARCHING IMAGES USING ROTATING GESTURE INPUT
US10956981B1 (en) Systems and methods for visualizing an accident scene
Netek et al. From 360° camera toward to virtual map app: Designing low‐cost pilot study
KR20050061857A (en) 3d space modeling apparatus using space information and method therefor
Ünal et al. Location based data representation through augmented reality in architectural design
CN109923540A (en) The gesture and/or sound for modifying animation are recorded in real time
KR102443049B1 (en) Electric apparatus and operation method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151007