WO2015016798A2 - A system for an augmented reality application - Google Patents
- Publication number
- WO2015016798A2 (PCT application No. PCT/TR2014/000293)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
Definitions
- the control circuit (9) is a unit that, in its most basic state, enables the commands which enable the input, output and management processes related to the depth sensor (2) and the projector (3) to be carried out to be sent to the computer (6).
- the control circuit (9) also enables the commands which enable the input, output and management processes related to the spectrophotometer (10) to be carried out to be sent to the computer (6).
- in an embodiment wherein the computer (6) is located on a remote server or a cloud system, the control circuit (9) carries out the tasks of conveying a request for data transfer from the depth sensor (2) to the computer (6) by means of the data interchange unit (4) and of conveying a request to the computer (6) by means of the data interchange unit (4) for receiving the information on the image to be projected from the projector (3).
- Spectrophotometer (10) is the unit that carries out color measurements on the image projected and transmits information on these measurements to the computer (6) by means of the data interchange unit (4). Information on the color measurements carried out by the spectrophotometer (10) is used to ensure the color and color tones of the projected image are projected onto the objects and/or spaces in accordance with how they are designed on the control platform (7).
- when the spectrophotometer (10) is located within the system (1), a number of colors and color tones belonging to a wide range of colors is preferably projected onto the objects and/or spaces under the control of the control platform (7) prior to the image projection process; the measurements of these colors and color tones on the objects and/or spaces are collected from the spectrophotometer (10) and matched by the control platform (7) with the information of the actual colors that are desired to be projected, and by this means a profile or calibration table is formed within the control platform (7).
- the information regarding which color or color tone is to be transmitted to the projective overlay platform (8), so that the colors selected in the design using the control platform (7) are displayed correctly on the objects and/or spaces, is determined by the control platform (7).
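A minimal sketch of how such a spectrophotometer-based profile could be built and applied is given below; it assumes an affine RGB correction model fitted with least squares, and all function names, patch counts and values are illustrative assumptions rather than the patent's actual procedure.

```python
"""Hedged sketch: building a color-calibration profile from spectrophotometer
readings, roughly as described above. The patch values, the linear model and
the function names are illustrative assumptions."""

import numpy as np

def fit_color_profile(requested_rgb: np.ndarray, measured_rgb: np.ndarray) -> np.ndarray:
    """Fit an affine map (3x4 matrix) from requested colors to the colors the
    spectrophotometer actually measured on the object/space."""
    n = requested_rgb.shape[0]
    # Augment with a constant term so the model can absorb offsets (ambient light etc.).
    A = np.hstack([requested_rgb, np.ones((n, 1))])          # n x 4
    M, *_ = np.linalg.lstsq(A, measured_rgb, rcond=None)      # 4 x 3
    return M.T                                                # 3 x 4 "profile"

def precompensate(desired_rgb: np.ndarray, profile: np.ndarray) -> np.ndarray:
    """Solve for the color to send to the projector so that the *measured*
    color matches the color chosen in the design."""
    M, offset = profile[:, :3], profile[:, 3]
    out = np.linalg.solve(M, (desired_rgb - offset).T).T
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # Colors projected during the calibration pass (control platform side) ...
    requested = np.random.rand(24, 3)
    # ... and what the spectrophotometer reported back for each of them.
    measured = requested * np.array([0.9, 0.8, 0.85]) + 0.05
    profile = fit_color_profile(requested, measured)
    print(precompensate(np.array([[0.2, 0.5, 0.7]]), profile))
```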
- an augmented reality application is formed by projecting images onto the environment and objects.
- the system (1) carries out three basic steps utilizing its elements as the said processes are realized. These steps are the scanning, designing and projecting steps.
- the scanning step consists of the sub-steps of the depth sensor (2) scanning the environment and transmitting the obtained data to the control platform (7), the control platform (7) forming a 3 dimensional model, and the static and non-static objects being determined.
- the scanning process may commence after the depth sensor (2) is calibrated. After it starts to obtain depth frames, the depth sensor (2) transmits these data by means of the data interchange unit (4) to the control platform (7) that operates on the computer (6) and the data obtained by the depth sensor (2) is used by the control platform (7) in the construction mode, i.e. for the formation of a 3 dimensional model.
- the 3 dimensional model created is transformed into a polygon mesh representation by the control platform (7).
- the control platform (7) may utilize a transformation algorithm found in the known state of the art.
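As one example of a state-of-the-art transformation algorithm, the sketch below converts a volumetric model into a polygon mesh with the marching cubes implementation from scikit-image; the synthetic signed-distance volume and the grid resolution are assumptions made only for illustration, not the algorithm the patent mandates.

```python
"""Hedged sketch: converting a volumetric (e.g. signed-distance / occupancy)
model into a polygon mesh with marching cubes, one state-of-the-art choice for
the transformation step mentioned above. The TSDF volume here is synthetic."""

import numpy as np
from skimage import measure  # pip install scikit-image

# Synthetic signed-distance volume of a sphere standing in for the scanned model.
grid = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
tsdf = np.sqrt(x**2 + y**2 + z**2) - 0.6   # zero level set = surface

# Extract the zero level set as a triangle mesh (vertices + face indices).
verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0)
print(f"polygon mesh: {len(verts)} vertices, {len(faces)} triangles")
```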
- the static and non-static objects of this model are either automatically determined by the control platform (7) or determined manually by the user (K) by means of the control platform (7).
- the static objects may be elements such as walls and surfaces found in the environment, while the non-static objects may be elements found in the environment that are mobile or are likely to be mobile. Data regarding the 3 dimensional model formed and the static and non-static objects, which enable these objects to be scanned later on, are stored in the memory of the control platform (7).
- the top and side vectors of the scanned object may be determined by the user (K) or by sensors and/or an algorithm developed for this purpose. These vectors are stored inside the control platform (7) and may be used to support the algorithms or to develop new algorithms.
- after the scanning process, i.e. scanning of the environment, creating a 3 dimensional (volumetric) model of the environment, and identifying the static and non-static objects, is completed, the depth frames still being received from the depth sensor (2) are only used by the control platform (7) for the tracing mode.
- the designing step consists of the subdivision of the 3 dimensional model and the assignment and arrangement of the material to be projected onto the objects on the control platform (7).
- the 3 dimensional model formed by the control platform (7) is regarded as a background and the background is divided into more than one subdivision.
- the subdivision process may be carried out automatically, semi-automatically or manually by means of the control platform (7).
- a better appearance may be provided by utilizing edge blending methods to blend the edges between the subdivisions.
- the control platform (7) carries out a 2 dimensional surface parameterization process for each object.
- a different material assignment is carried out on each object using the material data stored in the memory of the control platform.
- the materials assigned may be of different properties and data on characteristics such as diffuse, specular, bump may be found for each object and may exhibit differences.
- a preview of how the materials assigned appear on the objects may be shown to the user (K) by means of a virtual scene provided by the graphical interface of the control platform (7).
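The sketch below illustrates, under assumed names and fields, how per-object 2 dimensional surface parameters and material data (diffuse, specular, bump) might be stored and assigned in the control platform's memory; it is a simplified illustration, not the patent's data model.

```python
"""Hedged sketch of a per-object material assignment, as described above.
All class names, fields and file paths are illustrative assumptions."""

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Material:
    name: str
    diffuse_map: str                  # path to a pattern image, e.g. a fabric texture
    specular: float = 0.1             # scalar specular strength
    bump_map: Optional[str] = None    # optional bump/brightness map

@dataclass
class SceneObject:
    object_id: str
    static: bool                                              # static vs non-static flag
    uv_params: Dict[str, str] = field(default_factory=dict)   # 2D surface parameterization
    material: Optional[Material] = None                       # assignment may differ per object

# The designing step assigns a different material to each object to be overlaid.
couch = SceneObject("couch", static=False, uv_params={"atlas": "couch_uv.png"})
couch.material = Material("herringbone", diffuse_map="fabric_herringbone.png",
                          specular=0.05, bump_map="fabric_bump.png")
wall = SceneObject("wall_north", static=True)
print(couch.material.name, "assigned to", couch.object_id)
```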
- the image is transmitted to the projector (3) by means of the projective overlay platform (8), computer (6) and data interchange unit (4) and is projected onto the environment and/or the objects.
- the projection process is carried out.
- the 3 dimensional model, the data on the static and non-static objects, the data on the subdivision process, the pattern maps and the data on the material assigned to each object, which are stored on the control platform (7), are transmitted to the projective overlay platform (8).
- the projective overlay platform (8) carries out processes and conversions on the data so as to prevent distortions related to the projection.
- the processes carried out by the projective overlay platform (8) are ray tracing based methods.
- the image to be projected by the projector (3) is finalized and is transmitted from the projective overlay platform (8) operating on the computer (6) to the projector (3) by means of the computer (6) and the data interchange unit (4).
- the projective overlay platform (8) carrying out real time processes and conversions prevents distortions from forming in the projected images due to movements of the stabilizing structure (5).
- the system (1) of the invention may be used to project patterns representing appearance of fabric onto furniture.
- the stabilizing structure (5) is moved, in other words, the depth sensor (2) obtains depth frames in order to form a depth map of the environment.
- the data obtained by the depth sensor (2) is transmitted by means of the data interchange unit (4) to the control platform (7) operating on the computer (6) and the control platform (7) creates and stores a 3 dimensional model representation of the environment using these data.
- by means of the control platform (7), the static and non-static objects within this environment are also identified. For example, while the walls and the surface of the environment are identified as static objects, the couch within the environment is stored as a non-static object, and these data are stored on the control platform (7).
- the couch, which is defined as a non-static object, may be identified even when it is in another environment.
- the environment is subdivided by the control platform (7) and 2 dimensional surface parameters are identified for the couch to enable overlay maps to be created.
- the material to be projected onto the couch, for example a fabric pattern image, is assigned by means of the control platform (7), and a preview image in the form of a virtual scene is provided by means of the graphic interface provided by the control platform (7).
- the user (K) may move or turn these overlay maps to vary the fabric overlay.
- the data created as a result of the trace and design processes are once again transmitted to the memory of the projective overlay platform (8) operating on the computer (6); the projective overlay platform (8) carries out processes on the image data to ensure that the distortions that may appear as a result of the projection are prevented and transmits the image to be projected to the projector (3) by means of the data interchange unit (4).
- the pattern desired on the couch is displayed in the real environment.
- when a new object, for example a cushion, is placed on the couch, the control platform (7) is able to subtract the volume of the cushion from the 3 dimensional model and ensure that the projection remains only on the couch.
Abstract
This invention is related to a system (1), wherein a depth sensor (2) and a projector (3) are used, which, in general terms, enables an augmented reality application to be provided and, particularly, a spatial augmented reality application to be provided. The system (1) of the invention consists of a depth sensor (2), a projector (3), a data interchange unit (4), a stabilizing structure (5), a computer (6), a control platform (7) and a projective overlay platform (8).
Description
A SYSTEM FOR AN AUGMENTED REALITY APPLICATION
Technical Field
This invention is related to a system, wherein a depth sensor and a projector are used, which, in general terms, enables an augmented reality application to be provided and, particularly, a spatial augmented reality application to be provided.
Background of the Invention
Conventional projectors are designed to operate on flat surfaces such as walls, curtains and special projector screens. With such projectors, it is not possible to form advanced images that are fully compatible with complex surfaces unless a particular computer aided method is utilized.
Particular computer aided methods provide a wide range of opportunities regarding the use of traditional projectors on real life objects. Such use enriches the appearance of the objects, and this enrichment is carried out in a much more cost effective manner because the overlay is done by means of image projection rather than with real materials.
In the prior art, one of the special computer aided methods used in enriching the use of projectors is the Projection Mapping application. In one of the most common and well-known examples of this application, an image projection process is carried out on a building. During this process, the image of the building is captured by means of a camera and the captured image is processed using a computer program to add an extra layer to the image. Then, a projector is placed right where the image of the building was captured and the projector projects the image from the computer program onto the building. One of the most important disadvantages of such applications is that at the slightest movement of the projector, the projected image is no longer projected on the right spot. In each case where the projector is moved, all of the processes need to be repeated. Another disadvantage of such applications is that images may not appear realistic because the application does not utilize a graphics engine. Due to these disadvantages, such applications are not used in cases where the projector can be moved during image projection and where a realistic image projection is a necessity.

An application in the state of the art is the computer implemented method described in patent document number US7068274. This method discloses an embodiment wherein a 3 dimensional physical object is animated by means of an image. Using the said method, the image is projected onto the 3 dimensional object to provide the desired appearance of the object. The most important shortcoming of the method described in patent document number US7068274 is that the manual calibration process and/or modeling process carried out prior to the image projection process needs to be repeated even at the slightest change in position (movement) of the projector or other parts of the assembly. In terms of this aspect, it can be understood that the method described in patent document number US7068274 does not allow for the image to be properly projected onto the object, without requiring recalibration, in the event of the projector or other parts of the assembly being used in a mobile manner or exhibiting changes in position.
Another application in the state of the art, patent document number US20110205341, discloses an architecture in which more than one depth camera and more than one projector are used. The said architecture allows for models of the objects in the architectural environment to be acquired and for graphics that mainly serve as a user interface to be projected onto the same objects. One of the main disadvantages of the architecture described in patent document number US20110205341 is that the manual calibration process and/or modeling process carried out prior to the image projection process needs to be repeated even at the slightest change in position (movement) of the projector or other parts of the assembly. Moreover, as the architecture described in patent document number US20110205341 serves as a user interface graphic that interacts with the user rather than being an architecture that overlays objects with an image, it requires a flat surface (wall, curtain, etc.). This situation suggests that, for objects on which 3 dimensional projection is required, the architecture described in patent document number US20110205341 is not suited when the said objects do not have a flat surface.
Patent document number US2010315491 in the state of the art describes a method for digitally augmenting or enhancing the surface of a food product, more particularly a cake. The method includes generating an augmentation media file based on a projection surface of the food product, such as a digital movie or image that is mapped to the 3D topography of the projection surface and that is projected on the food product using a properly aligned projector. One of the main disadvantages of the method described in said patent document number US2010315491 is that the manual calibration process and/or modeling process carried out prior to the image projection process needs to be repeated even at the slightest change in position (movement) of the projector or other parts of the assembly. Moreover, the method described in patent document US2010315491 does not contain a control platform that provides for processes such as the selection and editing of the image to be projected onto the cake and which carries out interaction with the user via a graphical interface. This situation suggests that the method described in patent document US2010315491 is not suitable for use in applications in which user preferences regarding the image to be projected onto the object are of importance.
Summary of the Invention
The objective of the invention is to provide a system that enables an augmented reality application to be realized by means of a depth sensor and a projector. Another objective of the invention is to provide a system that stabilizes the positions of the depth sensor and the projector with regards to each other during any movement by holding them in an integrated manner in a stabilizing structure such as an outer casing. Another objective of the invention is to provide a system that stabilizes the positions of the depth sensor and the projector after they are calibrated with regards to each other, so that they do not need to be recalibrated during any movement, by holding them in an integrated manner in a stabilizing structure such as an outer casing.
Another objective of the invention is to provide a system using a method based on ray tracing which natively prevents projection based distortion of images projected onto objects and/or spaces during the projection process. Another objective of the invention is to provide a system that enables colors in images projected onto objects and/or spaces to be visualized accurately during the projection process.
Detailed Description of the Invention
"A system for an augmented reality application" provided in order to achieve the objective of this invention is shown in the attached figures which have been described below;
Figure- 1 Schematic block diagram of the system of the invention.
Figure-2 Schematic block diagram of an embodiment of the system of the invention.
Figure-3 Schematic block of another embodiment of the system of the invention.
The parts shown in the figures have been individually numbered as shown below.
1. System
2. Depth sensor
3. Projector
4. Data interchange unit
5. Stabilizing structure
6. Computer
7. Control platform
8. Projective overlay platform
9. Control circuit
10. Spectrophotometer
K. User
System (1) that enables an augmented reality application to be provided by means of projecting an image onto an environment and/or objects consists of the following:
at least one depth sensor (2) that carries out the task of detecting the environment and objects,
at least one projector (3) that is integrated with the depth sensor (2) and which carries out the task of projecting images,
at least one data interchange unit (4) that, in its most basic state, enables the depth sensor (2) and the projector (3) to exchange data with other related elements of the system (1),
at least one stabilizing structure (5) that holds the depth sensor (2) and the projector (3) together in an integrated manner,
at least one computer (6) with a processor that evaluates the data acquired by means of the data interchange unit (4) and that transmits information related to the image to be projected to the projector (3) by means of the data interchange unit (4),
at least one control platform (7) that enables the environment to be modelled, the environment to be traced, the objects to be defined, the objects to be traced, and the image to be projected onto the objects to be designed using the data received from the depth sensor (2),
at least one projective overlay platform (8) that is a custom graphics engine that processes the images to be projected by the projector (3) in a manner that prevents their physical distortion (Figure 1).
In an embodiment of the invention, the system (1) that enables an augmented reality application to be provided by means of projecting an image onto an environment and/or objects further consists of: at least one control circuit (9) that, in its most basic state, enables the commands which enable the input, output and management processes related to the depth sensor (2) and the projector (3) to be carried out to be transmitted to the computer (6) (Figure 2).
In an embodiment of the invention, the system (1) that enables an augmented reality application to be provided by means of projecting an image onto an environment and/or objects further consists of: at least one spectrophotometer (10) that enables color calibration for the image projected to be carried out (Figure 3).
In an embodiment of the invention, the system (1) that enables an augmented reality application to be provided by means of projecting an image onto an environment and/or objects further consists of both the control circuit (9) and the spectrophotometer (10).
The depth sensor (2) is the unit that carries out the task of detecting the environment and the objects. The depth sensor (2) is screwed onto the projector (3) or the stabilizing structure (5) or to both in a manner such that its position with regards to the projector (3) does not change during a movement of the stabilizing structure (5). During the operation of the system (1), the depth sensor (2) is first calibrated together with the projector (3) and then used as calibrated. The intrinsic and extrinsic parameters of the depth sensor (2) and the projector (3) are determined during the calibration.
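One common way to obtain such intrinsic and extrinsic parameters is to treat the projector as an inverse camera and run a stereo-style calibration over known pattern correspondences; the hedged sketch below uses OpenCV for this purpose and is an illustrative approach under stated assumptions, not the calibration procedure prescribed by the patent.

```python
"""Hedged sketch: jointly calibrating a depth sensor's camera and a projector
by treating the projector as an inverse camera. Correspondence acquisition
(e.g. projected structured-light patterns on a board) is assumed to be done
elsewhere; only the parameter estimation is shown."""

import cv2
import numpy as np

def calibrate_sensor_projector(object_pts, cam_pts, proj_pts, cam_size, proj_size):
    """object_pts: list of (N,3) float32 board points, one array per pose.
    cam_pts / proj_pts: matching (N,2) float32 points seen by the depth sensor's
    camera and addressed by the projector, respectively."""
    # Intrinsics of each device from the same correspondences.
    _, K_cam, d_cam, _, _ = cv2.calibrateCamera(object_pts, cam_pts, cam_size, None, None)
    _, K_proj, d_proj, _, _ = cv2.calibrateCamera(object_pts, proj_pts, proj_size, None, None)
    # Extrinsics (rotation R, translation T) of the projector relative to the camera.
    _, K_cam, d_cam, K_proj, d_proj, R, T, _, _ = cv2.stereoCalibrate(
        object_pts, cam_pts, proj_pts, K_cam, d_cam, K_proj, d_proj, cam_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_cam, d_cam, K_proj, d_proj, R, T

# Because the stabilizing structure fixes R and T mechanically, this calibration
# does not need to be repeated when the whole structure is moved.
```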
The depth sensor (2) transmits the data it acquires from the environment to the control platform (7) operating on the computer (6) by means of the data interchange unit (4).
In a preferred embodiment of the invention, the depth sensor (2) is a unit that is operated by the control platform (7) and that can be used in two separate modes. These are the construction and the tracing modes. In both modes, the depth sensor (2) continuously captures the depth frames.
The projector (3) is a unit that is integrated with the depth sensor (2) and which carries out the task of projecting images. The projector (3) is integrated with the depth sensor (2) or the stabilizing structure (5) or both in a manner such that its position with regards to the depth sensor (2) does not change during a movement of the stabilizing structure (5). During the operation of the system (1), the projector (3) is first calibrated together with the depth sensor (2) and then used in a calibrated manner. The intrinsic and extrinsic parameters of the depth sensor (2) and the projector (3) are determined during the calibration.
The projector (3) receives the data related to the image it will project onto an environment and/or objects from the projective overlay platform (8) operating on the computer (6) by means of the data interchange unit (4).
The data interchange unit (4) is a unit that, in its most basic state, enables the depth sensor (2) and the projector (3) to exchange data with the control platform (7) and the projective overlay platform (8) operating on the computer (6). When the system (1) also consists of a control circuit (9), the data interchange unit (4) is also a unit that enables the control circuit (9) to exchange data with the control platform (7) and the projective overlay platform (8) operating on the computer (6). When the system (1) also consists of a spectrophotometer (10), the data interchange unit (4) is also a unit that enables the spectrophotometer (10) to exchange data with the control platform (7) and the projective overlay platform (8) operating on the computer (6).
In different embodiments of the invention, the data interchange unit (4) may be any unit such as an Ethernet port, wireless network card, VGA port, USB port, HDMI port, Firewire (IEEE 1394) port that one skilled in the art may use for data and image transfer locally or over a network. A single stabilizing structure (5) may consist of one or more data interchange units (4) and in the case it consists of more than one, these may be of the same or of different type.
What type of unit the data interchange unit (4) will be among the ones specified above in an embodiment of the invention is based on how the computer (6) carries out the data exchange. For example, in an embodiment wherein the computer (6) is located on a remote server or a cloud system, i.e. it is a computer (6) that communicates with the depth sensor (2) and the projector (3) over a network, the data interchange unit (4) may consist of an Ethernet port and/or wireless network card enabling communication over the network. In another embodiment wherein the computer (6) is accessed locally, i.e. it is a computer (6) that communicates directly with the depth sensor (2) and the projector (3), the data interchange unit (4) may consist of one or more ports such as a VGA port, USB port, HDMI port, and Firewire port. In another embodiment of the invention, the stabilizing structure (5) may contain more than one data interchange unit (4) that enables the depth sensor (2) and the projector (3) to communicate with the computer (6) both directly and over the network.
The stabilizing structure (5), in the most basic state of the invention, is a unit that holds the depth sensor (2) and the projector (3) together in an integrated manner. In one embodiment of the invention, the stabilizing structure (5) is a unit that holds the depth sensor (2), the projector (3) and the data interchange unit (4) together in an integrated manner. In a preferred embodiment of the invention, the stabilizing structure (5) is a metal plate and an outer casing around it. The stabilizing structure (5) contains holes and openings which enable the depth sensor (2), the projector (3), the data interchange unit (4) and the other system (1) elements it holds in other embodiments to interact with the environment, the objects and the other system (1) elements. In the event the system (1) contains a control circuit (9), the stabilizing structure (5) is the unit that can hold the control circuit (9). In the event the system (1) contains a spectrophotometer (10), the stabilizing structure (5) is the unit that can hold the spectrophotometer (10).

The computer (6) is a unit consisting of at least one processor that evaluates the data received by means of the data interchange unit (4) and that transmits information related to the image to be projected to the projector (3) by means of the data interchange unit (4). The term computer (6) is used to describe all devices with computer properties. The computer (6) is the unit that provides the environment necessary for the control platform (7) and the projective overlay platform (8) to operate. In one embodiment of the invention, in addition to a CPU (Central Processing Unit), the computer (6) may also include a GPU (Graphics Processing Unit) and/or any customized microprocessor card. In one embodiment of the invention, the computer (6) is a unit that is located inside the stabilizing structure (5) and that provides for interaction with the user (K) by means of the environment elements that can be reached from within the opening of the stabilizing structure (5). An example embodiment of the invention is one where a touchscreen monitor or a screen and keyboard is found on top of the stabilizing structure (5), the user (K) carries out interaction with the computer (6) by means of the touchscreen monitor or the screen and keyboard, and the computer (6) carries out all data exchange processes within the stabilizing structure (5).

The control platform (7) is a graphics engine unit that enables the environment to be modelled, the environment to be traced, the objects to be defined, the objects to be traced, and the image to be projected onto the objects to be designed using the data received from the depth sensor (2). The control platform (7) operates on the computer (6) and, in a preferred embodiment of the invention, also provides a graphical interface that can be used together with the environment elements in order to provide interaction between the user (K) and the system (1). The control platform (7) is an integrated or external 3 dimensional (3D) modeling platform. The interaction between the control platform (7) and the user (K), in its most basic state, is defined as the user (K) displaying the 3 dimensional model, the user (K) designing the image to be projected by means of the control platform (7), and the user (K) carrying out a preview from the control platform (7).
The control platform (7) is a unit that evaluates the data received from the depth sensor in two separate modes. These are the construction and the tracing modes. In the construction mode, the depth sensor (2) transmits the depth frames it acquires by scanning the environment to the control platform (7), while the control platform (7) merges this data together using the sensor fusion algorithms and forms a model. The sensor fusion algorithms used in this section may be one of the sensor fusion algorithms found in the current state of the art. In the tracing mode, the depth sensor (2) transmits the depth frames it obtains to the control platform (7), while the control platform (7) uses these data to trace the changes in position and direction of the depth sensor (2) and other "non-static" objects in the environment. In the tracing mode, the control platform (7) traces the changes in position and direction within the model it forms in the construction mode.
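A deliberately simplified stand-in for such construction-mode fusion is sketched below: each depth frame is back-projected with assumed sensor intrinsics, transformed by the current pose estimate, and accumulated into an occupancy voxel grid. A real system would typically use TSDF-style fusion, and every numeric parameter here is an assumption.

```python
"""Hedged sketch: fusing depth frames into a volumetric model of the
environment. The intrinsics, pose, grid extent and resolution are illustrative."""

import numpy as np

def backproject(depth: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    """depth (H,W) in metres -> (N,3) points in the sensor frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def integrate(grid, points_world, origin, voxel_size):
    """Mark voxels of the occupancy grid that contain measured surface points."""
    idx = np.floor((points_world - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] += 1   # simple evidence counter
    return grid

# Example: one synthetic frame fused into a 2 m cube at 2 cm resolution.
grid = np.zeros((100, 100, 100), dtype=np.int32)
pose = np.eye(4)                                  # sensor pose from the tracing mode
depth_frame = np.full((480, 640), 1.5, dtype=np.float32)
pts = backproject(depth_frame, fx=525, fy=525, cx=319.5, cy=239.5)
pts_world = (pose[:3, :3] @ pts.T).T + pose[:3, 3]
grid = integrate(grid, pts_world, origin=np.array([-1.0, -1.0, 0.0]), voxel_size=0.02)
print("occupied voxels:", int((grid > 0).sum()))
```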
When the system (1) contains more than one depth sensor (2), the control platform (7) is a unit that combines the data received from each depth sensor to enable their use in the construction mode and/or the tracing mode.
Moreover, the control platform (7) is also the unit in which static and non-static objects are determined. In one embodiment of the invention, the process of determining static and non-static objects is carried out automatically by the control platform (7), while in another embodiment of the invention, it is carried out by the user (K) using the environment elements of the interface provided by the control platform (7) and the computer (6). Where the static and non-static objects are automatically determined, the control platform (7) determines, by means of feature matching, whether a model of an object stored in its memory and determined as being non-static belongs to any of the objects in the environment, and such an object is determined as being a non-static object.
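The sketch below illustrates the feature-matching idea in a reduced, appearance-based form: ORB descriptors stored for a known non-static object are matched against the current view, and a sufficient number of good matches flags the object as that non-static object. The 2D simplification and the thresholds are assumptions, not the patent's exact method.

```python
"""Hedged sketch: deciding, by feature matching, whether an object already in
memory appears in the current view, so it can be flagged as non-static."""

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def describe(image):
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return descriptors

def matches_stored_object(stored_desc, current_desc, ratio=0.75, min_matches=25):
    """Lowe-style ratio test: enough good matches -> same (non-static) object."""
    if stored_desc is None or current_desc is None:
        return False
    pairs = matcher.knnMatch(stored_desc, current_desc, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < ratio * n.distance]
    return len(good) >= min_matches

# Demo with a synthetic "couch" view observed again in a later frame.
stored_view = (np.random.rand(240, 320) * 255).astype(np.uint8)
current_view = stored_view.copy()
print("recognized as known non-static object:",
      matches_stored_object(describe(stored_view), describe(current_view)))
```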
The control platform (7) is a unit that contains a memory. This memory stores the 3 dimensional model or models formed on it or received in ready form from another source, the 2 dimensional surface parameters for objects, the pattern and brightness maps or the appearance of fabrics under different lights, and data on the material assigned to objects, to be projected onto the objects. When the 3 dimensional model or models stored in the memory are received in ready form from another source, the control platform (7) starts to operate directly in tracing mode without operating in the construction mode.
As the control platform (7) continuously evaluates the data continuously received from the depth sensor (2), it is a unit that can recognize new objects that enter the environment and that can properly update the 3 dimensional model, whether a new projection needs to be applied onto these objects or no projection is to be applied. For this purpose, the control platform (7) uses subtraction algorithms so that the volume covered by these objects is subtracted from the 3 dimensional model. The control platform (7) makes the decision as to whether projection is or is not to be applied onto an object that newly enters the environment by determining, by means of feature matching, whether the model of an object previously added to its memory, and for which a command specifying that no projection is to be applied onto it has been given, belongs to any of the objects newly introduced into the environment.
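A minimal sketch of the volume-subtraction idea follows: voxels newly occupied by an object entering the environment are carved out of the projection model so that no image is projected onto them. The grid representation and the threshold are illustrative assumptions.

```python
"""Hedged sketch of volume subtraction for a newly detected object."""

import numpy as np

def subtract_new_object(model_grid, previous_grid, current_grid, threshold=1):
    """model_grid: boolean voxels the projection is allowed to cover.
    previous/current_grid: occupancy evidence before and after the object appeared."""
    newly_occupied = (current_grid - previous_grid) >= threshold
    return model_grid & ~newly_occupied   # carve the new object's volume out

# Example: a cushion-sized block appears inside the modelled environment.
model = np.ones((100, 100, 100), dtype=bool)
before = np.zeros((100, 100, 100), dtype=np.int32)
after = before.copy()
after[40:55, 40:60, 10:25] += 3           # evidence accumulated for the new object
model = subtract_new_object(model, before, after)
print("voxels still receiving projection:", int(model.sum()))
```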
The projective overlay platform (8) is a custom graphics engine that processes the images to be projected by the projector (3) in a manner that prevents their physical distortion. The projective overlay platform (8) is a unit that takes the design formed on the control platform (7) and enables it to attain the structure necessary for it to be sent to the projector (3) for it to be projected. Moreover, the projective overlay platform (8) is a unit that can carry out real-time rendering.
The projective overlay platform (8) is a unit that contains a memory. The 3 dimensional model or models received from the control platform (7) are stored on this memory. The projective overlay platform (8) is a unit that enables projection to be carried out in a manner similar to real world illumination and human vision models.
The projective overlay platform (8) applies its own method so that the images projected from the projector (3) fall onto the objects or the environment without distortion. The projective overlay platform (8) is a unit based on the ray tracing method; however, it utilizes a customized form of the ray tracing method known in the state of the art.
As the projective overlay platform (8) utilizes this customized method, it manages the formation of each ray to be created by the projector (3) for the image to be projected. By this means, the path that a ray follows in the ray tracing method known in the state of the art, reflecting from a certain point on the object and then reaching the human eye or a camera, is instead used to form a ray that is sent from the projective overlay platform (8) to the said point on the object along the same path.
As a result, while the point of exit of a ray in the ray tracing method known in the state of the art is the point at which it reflects from the object and its point of arrival is the point at which it is detected by the camera, in the projective overlay platform (8) of the system (1) the point of exit of the ray is the point at which the projector (3) sends the ray and the point of arrival is the point at which the ray falls onto the object and/or space. In other words, the projective overlay platform (8) utilizes the ray tracing method known in the state of the art in a reverse manner.
In this embodiment, the relationship between each ray traced in the ray tracing method, whose path is identified, and each ray to be projected from the projector (3) of the system (1) is calculated by the projective overlay platform (8), and each ray is formed by the projective overlay platform (8) in accordance with this calculation. As this relationship is calculated, the projective overlay platform (8) uses the intrinsic and extrinsic parameters of the projector (3) that will send the ray to determine the angle of refraction the ray will exhibit as it exits the lens of the projector (3). By this means, the rays are formed in such a manner that the distortion arising from the lenses of the projector (3) is not inflicted onto the image to be projected onto the object. The paths of the rays are arranged so that they are sent to the correct point on the object, the rays are formed taking this calculation into consideration, and they are transferred by means of the data interchange unit (4) to the projector (3) to enable their projection onto the object. The projective overlay platform (8) carries out this process for each ray that forms the image to be projected onto the object; in other words, it is able to manage the formation of each ray individually and enable its transmission to the projector (3) so that it is projected to the correct point on the object. When the system (1) contains more than one projector (3), the projective overlay platform (8) is a unit that can apply edge blending methods to ensure there is no overlap or distortion in the process of projecting images onto an object and/or space. In one embodiment of the invention, the projective overlay platform (8) and the control platform (7) may operate in such a manner that the preview observed by the user (K) on the graphical interface provided by the control platform (7) is at the same time projected onto the objects and the environment. In one embodiment of the invention, sharing and synchronization of 3 dimensional models between the projective overlay platform (8) and the control platform (7) is carried out.
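The reverse ray tracing described above might be sketched, for illustration only and not as the claimed implementation, as casting one ray per projector pixel from the projector's optical centre using its intrinsic matrix and extrinsic pose, and colouring that pixel with the designed appearance of the surface point the ray hits. The pinhole model, the `intersect` ray/mesh routine and the `shade` material lookup are hypothetical placeholders, and the lens-distortion correction mentioned above is omitted from this sketch.

```python
import numpy as np

def projector_rays(K, T_world_from_proj, width, height):
    """Cast one ray per projector pixel (reverse of camera-side ray tracing)."""
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])      # homogeneous pixel coordinates
    dirs_proj = K_inv @ pix                                      # ray directions in projector frame
    dirs_world = T_world_from_proj[:3, :3] @ dirs_proj           # rotate into the world frame
    dirs_world /= np.linalg.norm(dirs_world, axis=0)
    origin = T_world_from_proj[:3, 3]                            # projector optical centre
    return origin, dirs_world.T.reshape(height, width, 3)

def render_projector_image(K, T_world_from_proj, width, height, intersect, shade):
    """For every projector pixel, find the surface point its ray hits and look up the
    designed appearance there; pixels whose rays hit nothing stay black."""
    origin, dirs = projector_rays(K, T_world_from_proj, width, height)
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            hit = intersect(origin, dirs[y, x])    # placeholder ray/mesh intersection
            if hit is not None:
                image[y, x] = shade(hit)           # designed material colour at that point
    return image
```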
The control circuit (9) is a unit that, in its most basic state, provides for the commands enabling the input, output and management processes related to the depth sensor (2) and the projector (3) to be sent to the computer (6). When the system (1) contains a spectrophotometer (10), the control circuit (9) also provides for the commands enabling the input, output and management processes related to the spectrophotometer (10) to be sent to the computer (6). When the computer (6) is located on a remote server or a cloud system, i.e. when it is a computer (6) that communicates with the depth sensor (2) and the projector (3) over a network, the control circuit (9) carries out the tasks of conveying a request for data transfer from the depth sensor (2) to the computer (6) by means of the data interchange unit (4) and of conveying a request to the computer (6) by means of the data interchange unit (4) for receiving information on the image to be projected from the projector (3).
The spectrophotometer (10) is the unit that carries out color measurements on the projected image and transmits information on these measurements to the computer (6) by means of the data interchange unit (4). Information on the color measurements carried out by the spectrophotometer (10) is used to ensure that the colors and color tones of the projected image appear on the objects and/or spaces in accordance with how they are designed on the control platform (7). In embodiments wherein the spectrophotometer (10) is located within the system (1), preferably a number of colors and color tones belonging to a wide range of colors is projected onto the objects and/or spaces under the control of the control platform (7) prior to the image projection process. The measurements of these colors and color tones on the objects and/or spaces are collected from the spectrophotometer (10) and matched by the control platform (7) with the information of the actual color that is desired to be projected, and by this means a profile or calibration table is formed within the control platform (7). By this means, the control platform (7) determines which color or color tone is to be transmitted to the projective overlay platform (8) to ensure that the colors selected in the design using the control platform (7) are displayed correctly on the objects and/or spaces.
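A minimal sketch of how such a profile or calibration table might be built and used is given below; the nearest-neighbour correction and the `measure` callable standing in for the spectrophotometer reading are assumptions made purely for illustration and are not the claimed profiling method.

```python
import numpy as np

def build_color_profile(reference_colors, measure):
    """Project each reference colour, measure its appearance on the object with the
    spectrophotometer, and store (measured -> projected) pairs as a calibration table."""
    table = []
    for rgb in reference_colors:
        measured = measure(rgb)          # spectrophotometer reading for this projected patch
        table.append((np.asarray(measured, dtype=float), np.asarray(rgb, dtype=float)))
    return table

def corrected_color(desired_rgb, table):
    """Pick the drive colour whose measured appearance is closest to the desired colour."""
    desired = np.asarray(desired_rgb, dtype=float)
    distances = [np.linalg.norm(measured - desired) for measured, _ in table]
    return table[int(np.argmin(distances))][1]
```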
By means of the system (1) of the invention, an augmented reality application is formed by projecting images onto the environment and objects. The system (1) carries out three basic steps utilizing its elements as the said processes are realized. These steps are the scanning, designing and projecting steps.
The scanning process consists of the sub-steps of the depth sensor (2) scanning the environment, the depth sensor (2) transmitting the obtained data to the control platform (7), the control platform (7) forming a 3 dimensional model, and the static and non-static objects being determined. The scanning process may commence after the depth sensor (2) is calibrated. After it starts to obtain depth frames, the depth sensor (2) transmits these data by means of the data interchange unit (4) to the control platform (7) that operates on the computer (6), and the data obtained by the depth sensor (2) are used by the control platform (7) in the construction mode, i.e. for the formation of a 3 dimensional model. The 3 dimensional model created is transformed into a polygon mesh representation by the control platform (7). During this transformation, the control platform (7) may utilize a transformation algorithm found in the known state of the art. After this 3 dimensional model in the polygon mesh representation is formed in the control platform (7), the static and non-static objects of this model are either automatically determined by the control platform (7) or determined manually by the user (K) by means of the control platform (7). While the static objects may be elements such as walls and surfaces found in the environment, the non-static objects may be elements found in the environment that are mobile or are likely to be mobile. Data regarding this 3 dimensional model and the static and non-static objects, which enable these objects to be identified later on, are stored in the memory of the control platform (7).
Moreover, the top and side vectors of the scanned object may be determined by the user (K), by the sensors developed and/or by an algorithm. These vectors are stored inside the control platform (7) and may be used to support the algorithms or to develop new algorithms. Following the scanning process, i.e. scanning the environment, creating a 3 dimensional (volumetric) model of the environment and identifying the static and non-static objects, the depth frames still being received from the depth sensor (2) are only used by the control platform (7) for the tracing mode. The design process consists of the subdivision of the 3 dimensional model and the assignment and arrangement of the material to be projected onto the objects on the control platform (7). During the subdivision process, the 3 dimensional model formed by the control platform (7) is regarded as a background and the background is divided into more than one subdivision. The subdivision process may be carried out automatically, semi-automatically or manually by means of the control platform (7). In one embodiment of the invention, following the subdivision process, a better appearance may be provided by utilizing edge blending methods to blend the edges between the subdivisions. Following the subdivision process, the control platform (7) carries out a 2 dimensional surface parameterization process for each object. Then, by means of the control platform (7), a different material assignment is carried out on each object using the material data stored in the memory of the control platform (7). The materials assigned may have different properties, and data on characteristics such as diffuse, specular and bump may be present for each object and may differ. A preview of how the assigned materials appear on the objects may be shown to the user (K) by means of a virtual scene provided by the graphical interface of the control platform (7).
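The subdivision, 2 dimensional surface parameterization and material assignment steps might, purely for illustration, be pictured as follows; the planar projection used as the parameterization and the dictionary-based material library are hypothetical simplifications of the actual methods used on the control platform (7).

```python
import numpy as np

def planar_uv(points, drop=2):
    """Crude 2D surface parameterisation: drop one coordinate axis and normalise to [0, 1]."""
    uv = np.delete(np.asarray(points, dtype=float), drop, axis=1)
    uv -= uv.min(axis=0)
    return uv / np.maximum(uv.max(axis=0), 1e-9)

def assign_materials(subdivisions, material_library, choices):
    """Attach a material (e.g. diffuse/specular/bump maps) to each subdivision of the model.

    subdivisions:      {name: (N, 3) point array} produced by the subdivision step
    material_library:  {material_name: material data stored in the memory}
    choices:           {subdivision name: material_name} selected by the user (K)
    """
    scene = {}
    for name, points in subdivisions.items():
        scene[name] = {
            "uv": planar_uv(points),                    # per-object 2D surface parameters
            "material": material_library[choices[name]] # material assigned to this object
        }
    return scene
```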
Following the tracing and design processes, the image is transmitted to the projector (3) by means of the projective overlay platform (8), the computer (6) and the data interchange unit (4), and is projected onto the environment and/or the objects; in other words, the projection process is carried out. In order for the projection process to be carried out, the 3 dimensional model, the data on the static and non-static objects, the data on the subdivision process, the pattern maps and the data on the material assigned to each object, which are stored on the control platform (7), are transmitted to the projective overlay platform (8). In the event that an application is in place wherein the preview visualized by the user (K) by means of the graphical interface of the control platform (7) during the design process is also projected onto the objects and the environment, these data may be stored in such a manner that they are synchronized between the memory of the control platform (7) and the projective overlay platform (8) at each stage of the design process. In other words, certain stages of the projection and the design processes may be carried out simultaneously.
In the projection process, the projective overlay platform (8) carries out processes and conversions on the data so as to prevent distortions related to the projection. In the preferred embodiment of the invention, the processes carried out by the projective overlay platform (8) are ray tracing based methods. As a result of these processes and conversions, which are carried out in real time, the image to be projected by the projector (3) is finalized and is transmitted from the projective overlay platform (8) operating on the computer (6) to the projector (3) by means of the computer (6) and the data interchange unit (4). The fact that the projective overlay platform (8) carries out these processes and conversions in real time prevents distortions from forming in the projected images due to the movements of the stabilizing structure (5).
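A minimal sketch of such a real-time loop is given below; the `get_depth_frame`, `track_pose`, `render_frame` and `send_to_projector` callables are hypothetical placeholders standing in for the tracing mode of the control platform (7), the rendering of the projective overlay platform (8) and the output over the data interchange unit (4), and the fixed-rate pacing is an assumption made only for illustration.

```python
import time

def projection_loop(get_depth_frame, track_pose, render_frame, send_to_projector, fps=30):
    """Each frame: update the projector pose from the latest depth data, re-render the
    projector image for that pose, and send it out, so that movements of the
    stabilizing structure do not distort the projected result."""
    period = 1.0 / fps
    while True:
        frame = get_depth_frame()
        if frame is None:                 # sensor stopped -> leave the loop
            break
        pose = track_pose(frame)          # tracing mode: current depth-sensor/projector pose
        image = render_frame(pose)        # reverse-ray-traced projector image for this pose
        send_to_projector(image)
        time.sleep(period)                # crude pacing; a real system would be event-driven
```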
In an example embodiment, the system (1) of the invention may be used to project patterns representing the appearance of fabric onto furniture. In this situation, the
stabilizing structure (5) is moved; in other words, the depth sensor (2) obtains depth frames in order to form a depth map of the environment. The data obtained by the depth sensor (2) are transmitted by means of the data interchange unit (4) to the control platform (7) operating on the computer (6), and the control platform (7) creates and stores a 3 dimensional model representation of the environment using these data. By means of the control platform (7), the static and non-static objects within this environment are also identified. For example, while the walls and the surface of the environment are identified as static objects, the couch within the environment is stored as a non-static object, and these data are stored on the control platform (7). By means of this stored data, the couch, which is defined as a non-static object, may be identified even when in another environment. By this means, it is possible to carry out a projection onto only the couch without having to rescan the new environment. After the environment containing the couch is traced and the objects are identified, the environment is subdivided by the control platform (7) and 2 dimensional surface parameters are identified for the couch to enable overlay maps to be created. Then, the material to be projected onto the couch (for example, a fabric pattern image) is selected and a preview image in the form of a virtual scene is provided by means of the graphical interface provided by the control platform (7). The user (K) may move or rotate these overlay maps to vary the fabric overlay.
Then, the data created as a result of the tracing and design processes are once again transmitted to the memory of the projective overlay platform (8) operating on the computer (6), and the projective overlay platform (8) carries out processes on the image data to ensure that the distortions that may appear as a result of the projection are prevented and transmits the image to be projected to the projector (3) by means of the data interchange unit (4). As a result of the image projected by the projector (3), the desired pattern on the couch is displayed in the real environment.
In the same example embodiment, in the event of a situation that changes the volumetric representation of the object, such as, for example, a cushion being placed onto the couch during the projection, the control platform (7) is able to subtract the volume of the cushion from the 3 dimensional model and ensure that the projection remains only on the couch.
It is possible to develop various embodiments of the inventive system (1); it cannot be limited to the examples disclosed herein, and it is essentially as defined in the claims.
Claims
1. A system (1) that provides for an augmented reality application by means of projecting an image onto the environment and/or objects, comprising:
at least one depth sensor (2) that carries out the task of detecting the environment and objects,
at least one projector (3) that carries out the task of projecting images, at least one data interchange unit (4) that, in its most basic state, enables the depth sensor (2) and the projector (3) to exchange data with other related elements of the system (1),
at least one stabilizing structure (5),
at least one computer (6) with a processor that evaluates the data received by means of the data interchange unit (4) and that transmits information related to the image to be projected to the projector (3) by means of the data interchange unit (4),
at least one control platform (7) that is a graphics engine that enables the environment to be modelled using the data received from the depth sensor (2),
at least one projective overlay platform (8) that processes the images to be projected by the projector (3)
and characterized by
at least one projector (3) that is integrated with the depth sensor (2) and which carries out the task of projecting images,
at least one stabilizing structure (5) that, in the most basic state, holds the depth sensor (2) and the projector (3) together in an integrated manner,
at least one control platform (7) that enables the environment to be modelled, the environment to be traced, the objects to be defined, the
objects to be traced, the image to be projected onto the objects to be designed using the data received from the depth sensor (2), at least one projective overlay platform (8) that is a custom graphics engine that processes the images to be projected by the projector (3) in a manner that prevents their physical distortion.
2. A system (1) according to Claim 1, characterized by a projector (3) that is integrated to the depth sensor (2) in a manner such that its position in relation to the depth sensor (2) does not change during the movement of the stabilizing structure (5).
3. A system according to Claim 1 or Claim 2, characterized by a stabilizing structure (5) that is a metal plate and contains an outer casing around it.
4. A system (1) according to any of the preceding claims, characterized by a computer (6) that contains a GPU (Graphics Processing Unit).
5. A system (1) according to any of the preceding claims, characterized by a computer
(6) that contains a customized microprocessor card.
6. A system (1) according to any of the preceding claims, characterized by a control platform (7) that can evaluate the data received from the depth sensor (2) in the construction and tracing modes.
7. A system (1) according to any of the preceding claims, characterized by a control platform
(7) that consists of a memory that stores a 3 dimensional model or models formed on it or received in ready form from another source, 2 dimensional surface parameters for objects, pattern and brightness maps or appearance of fabrics under different lights, data on the material assigned to objects to be projected onto the objects.
8. A system (1) according to any of the preceding claims, characterized by a control platform (7) that determines, by means of feature matching, whether a model of an object in its memory and determined as being non-static belongs to any of the objects in the environment and identifies the objects in the environment as being static and non-static.
9. A system (1) according to any of the preceding claims, characterized by a control platform (7) that takes the decision as to whether projection is to be applied or not applied onto an object that newly enters the environment by determining, by means of feature matching, as to whether the model of an object previously added to its memory and for which a command regarding projection not being projected onto it has been specified belongs to any of the objects newly introduced in the environment.
10. A system (1) according to any of the preceding claims, characterized by a projective overlay platform (8) that can carry out real-time rendering.
11. A system (1) according to any of the preceding claims, characterized by a projective overlay platform (8) that uses a ray tracing based method.
12. A system (1) according to any of the preceding claims, characterized by a projective overlay platform (8) that uses the intrinsic and extrinsic parameters of the projector (3) that will send the ray, to determine the angle of refraction the ray will exhibit as it exits the lens of the projector (3).
13. A system (1) according to any of the preceding claims, characterized by a projective overlay platform (8) that calculates the path of the rays to ensure they are sent to the correct point of the objects and/or the environment, that enables the rays to be formed taking this calculation into consideration and
them to be transmitted by means of the data interchange unit (4) to the projector (3) to enable their projection onto the object.
14. A system (1) according to any of the preceding claims, characterized by a projective overlay platform (8) that contains a memory wherein the synchronization between the memory of the control platform (7) and the 3 dimensional models is carried out.
15. A system (1) according to Claim 1, characterized by a control circuit (9) that enables the commands which enable the input, output and management processes related to the depth sensor (2) and the projector (3) to be carried out to be transmitted to the computer (6).
16. A system (1) according to Claim 1 or Claim 15, characterized by a spectrophotometer (10) that carries out a color measurement on the projected image and transmits information relating to this measurement to the computer by means of the data interchange unit (4).
17. A system (1) according to Claim 16, characterized by a stabilizing structure (5) that holds the depth sensor (2), projector (3), data interchange unit (4), computer (6), control circuit (9) and spectrophotometer (10) together in an integrated manner.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TR2013/092289 | 2013-07-31 | ||
TR201392289 | 2013-07-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2015016798A2 true WO2015016798A2 (en) | 2015-02-05 |
WO2015016798A3 WO2015016798A3 (en) | 2015-04-02 |
Family ID: 52432527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/TR2014/000293 WO2015016798A2 (en) | 2013-07-31 | 2014-07-31 | A system for an augmented reality application |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015016798A2 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030043152A1 (en) * | 2001-08-15 | 2003-03-06 | Ramesh Raskar | Simulating motion of static objects in scenes |
DE102011015987A1 (en) * | 2011-04-04 | 2012-10-04 | EXTEND3D GmbH | System and method for visual presentation of information on real objects |
EP2667615A1 (en) * | 2012-05-22 | 2013-11-27 | ST-Ericsson SA | Method and apparatus for removing distortions when projecting images on real surfaces |
WO2014101955A1 (en) * | 2012-12-28 | 2014-07-03 | Metaio Gmbh | Method of and system for projecting digital information on a real object in a real environment |
Non-Patent Citations (2)
Title |
---|
EXTEND3DGmbH: "EXTEND3D - Werklicht HD", YouTube , 22 May 2012 (2012-05-22), XP054975673, Retrieved from the Internet: URL:https://www.youtube.com/watch?v=vJ0O-os3Ea8 [retrieved on 2015-01-15] * |
OLIVER BIMBER ET AL: "Modern approaches to augmented reality", ACM SIGGRAPH 2006 COURSES ON , SIGGRAPH '06, 1 January 2006 (2006-01-01), page 2, XP055162410, New York, New York, USA DOI: 10.1145/1185657.1185797 ISBN: 978-1-59-593364-5 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11055919B2 (en) | 2019-04-26 | 2021-07-06 | Google Llc | Managing content in augmented reality |
US11151792B2 (en) | 2019-04-26 | 2021-10-19 | Google Llc | System and method for creating persistent mappings in augmented reality |
US11163997B2 (en) | 2019-05-05 | 2021-11-02 | Google Llc | Methods and apparatus for venue based augmented reality |
US12067772B2 (en) | 2019-05-05 | 2024-08-20 | Google Llc | Methods and apparatus for venue based augmented reality |
WO2023167888A1 (en) * | 2022-03-01 | 2023-09-07 | Meta Platforms Technologies, Llc | Addressable projector for dot based direct time of flight depth sensing |
Also Published As
Publication number | Publication date |
---|---|
WO2015016798A3 (en) | 2015-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6638892B2 (en) | Virtual reality based apparatus and method for generating a three-dimensional (3D) human face model using image and depth data | |
US8797352B2 (en) | Method and devices for visualising a digital model in a real environment | |
JP4945642B2 (en) | Method and system for color correction of 3D image | |
US8218903B2 (en) | 3D object scanning using video camera and TV monitor | |
JP2022542573A (en) | Method and computer program product for generating three-dimensional model data of clothing | |
CN105513112B (en) | Image processing method and device | |
US20190362539A1 (en) | Environment Synthesis for Lighting An Object | |
CN106373178B (en) | Apparatus and method for generating artificial image | |
US20160381348A1 (en) | Image processing device and method | |
JP5299173B2 (en) | Image processing apparatus, image processing method, and program | |
JP2019510297A (en) | Virtual try-on to the user's true human body model | |
US20050190181A1 (en) | Image processing method and apparatus | |
US11681751B2 (en) | Object feature visualization apparatus and methods | |
WO2015016798A2 (en) | A system for an augmented reality application | |
JP2016162392A (en) | Three-dimensional image processing apparatus and three-dimensional image processing system | |
CN110110412A (en) | House type full trim simulation shows method and display systems based on BIM technology | |
JP5332061B2 (en) | Indoor renovation cost estimation system | |
US20150138199A1 (en) | Image generating system and image generating program product | |
JP2020095484A (en) | Texture adjustment supporting system and texture adjustment supporting method | |
JP6825315B2 (en) | Texture adjustment support system and texture adjustment support method | |
RU2735066C1 (en) | Method for displaying augmented reality wide-format object | |
US20230247184A1 (en) | Installation information acquisition method, correction method, program, and installation information acquisition system | |
JP6679966B2 (en) | Three-dimensional virtual space presentation system, three-dimensional virtual space presentation method and program | |
JP2003157290A (en) | Processing method and processing system for image simulation, image simulation processing program, and recording medium | |
CN109299989A (en) | Virtual reality dressing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14781327 Country of ref document: EP Kind code of ref document: A2 |
|
NENP | Non-entry into the national phase in: |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 12/05/2016) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14781327 Country of ref document: EP Kind code of ref document: A2 |