WO2014193326A1 - System for forming a virtual image - Google Patents

System for forming a virtual image Download PDF

Info

Publication number
WO2014193326A1
WO2014193326A1 (PCT/TR2014/000180)
Authority
WO
WIPO (PCT)
Prior art keywords
image
virtual image
unit
positioning
created
Prior art date
Application number
PCT/TR2014/000180
Other languages
French (fr)
Inventor
Cetin Ozgur BALTACI
Tunc BILGINCAN
Original Assignee
Baltaci Cetin Ozgur
Bilgincan Tunc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baltaci Cetin Ozgur, Bilgincan Tunc
Publication of WO2014193326A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/40Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/346Image reproducers using prisms or semi-transparent mirrors

Definitions

  • The invention relates to a device consisting of electronic and mechanical hardware that enables at least two- and/or three-dimensional images, formed as if actually present and mobile or immobile in accordance with at least one reference point with varying and/or stationary position, to be displayed at a varying distance to a reference point, at varying dimensions and/or with varying depth perception, using optical laws, in a local environment (living room, bus stop, street, etc.) whose position and/or form is or is not previously determined.
  • the device of the invention covers all wearable (glasses, etc.), mobile and/or immobile applications (handheld computers, mobile phones, laptop-desktop computers, televisions, wearable video glasses, cars, etc.) that can be implemented for all hardware.
  • Three-dimensional imaging technologies can be found in various forms and types. These are essentially stereoscopic and autostereoscopic imaging. They may be grouped as augmented reality displays, head-up displays (virtual display panels), wearable virtual display glasses and volume imaging devices.
  • Stereoscopic and autostereoscopic imaging technology is based on activating the human brain's perception of three dimensions. While stereoscopic and autostereoscopic imaging does not require the user to wear a corrector such as glasses in order to see the three-dimensional image, other methods require the use of various devices and glasses.
  • The reason the human eye is able to perceive three-dimensional objects with their depth is that, due to the distance between our two eyes, we perceive the objects we look at as lying at a certain depth.
  • In Figure-1, the area (1.1) that the left eye sees and the area (1.2) that the right eye sees are shown.
  • In Figure-2, we can see objects with "X" and "Y" written on them inside the perception areas (1.1), (1.2).
  • Figure-3 shows the images seen separately by the left and the right eye in Figure-2 overlapped on the screen.
  • In this case, a regulator or corrector in the form of glasses is needed, one that can direct the overlapped images on the screen separately to the left and the right eye.
  • In Xband glasses, there are two LCD screens where the lenses would normally be and one IR (infrared) receiver on the glasses.
  • The film is recorded with a horizontally polarized filter for the left eye and a vertically polarized filter for the right eye.
  • Various filters can be used to pass the ray's oscillation in the desired direction. These filters are the vertical and horizontal filters.
  • The horizontal polarization filter filters out the horizontal polarization coming from the ray, and the vertical polarization filter filters out the vertical polarization coming from the ray.
  • the projection device with two separate lenses has a vertical and horizontal polarization filter, respectively (the rays coming out of the lenses are filtered vertically and horizontally).
  • Head up display unit (virtual display panel): This is a system used in fighter aircraft for many years. This system enables the information on the dashboard to be shown in front of the pilot on a semipermeable surface so that the pilot is not distracted by looking at the dashboard. Similar systems have been utilized by automobile companies such as BMW in the models wherein the values indicated on the dashboard are reflected onto the front glass of the car so that the driver is able to keep his eyes on the road.
  • Smart glasses companies such as Lumus (www.lumus-optical.com), Samsung and Google have similar applications.
  • The work of these companies is entirely in the form of glasses systems, and the image the individual wearing the glasses sees is at a fixed distance from the individual.
  • the image also moves along so as to be at a fixed distance from the user.
  • When the user focuses on the formed image, he/she sees the objects behind the image as blurry.
  • When the user focuses on the objects behind the image, the formed image in turn appears blurry. This is the major disadvantage of such systems (it is not possible to use the glass of the glasses produced by Lumus with the glasses and system of this invention).
  • The invention enables images that do not exist in a physical environment, whose position is or is not determined beforehand, to be seen in the physical environment by means of electronic hardware by the individual using devices utilizing the technology of the invention.
  • The individual using a device utilizing this technology is able to see a virtual image fixed to the environment, as if it actually existed in the physical environment: when the individual approaches or moves away from the image, it grows or shrinks in size, and when the individual wearing the glasses turns his/her head to the left or right, the position of the image in the physical environment does not change.
  • the device enables real physical objects in the same (nearby) location (position) as the image to be seen by the individual at the same clarity when the individual focuses on the image newly formed in the environment.
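The grow/shrink behaviour described above follows directly from perspective geometry: if the virtual image is anchored at a fixed world position, it must be rendered at the angular size a real object of the same dimensions would subtend from the user's current distance. A minimal sketch with illustrative numbers (the 2 m "billboard" and the distances are assumptions, not values from the patent):

```python
import math

def angular_size_deg(height_m, distance_m):
    """Angle subtended at the eye by an object of the given height at
    the given distance (simple pinhole/perspective model)."""
    return math.degrees(2.0 * math.atan(height_m / (2.0 * distance_m)))

# A hypothetical virtual billboard 2 m tall anchored at a fixed world
# position: walking closer must enlarge the rendered image exactly as
# a real 2 m object would enlarge.
for d in (8.0, 4.0, 2.0):
    print(f"distance {d:g} m -> {angular_size_deg(2.0, d):.1f} deg")
```

Rendering the image at this distance-dependent angular size is what makes it appear anchored in the environment rather than fixed relative to the glasses.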
  • The invention can be applied to video chat, watching movies, reading newspapers, personal virtual applications, the display of virtual billboards in streets, military applications and the like, by means of using it as a personal computer or together with a portable computer.
  • Figure 1- Top view of boxes with the letters X and Y printed on them and an individual looking at these boxes.
  • Figure 2- View of these boxes seen separately by the left and the right eye of the individual in Figure-1.
  • Figure 3- Overlapped view of the images seen separately in Figure-2 by the left and the right eye on the screen.
  • Figure 4- General view of the system of the invention.
  • Figure 5- Position of the individual within the example environment limited by the LPS (local positioning system) and the position of the image formed.
  • Figure 6- Schematic view of an object reflecting in the mirror.
  • Figure 7- Positions of the individual, reflective semipermeable surface, screen and table.
  • Figure 8- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface, and the transparency of the semipermeable surface.
  • Figure 9- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface when the table is near the reflective semipermeable surface.
  • Figure 10- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface when the table is distant from the reflective semipermeable surface.
  • Figure 12- Schematic view of the main lens and optical image formation hardware of the system.
  • Figure 13- Schematic use of the main lens and optical image formation hardware.
  • Figure 14- The way in which the individual looking at the image sees the image in the schematic use of the main lens and optical image formation hardware.
  • Figure 15- The parts of the system of the invention described in the general view in Figure-4, shown as the equivalent of the main lens and optical image formation hardware.
  • Figure 16 (a)- Focusing of the human eye in order to see a nearby object.
  • Figure 16 (b)- Focusing of the human eye in order to see a distant object.
  • GPS global positioning system
  • K2- Ray sent from the head of the second image to the center point of the convex lens.
  • 12.S- The optical system formed of the lens system, the reflective semipermeable surface and the eye looking at this surface.
  • The system may be designed so as to form a physical whole and/or be formed of more than one separate part, and one or more of its functions can be carried out via the combination of different parts based on the needs of the system.
  • the system as presented in Figure-4, is formed of at least one computer unit (4.1), at least one global and at least one local positioning device (4.2), at least one virtual image unit (4.3) and the volume in which the virtual image is created (4.4).
  • the system utilizes cable and/or wireless communication devices (Bluetooth, wireless, etc.) each of which operates independent from and/or dependent on each other, physically integrated and/or dissociated with each other.
  • The system enables relatively high-level data transfer and multiple functional algorithms to be carried out. The processing power required for the operation of the system is therefore mainly provided by the computer unit (4.1).
  • The computer unit (4.1) principally supports three main processes: the management of communication with cable and wireless devices (4.1.1); assessment of the global and local positioning information from the virtual image unit (4.3), GPS (4.3.9) and LPS (4.3.10); and the calculation of axial rotation information received from the gyroscope (4.3.6) on at least one axis, acceleration information received from the accelerometer (4.3.7) in at least one direction, magnetic positioning information received from at least one magnetic compass unit (4.3.11), and the positioning information of the virtual image volume reference point unit (4.4) and the virtual image unit (4.3) in space and/or their current acceleration.
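As one hedged illustration of how gyroscope and accelerometer data for a single axis might be combined in such a calculation, a standard complementary filter can be sketched. The patent does not specify a fusion algorithm, so the function and its parameters below are illustrative assumptions:

```python
def complementary_filter(angle_deg, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """One-axis orientation fusion: integrate the gyroscope rate
    (smooth but drifting) and pull the estimate toward the
    accelerometer's gravity-derived angle (noisy but drift-free)."""
    return alpha * (angle_deg + gyro_rate_dps * dt) + (1.0 - alpha) * accel_angle_deg

# A stationary device: the gyro reports only a small bias while the
# accelerometer consistently reads 0 degrees of tilt.
angle = 10.0  # deliberately wrong initial estimate
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate_dps=0.01, accel_angle_deg=0.0, dt=0.01)
print(round(angle, 3))  # the estimate converges toward 0
```

The same idea generalizes to fusing the magnetometer and GPS/LPS inputs listed above, with weights chosen per sensor.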
  • GPS technology is generally used in open fields for global positioning by means of the assessment of the signals received from multiple satellites orbiting and constantly sending GPS signals to earth.
  • the determination of the global positioning of the system mentioned in this invention may also be provided by means of GPS satellites.
  • LPS technology, which operates under a similar logic to GPS satellites, is used for constant and/or mobile signal transmitters carrying out signal transmission on a local level.
  • The most important function of this section is the determination of the positioning, the calculation of the focus distance, and the determination of the dimensions of the two- and/or three-dimensional virtual image formed for a varying and/or constant position, and the transfer of all of this information to the virtual image unit in a processed and/or raw manner by means of cable and/or wireless communication devices (4.1.1).
  • the global and local positioning devices (4.2) shown in Figure-4 may be used to determine the position of the individual using the system and/or the position of the virtual image unit (4.3) and/or the position of the volume in which the virtual image is created by means of the reference point unit (4.4) in open and closed spaces and/or transmit the same to the computer unit by means of cable and/or wireless communication devices (4.2.3).
  • the system (4) includes two main positioning technologies. These are GPS (global position system) and LPS (local positioning system).
  • signals (5.3) sent from local positioning devices and/or global positioning system devices (4.2) as multiple signal transmitters are detected by the system (4) described in the invention; local positioning including those of open and/or closed areas is carried out and the signals are transmitted to the computer unit (4.1) by means of cable and/or wireless communication devices (4.2.3).
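A common way to turn the signals from multiple fixed transmitters into a local position is trilateration. The sketch below (2D, three transmitters) is an illustrative implementation of that general idea, not the patent's own method; the transmitter coordinates and the receiver position are made-up values:

```python
import math

def trilaterate(anchors, dists):
    """Closed-form 2D trilateration from three fixed transmitters:
    subtracting the first circle equation from the other two
    linearizes the problem into a 2x2 linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three illustrative transmitter positions; the receiver truly sits at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3.0, 4.0), a) for a in anchors]
est = trilaterate(anchors, dists)
print(est)  # approximately (3.0, 4.0)
```

In practice the distances would be estimated from signal properties (e.g. time of flight or signal strength) and more than three transmitters would be used with a least-squares solve.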
  • the positioning of the virtual image unit (4.3) and/or the virtual image volume reference point (4.4) can be carried out at any point.
  • The units which provide cabled and/or wireless, continuous and/or discontinuous data transfer between the virtual image units (4.3) and the computer unit (4.1) perform, in accordance with the requirements, the transfer of the desired analogue and/or digital, two-way and/or one-way image and/or sound data.
  • The virtual image unit (4.3) may be expressed with principal units integrated and/or dissociated with other sections (4.1, 4.2 and 4.4).
  • The virtual image unit (4.3) includes cable and/or wireless data communication devices (4.3.1) enabling the desired image and/or audio, analog and/or digital, two-way and/or one-way data transfer to the cable and/or wireless communication devices (4.1.1) found in the computer unit.
  • The data (images to be created virtually) received at the virtual image unit (4.3) from the computer unit by means of the cable and/or wireless communication devices (4.3.1) is processed by the image processing hardware (4.3.2) and formed on the screen (4.3.5.1) as part of the optical image processing hardware (4.3.5).
  • audio broadcasting is carried out over the speaker system (4.3.8) by means of the voice processing hardware (4.3.3).
  • the unit possesses internal power units (4.3.4) and can store energy.
  • The gyroscope (4.3.6), accelerometer (4.3.7) and magnetometer (4.3.11) systems located at the virtual image unit detect small changes in position instantaneously and relatively, and these changes are tracked by the system specified in Figure-4.
  • The information related to position and angle detected by the various sensors (4.3.6, 4.3.7, 4.3.9, 4.3.10, 4.3.11) is transmitted to the virtual image unit (4.3) and/or the computer unit (4.1).
  • This comparative data and the position of the virtual image unit (4.3) are detected dependent on and/or independent of the computer unit.
  • The luminance of the virtual image created by the system is adjusted according to the environment in which the device is located, using light sensors (4.3.12) for this purpose.
  • at least one microphone (4.3.13) is included for applications in the system requiring voice detection (voice command, voice recording, etc.).
  • The section in Figure-4 specified as the virtual image volume reference point unit (4.4) can be expressed as the position at which the virtual image is formed by the system (4) of the invention; in the event that it is desired that the location of the area in which the image will be created not be defined and/or specified, a space with at least a single degree of freedom is used.
  • This section includes at least one cable and/or wireless communication device enabling communication with the computer unit (4.1).
  • At least one GPS (4.4.2) and/or LPS (4.4.3) unit is included in order to determine the volume in which the image will be formed for cases where positioning is not carried out and/or not desired.
  • At least one light sensor (4.4.4) is included to calculate the quantity of light where the image will be formed and to adjust the quantity of light of the virtual image. The unit also includes internal power units (4.4.5) and can store energy, providing ease of use and mobility to the system.
  • the virtual image is created with the spatial position, distance and/or varying depth perception desired and the user can see the virtual image by means of the optical image processing hardware (4.3.5).
  • The optical image processing hardware shown with dashed lines in Figure-15 (4.3.5 in Figure-4 and Figure-15) is formed of the screen (LED, TFT, AMOLED, projection, laser projection, etc.) (4.3.5.1), the semipermeable reflective surface (4.3.5.2) and the lens system.
  • the most important hardware to enable the image created to achieve the objective of the invention to appear as if it actually exists is the optical image processing hardware (4.3.5).
  • The effect created by the optical image processing hardware can be likened to the optical reflection everyone sees when looking out of the window of a public transport vehicle (especially when the light intensity in the internal environment is greater than in the external environment).
  • the windows of the vehicle act as a semipermeable mirror in this optical reflection and the virtual image of the objects within the vehicle (passengers, seats, etc.) can be seen as reflections overlapping with the objects outside of the vehicle.
  • an individual looking from inside the vehicle sees the virtual image of those objects as if they appear outside the vehicle at the same distance as the distance of the objects to the window.
  • an individual looking at these virtual images from inside the vehicle is able to see these reflected images outside the vehicle, in the real world.
  • the case described here is an example of how a hologram image can be created.
  • A screen (7.1) is placed in front of the semipermeable reflective surface (7.2) that can create the mirror effect shown in Figure-7; the part of the screen where the image is displayed faces the reflective semipermeable surface (7.2), with a certain distance (7.4) between the screen and the surface.
  • A table (7.3) is placed on the other side of the semipermeable reflective surface (7.2), at a distance (7.5) equal to the distance (7.4) between the screen and the surface.
  • the screen (4.3.5.1), on which the image to be created virtually is physically formed and reflected;
  • the semipermeable reflective surface (4.3.5.2), enabling the virtual image to be physically created;
  • the optical system (4.3.5.3), enabling the created virtual image (14.1) to be viewed with changing position, angle and/or depth perception by the individual using the device.
  • The screen (4.3.5.1), part of the optical image processing hardware (4.3.5) in Figure-4, is the element within the device on which the desired virtual image (14.1) is physically formed as a digital and/or analogue image by means of the system subject to the invention shown in Figure-15.
  • This device is hardware on which the image (14.1) that is to be created virtually at a desired size and/or perspective is created physically, and whose reflection is formed at a reflective surface (4.3.5.2).
  • The part where the physical image created by the screen (4.3.5.1) is converted into a virtual image (14.1) is the semipermeable reflective surface (4.3.5.2).
  • The semipermeable reflective surface (4.3.5.2) in Figure-15 is the surface that reflects part of the ray coming onto it in accordance with optical laws and allows part of it to pass through. This phenomenon can be explained with the image of an object reflected in a mirror.
  • An object (6.2) is placed in front (6.6) of the mirror (6.1) at a certain distance (6.4) from the mirror.
  • The virtual image (6.3) of the object is formed behind (6.7) the mirror such that the distance (6.4) between the object and the mirror and the distance (6.5) between the virtual image and the mirror are equal.
  • As the mirror is not semipermeable, the objects behind it are not visible.
  • The semipermeable surface (4.3.5.2) defined in this invention acts as a mirror, but as it allows part of the light coming onto it to pass, the objects behind it are also visible.
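The equal-distance property of the plane mirror described above can be computed by reflecting the object point across the mirror plane; a minimal sketch with illustrative coordinates:

```python
def mirror_image(point, mirror_x):
    """Virtual image of a point object in a plane mirror lying along
    the vertical line x = mirror_x: reflect the point across the plane,
    so the image sits as far behind the mirror as the object is in front."""
    x, y = point
    return (2.0 * mirror_x - x, y)

obj = (1.0, 0.5)              # object 1.0 unit in front of the mirror plane
img = mirror_image(obj, 2.0)  # mirror plane at x = 2.0
print(img)  # (3.0, 0.5): the image lies 1.0 unit behind the mirror
# The object-to-mirror and mirror-to-image distances are equal:
print(abs(2.0 - obj[0]) == abs(img[0] - 2.0))  # True
```

The semipermeable surface behaves identically for the reflected fraction of the light, which is why the screen's reflection appears at the table's position when the two distances are matched.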
  • The lens system (4.3.5.3) in Figure-15 enables the created virtual image (14.1) to be overlapped with the real image (the table (7.3)) at the distance at which it is desired to display the virtual image, by means of changing the focal point of the virtual image (14.1) created on the semipermeable reflective surface (4.3.5.2).
  • This physical feature is provided in a number of different ways utilizing present technology.
  • The most basic example of this uses a fixed-focus lens system (11.S) with solid lenses.
  • Figure-11 shows how this lens system operates in its simplest state.
  • The eye (11.12) looks at the lens system (11.S) from in front of the convex lens (11.7) in order to see the object (11.1) behind the lens system.
  • The eye (11.12) sees the final image (11.4) of the object as being further away and smaller than the object.
  • the image of objects is formed in concave and convex lenses utilizing optical laws.
  • The following process is used in order to construct the final image (11.4) of the object on paper.
  • The first image (11.2) of the object is formed by concave lens number one (11.5).
  • The first image of the object (11.2) acts as the real object for lens number two (11.6), and the second image of the object (11.3) is formed.
  • The second image of the object (11.3) in turn acts as the object, and the third image of the object (11.4) is formed.
  • Figure-11 shows the single focal point (11.8) of concave lens number one, the single focal point (11.9) of concave lens number two, and the two focal points (11.10), (11.11) of the convex lens.
  • The second image of the object (11.3) is formed between the focal point (11.11) of the convex lens and the convex lens.
  • The final image of the object (11.4) is formed behind the object (11.1) in accordance with optical laws.
  • The eye (11.12) sees this third image (11.4).
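The successive image formation described above can be traced numerically with the thin-lens equation, each lens's image acting as the object for the next lens. The focal lengths, spacings and object distance below are illustrative assumptions, not values from the patent:

```python
def image_distance(f, d_obj):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, using the convention
    that d_o > 0 in front of the lens, d_i > 0 for a real image behind
    the lens, d_i < 0 for a virtual image in front of it; f < 0 for a
    concave (diverging) lens."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def trace(focals, gaps, d_obj):
    """Trace an object through a chain of thin lenses: the image formed
    by each lens acts as the object for the next, as in Figure-11.
    gaps[i] is the distance from lens i to lens i+1. Returns the final
    image distance measured from the last lens."""
    d_img = None
    for i, f in enumerate(focals):
        d_img = image_distance(f, d_obj)
        if i < len(gaps):
            d_obj = gaps[i] - d_img  # image position relative to the next lens
    return d_img

# Illustrative setup: two concave lenses followed by one convex lens,
# with the object 10 units in front of the first lens.
final = trace([-5.0, -5.0, 8.0], [4.0, 4.0], 10.0)
print(final)  # negative: a virtual image, far out on the object side of the system
```

With these example values the final image is virtual and lies well beyond the original object, consistent with the text's statement that the eye sees the final image as further away and smaller.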
  • The movement of the object (11.1) in the (-) direction will also move the final image (11.4) in the (-) direction (the object may be driven by electrical-electronic and/or mechanical and/or pneumatic and/or hydraulic drive systems).
  • The horizontal ray (C1) exiting from the head of the object refracts (C1.1) in a manner such that its extension passes through the focal point.
  • The first image (11.2) of the object acts as an object for concave lens number two.
  • The horizontal ray (G.1) exiting from the head of the first image refracts in a manner such that its extension passes through the focal point (11.9) of concave lens number two.
  • The second image (11.3) of the object is formed as a result of this intersection.
  • The second image (11.3) of the object acts as an object for the convex lens.
  • The final image (11.4) formed in the lens system acts as an object for the semipermeable reflective surface (12.1).
  • The image (12.2) is formed on the semipermeable reflective surface.
  • The position of this image is such that the distance (12.3) of the final image to the semipermeable mirror and the distance (12.4) of the image formed on the semipermeable reflective surface to that surface are equal (as described previously in relation to Figure-7).
  • The final image (12.2) formed by the semipermeable reflective surface and the lens system (12.S) can be seen by the eye (11.12) looking at the reflective semipermeable surface (12.1).
  • The final image (11.4) of the object in the lens system needs to be moved further (-) from the object (11.1). For this reason, concave lens number two (11.6) needs to be moved in the (-) direction. In this situation, the final image (11.4) will become larger and at the same time move in the (-) direction. If the object (11.1) is moved in the (+) direction, the enlargement of the final image will remain partially fixed.
  • To achieve this, concave lens number two (11.6) and/or the object (11.1) needs to be moved in the (+)/(-) directions.
  • The action of changing the position of the final image (11.4) of the lens system changes the position and size of the final image (12.2) reflected from the semipermeable reflective surface.
  • The main logic here is that the focusing distance of the formed image is changed by means of changing the positions of the lenses and the object.
  • The lens system (4.3.5.3) can be defined as an optical regulator that enables the individual (1), by means of the optical laws of physics, to see the ray (image) received from the screen (4.3.5.1) at varying dimensions and/or with varying depth perception on the semipermeable reflective surface (4.3.5.2), which is in the range of vision.
  • This optical regulator specified for the virtual image (14.1) is driven by the electric-electronic and/or mechanical and/or pneumatic and/or hydraulic drive system (4.3.5.4) located inside the lens system (4.3.5.3).
  • The case in which the distance (7.5) between the table and the semipermeable reflective surface in Figure-7 is increased is shown in Figure-13.
  • The increased distance between the table and the semipermeable reflective surface (7.2) in Figure-13 is shown as (13.1), and this figure includes the lens system box (11.K) containing the lens system (11.S) theoretically schematized in Figure-11.
  • the technology of the invention may be used in various applications in many different ways as dependent on and/or independent from many other technologies and is a technology that enables a three and/or two dimensional virtual image to be seen in a local environment of which the position and/or form is previously determined and/or not determined.
  • the most basic application of this invention is the previously mentioned wearable video glasses.
  • a dynamic image is obtained in these systems and the user is able to see the formed image at different distances, different dimensions and/or different depth perception as being mobile and/or immobile in accordance with at least one reference point with varying and/or stationary position.
  • each eye should be able to see a separate image in order for the stereoscopic image to be formed.
  • The optical image processing hardware found in the system of the invention needs to use at least one optical image processing hardware unit (4.3.5) for each eye, forming dependent and/or independent images for each eye of the user so that each eye sees them separately.
  • In this way, each eye of the user is able to see separately the independent image it must see, and the system of the invention is able to create a stereoscopic image.
  • The application to known head-up display systems enables the three- and/or two-dimensional virtual image to be formed as if it actually exists at different distances, different dimensions and/or with different depth perception, mobile and/or immobile in accordance with at least one reference point with varying and/or stationary position; by this means, information for cars such as GPS, speed, warnings, advertisements and the like can be displayed without distracting the driver.
  • This application may similarly be applied in all other transportation vehicles as well.
  • The invention may find application in many other similar uses, whether or not specified in the above examples.

Abstract

The invention relates to a device consisting of electronic and mechanical hardware that enables at least two- and/or three-dimensional images, formed as if actually present and mobile or immobile in accordance with at least one reference point with varying and/or stationary position, to be displayed at a varying distance to a reference point, at varying dimensions and/or with varying depth perception, using optical laws, in a local environment (living room, bus stop, street, etc.) whose position and/or form is or is not previously determined.

Description

DESCRIPTION
SYSTEM FOR FORMING A VIRTUAL IMAGE
TECHNICAL FIELD
The invention relates to a device consisting of electronic and mechanical hardware that enables at least two- and/or three-dimensional images, formed as if actually present and mobile or immobile in accordance with at least one reference point with varying and/or stationary position, to be displayed at a varying distance to a reference point, at varying dimensions and/or with varying depth perception, using optical laws, in a local environment (living room, bus stop, street, etc.) whose position and/or form is or is not previously determined. (Virtual image: the two- and/or three-dimensional image formed to display, as if actually there, visual aids [video broadcasts and any kind of picture, billboard, traffic sign, etc.] that do not exist physically and/or are not in the physical vicinity, by means of reflection at varying distances, varying dimensions and/or with varying depth perception, upon the contact of an individual and/or individuals with a device utilizing the subject matter technology.) The device of the invention covers all wearable (glasses, etc.), mobile and/or immobile applications (handheld computers, mobile phones, laptop-desktop computers, televisions, wearable video glasses, cars, etc.) that can be implemented for all hardware.
PRIOR ART
In today's world, three-dimensional imaging technologies can be found in various forms and types. These are essentially stereoscopic and autostereoscopic imaging. They may be grouped as augmented reality displays, head-up displays (virtual display panels), wearable virtual display glasses and volume imaging devices.
Stereoscopic and autostereoscopic imaging technology is based on activating the human brain's perception of three dimensions. While stereoscopic and autostereoscopic imaging does not require the user to wear a corrector such as glasses in order to see the three-dimensional image, other methods require the use of various devices and glasses. The reason the human eye is able to perceive three-dimensional objects with their depth is that, due to the distance between our two eyes, we perceive the objects we look at as lying at a certain depth. In Figure-1, the area (1.1) that the left eye sees and the area (1.2) that the right eye sees are shown. In Figure-2, we can see objects with "X" and "Y" written on them inside the perception areas (1.1), (1.2). The situation where we look at the figure (1.3) on which the letters X and Y have been written from the front, and what the right and left eyes see in Figure-2, are given separately. As the left and right eyes perceive the objects from different angles, the left eye can see the difference between the objects (2.1) while the right eye cannot. The difference between these two perceptions is combined with the depths between the objects, and a perception of depth is established. So it can be said that the perception of depth is a phenomenon established by the brain by combining the different perceptions of both eyes of a person looking at a three-dimensional object. Three-dimensional imaging technologies for TV and the cinema are created using this physical condition. In said technologies the film is recorded from two different angles, namely the angles the left and the right eye each see, and said images are then converged on top of each other so that a depth perception is virtually created.
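The depth cue described above can be quantified with the standard pinhole stereo model: the positional difference (disparity) between the two eyes' views of a point scales inversely with its depth. The focal length in pixels and the disparity values below are illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: a point at depth Z metres projects with a
    horizontal disparity of d = f * B / Z pixels between the two views,
    so Z = f * B / d. Larger disparity means a nearer point."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: eye separation ~0.065 m, focal length 800 px.
for d_px in (50.0, 25.0, 10.0):
    z = depth_from_disparity(800.0, 0.065, d_px)
    print(f"disparity {d_px:g} px -> depth {z:.2f} m")
```

This inverse relation is why the difference between the two recorded camera angles, when presented separately to the two eyes, reconstructs a depth percept.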
Figure-3 shows the images seen separately by the left and the right eye in Figure-2 as overlapped on the screen. In this case, a regulator or a corrector in the form of glasses is needed that directs each of the overlapped images on the screen to the eye that is supposed to see it.
This is the logic behind creating images for three dimensional televisions and movie theatres. There are various glasses technologies that utilize this physical event, among them the Xband and RealD 3D technologies. These technologies exhibit certain differences, but all provide a three dimensional (depth) perception by showing each eye only the image it is supposed to see.
In Xband glasses, there are two LCD screens where the lenses would normally be and one IR (infrared) receiver on the glasses. Signals that can be picked up by this IR receiver are sent in the cinema theatre simultaneously with the movie, and the LCD screens used instead of lenses shut and open at very fast intervals (120 Hz), so that a separate image is formed for each eye.
In RealD 3D glasses systems, however, the film is recorded with a horizontally polarized filter for the left eye and a vertically polarized filter for the right eye. (Various filters can be used to pass only the light waves oscillating in a desired direction. These are the vertical and horizontal polarization filters: the horizontal polarization filter passes only the horizontally polarized component of the incoming light, and the vertical polarization filter passes only the vertically polarized component.)
In movie theatres, the projection device has two separate lenses fitted with a vertical and a horizontal polarization filter, respectively (the rays coming out of the two lenses are polarized vertically and horizontally). When the two images, filmed taking into account the distance between our eyes, are simultaneously projected onto the screen, an individual looking at the screen with the naked eye sees an overlapped, blurred image as shown in Figure-3. However, because the two overlapped images carry different polarizations, the image projected through the vertical filter can only be seen through a vertical filter and is blocked by a horizontal filter, and the image projected through the horizontal filter can only be seen through a horizontal filter. Thus, when one of these filters is placed over the right eye and the other over the left eye, only the right eye will see the image the right eye is supposed to see and only the left eye will see the image the left eye is supposed to see. By this means, as the two images on the single screen are separated between the eyes, a depth perception is formed.
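The filtering behaviour described above follows Malus's law for linear polarizers. The sketch below is illustrative only and is not part of the described system:

```python
import math

def transmitted_intensity(i0, angle_deg):
    # Malus's law: a linear polarizer passes I = I0 * cos^2(theta),
    # where theta is the angle between the light's polarization axis
    # and the filter's transmission axis.
    return i0 * math.cos(math.radians(angle_deg)) ** 2

# An image projected through a vertical polarizer passes fully through
# an aligned (0 degree) analyzer and is blocked by a crossed (90 degree)
# one, which is how each eye receives only one of the two projections.
aligned = transmitted_intensity(1.0, 0.0)    # full intensity
crossed = transmitted_intensity(1.0, 90.0)   # essentially zero
```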
In today's world, in addition to technologies of this type, there is also various work related to the formation of three dimensional and/or volumetric hologram images in the air.
Technology for creating hologram images using lasers can be found at Miraikan in Japan (The National Museum of Emerging Science and Innovation (Miraikan) / www.miraikan.jst.go.jp). Here, a high energy laser aimed at ionized air forms a three dimensional image in the air. However, as the laser used carries high density energy, it is highly dangerous and is currently only displayed at the museum. Augmented reality imaging technology: An image of the local environment is obtained using a camera and viewed on a screen. The camera and the screen through which the image is viewed are connected to a computer system, in which the obtained image is analyzed; objects that are not present in the environment of the video image but are formed in the computer system are shown to the user as if they were in the same environment as the video image. A similar application is described in patent applications US 2008/0071559 and WO 2011/144793.
Head-up display unit (virtual display panel): This is a system that has been used in fighter aircraft for many years. It enables the information on the dashboard to be shown in front of the pilot on a semipermeable surface so that the pilot is not distracted by looking down at the dashboard. Similar systems have been utilized by automobile companies such as BMW in models wherein the values indicated on the dashboard are reflected onto the windshield of the car so that the driver is able to keep his eyes on the road.
Many patent applications related to this system have been submitted. One such example is patent application WO 2010/103596 submitted by Honda.
Smart glasses: Companies like Lumus (www.lumus-optical.com), Samsung and Google have similar applications. However, the work of these companies is entirely in the form of glasses systems, and the image seen by the individual wearing the glasses is at a fixed distance from the individual. When the user moves his/her head left and right or up and down, or walks around while wearing the glasses, the image also moves along so as to remain at a fixed distance from the user. When the user focuses on the formed image, he/she sees the objects behind the image as blurry; when the user focuses on the objects behind the image, the formed image in turn appears blurry. This is the major disadvantage of such systems (it is not possible to use the glass of the glasses produced by Lumus with the glasses and system of this invention).

AIM OF THE INVENTION
The invention enables images that do not exist in a physical environment, the position of which may or may not be determined beforehand, to be seen in that physical environment by means of electronic hardware by the individual using devices utilizing the technology of the invention. In this application, the individual using such a device is able to see a virtual image fixed to the environment, as if it actually existed in the physical environment: when the individual approaches or moves away from the image it grows or shrinks accordingly, and when the individual turns his/her head to the left or right while wearing the glasses, the position of the image in the physical environment does not change.
The device enables real physical objects in the same (nearby) location as the image to be seen by the individual with the same clarity when the individual focuses on the image newly formed in the environment. Moreover, the invention can be applied to video chat, watching movies, reading newspapers, personal virtual applications, the display of virtual billboards in streets, military applications and the like, by using it as a personal computer or together with a portable computer.
Description of the drawings
The structural and characteristic features and all of the advantages of the invention may be better understood in light of the figures provided below and the detailed descriptions provided referencing these figures; the invention should therefore be assessed taking these figures and detailed descriptions into account.
Figure 1- Top view of boxes with the letters X and Y printed on them and an individual looking at these boxes.
Figure 2- View of these boxes seen separately by the left and the right eye of the individual in Figure-1.
Figure 3- Overlapped view of the images seen separately in Figure-2 by the left and the right eye on the screen.
Figure 4- General view of the system of the invention.
Figure 5- Position of the individual within the example environment limited by the LPS (local positioning system) and the position of the image formed.
Figure 6- Schematic view of an object reflecting in the mirror.
Figure 7- Positions of the individual, reflective semipermeable surface, screen and table.
Figure 8- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface and the transparency of the semipermeable surface.
Figure 9- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface when the table is near the reflective semipermeable surface.
Figure 10- The way in which the individual sees the image on the screen by means of looking at the reflective semipermeable surface when the table is distant from the reflective semipermeable surface.
Figure 11- Lens system
Figure 12- Schematic view of the main lens and optical image formation hardware of the system.
Figure 13- Schematic use of the main lens and optical image formation hardware.
Figure 14- The way in which the individual looking at the image sees the image in the schematic use of the main lens and optical image formation hardware.
Figure 15- The parts of the system of the invention described in the general view in Figure-4, shown as the equivalent of the main lens and optical image formation hardware.
Figure 16 (a) - Focusing of the human eye in order to see a nearby object.
Figure 16 (b) - Focusing of the human eye in order to see a distant object.
Description of the references in the figures:
1) Individual.
1.1) Area visualized by the left eye.
1.2) Area visualized by the right eye.
1.3) Objects marked X and Y.
2.1) Distance between objects marked X and Y.
2.2) Depth between objects marked X and Y.
4) Virtual image system.
4.1) Computer unit content.
4.1.1) Cable and/or wireless communication devices inside the computer unit.
4.1.2) Positioning software algorithm inside the computer unit.
4.1.3) Image processing software inside the computer unit.
4.2) Global and local positioning device systems.
4.2.1) Local positioning device.
4.2.2) Global positioning device.
4.2.3) Cable and/or wireless communication devices inside the global and local positioning device systems.
4.3) Virtual image unit content.
4.3.1) Cable and/or wireless communication devices inside the virtual image unit.
4.3.2) Digital image processing hardware.
4.3.3) Voice processing hardware.
4.3.4) Power unit.
4.3.5) Optical image processing hardware content.
4.3.5.1) Screen inside the virtual image unit.
4.3.5.2) Semipermeable reflective surface inside the virtual image unit.
4.3.5.3) Optical image focus system and drive mechanism inside the virtual image unit.
4.3.5.4) Lens system drive system.
4.3.6) Gyroscope.
4.3.7) Accelerometer.
4.3.8) Speaker system.
4.3.9) GPS (global positioning system) inside the virtual image unit.
4.3.10) LPS (local positioning system) inside the virtual image unit.
4.3.11) Magnetometer.
4.3.12) Light sensor.
4.4) Virtual image volume reference unit.
4.4.1) Cable and/or wireless communication devices inside the virtual image volume reference unit.
4.4.2) GPS (global positioning system) inside the virtual image volume reference unit.
4.4.3) LPS (local positioning system) inside the virtual image volume reference unit.
4.4.4) Light sensor inside the virtual image volume reference unit.
4.4.5) Virtual image volume reference point unit.
5) Physical environment limited by LPS.
5.1) Individual inside the physical environment limited by the LPS system.
5.2) The image of which the formation is desired within the physical environment limited by the LPS.
5.3) Signals sent by the signal transmitter.
6.1) Mirror.
6.2) Object in front of the mirror.
6.3) The image formed in the mirror object system of Figure-6.
6.4) Distance of the object to the mirror.
6.5) Distance of the image formed in the mirror object system of Figure-6 to the mirror.
6.6) Front of the mirror.
6.7) Back of the mirror.
7.1) Screen.
7.2) Semipermeable reflective surface.
7.3) Table.
7.4) Distance of the semipermeable reflective surface to the screen.
7.5) Distance of the table to the semipermeable reflective surface.
8.1) Image of the screen formed on the semipermeable reflective surface.
8.2) Part of the table visible within the semipermeable reflective surface.
9.1) Section wherein the image formed on the semipermeable reflective surface cannot be seen outside of the semipermeable surface.
11.S) Lens system.
11.1) Object in front of the lens system.
11.2) First image.
11.3) Second image.
11.4) Final image.
11.5) Concave lens number one.
11.6) Concave lens number two.
11.7) Convex lens.
11.8) Focal point of concave lens number one on the side of the object.
11.9) Focal point of concave lens number two on the side of the object.
11.10) Focal point of the convex lens on the side without the object.
11.11) Focal point of the convex lens on the side of the object.
11.12) Eye looking at the final image.
11.K) Box bearing the optical system.
M1) Center point of concave lens number one.
M2) Center point of concave lens number two.
M3) Center point of the convex lens.
C1) Horizontal ray exiting from the head of the object.
C1.1) Ray refracting such that the extension of the horizontal ray exiting from the head of the object passes through the focal point of concave lens number one.
C2) Ray sent from the head of the object to the center point of concave lens number one.
G1) Horizontal ray exiting from the head of the first image.
G1.1) Ray refracting such that the extension of the horizontal ray exiting from the head of the first image passes through the focal point of concave lens number two.
G2) Ray sent from the head of the first image to the center point of concave lens number two.
K1) Horizontal ray exiting from the head of the second image.
K1.1) Ray refracting as a result of the horizontal ray exiting from the head of the second image passing through the focal point of the convex lens.
K2) Ray sent from the head of the second image to the center point of the convex lens.
12.S) The optical system formed of the lens system, the reflective semipermeable surface and the eye looking at this surface.
12.1) Reflective semipermeable surface shown as being in front of the lens system.
12.2) The final image formed by means of the lens system, the object, the reflective semipermeable surface and seen by the individual looking at the reflective semipermeable surface.
12.3) The distance between the reflective semipermeable surface and the final image formed on the lens system.
12.4) The distance between the semipermeable reflective surface and the image of the final image formed in the lens system formed on the semipermeable reflective surface.
13.1) The increased distance between the reflective semipermeable surface and the table.
14.1) The new virtual image near the far table formed in the lens system.
16) Human eye diagram.
16.1) The star object the human eye focuses on to see near and far.
16.2) Human eye lens.
16.3) Human eye retina.
16.4) Variable distance between the eye and the star object.

DETAILED DESCRIPTION OF THE INVENTION
In order to fully explain the system of the invention, firstly the optical laws that enable the invention to exist, and how they enable this system to operate by means of electronic hardware, shall be explained. The general diagram of the device and hardware is presented in Figure-4.
This invention, which embodies multiple technological hardware in terms of functionality, can in principle be examined in terms of four separate parts. The system may be designed so as to be formed of a structure physically constituting a whole and/or formed of more than one separate part, and one and/or more than one of the functions can be carried out via the combination of different parts based on the needs of the system. The system, as presented in Figure-4, is formed of at least one computer unit (4.1), at least one global and at least one local positioning device (4.2), at least one virtual image unit (4.3) and the volume in which the virtual image is created (4.4). The system utilizes cable and/or wireless communication devices (Bluetooth, wireless, etc.), each of which operates independently from and/or dependently on the others, physically integrated and/or dissociated from each other. These parts, which are in continuous and/or intermittent communication, enable the system to form virtual images in the desired shape, size and environments.
The system requires relatively high level data transfer and multiple functional algorithms to be carried out. The processing power required in the operation of the system is therefore mainly provided by the computer unit (4.1).
The computer unit (4.1) principally supports three main processes. These are the management of communication with cable and wireless devices (4.1.1); the assessment of the global and local positioning information from the GPS (4.3.9) and LPS (4.3.10) of the virtual image unit (4.3); and the calculation, from the axial rotation information received from the gyroscope (4.3.6) on at least one axis, the acceleration information received from the accelerometer (4.3.7) in at least one direction and the magnetic positioning information received from at least one magnetic compass unit (4.3.11), of the positioning in space and/or the current acceleration of the virtual image volume reference point unit (4.4) and the virtual image unit (4.3). (GPS technology is generally used in open fields for global positioning by means of the assessment of the signals received from multiple satellites orbiting the earth and constantly sending GPS signals. The global positioning of the system mentioned in this invention may also be provided by means of GPS satellites. LPS technology, which operates under a similar logic, relies on fixed and/or mobile signal transmitters carrying out signal transmission on a local level.) The most important function of this section is the determination of the positioning, the calculation of the focus distance and the determination of the dimensions of the two and/or three dimensional virtual image formed for a varying and/or constant position, and the transfer of all of this information to the virtual image unit in processed and/or raw form by means of cable and/or wireless communication devices (4.1.1).
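The application does not specify how the gyroscope, accelerometer and magnetometer readings are combined; one common approach such a computer unit could use is a complementary filter, sketched here under that assumption (all names and constants are illustrative):

```python
def fuse_orientation(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # Complementary filter on a single axis: integrate the fast but
    # drifting gyroscope rate, then pull the estimate toward the noisy
    # but drift-free tilt angle derived from the accelerometer
    # (or toward a magnetometer heading, for the yaw axis).
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Run at each sensor update, the estimate follows the gyroscope over short intervals while slowly converging to the accelerometer/magnetometer reference, which suppresses gyroscope drift.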
The global and local positioning devices (4.2) shown in Figure-4 may be used to determine the position of the individual using the system and/or the position of the virtual image unit (4.3) and/or the position of the volume in which the virtual image is created by means of the reference point unit (4.4) in open and closed spaces and/or transmit the same to the computer unit by means of cable and/or wireless communication devices (4.2.3). The system (4) includes two main positioning technologies. These are GPS (global position system) and LPS (local positioning system).
As can be seen in Figure-5, signals (5.3) sent from multiple local positioning and/or global positioning system devices (4.2) acting as signal transmitters are detected by the system (4) described in the invention; local positioning, including in open and/or closed areas, is carried out and the results are transmitted to the computer unit (4.1) by means of cable and/or wireless communication devices (4.2.3). By this means, the positioning of the virtual image unit (4.3) and/or the virtual image volume reference point (4.4) can be carried out at any point. The units which provide cabled and/or wireless, continuous and/or discontinuous data transfer between the virtual image units (4.3) and the computer unit (4.1) perform, in accordance with the requirements, the transfer of the desired analogue and/or digital, one-way and/or two-way image and/or sound data.
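The application leaves the positioning computation itself unspecified; a minimal sketch of how a planar position could be recovered from the distances to three LPS transmitters (trilateration, with illustrative coordinates) is:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    # Planar trilateration: subtracting the three circle equations
    # (x - xi)^2 + (y - yi)^2 = ri^2 pairwise yields two linear
    # equations in the receiver coordinates (x, y).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d  # zero if the transmitters are collinear
    x = (c * e - b * f) / det
    y = (a * f - c * d) / det
    return x, y
```

With transmitters at (0, 0), (10, 0) and (0, 10) and measured ranges to a receiver at (3, 4), the function recovers that position; distances would in practice be estimated from signal timing or strength.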
In Figure-4, the virtual image unit (4.3) may be expressed with principal units integrated and/or dissociated with the other sections (4.1, 4.2 and 4.4). The virtual image unit (4.3) includes cable and/or wireless data communication devices (4.3.1) enabling the desired image and/or audio, analog and/or digital, two-way and/or one-way data transfer to the cable and/or wireless communication devices (4.1.1) found in the computer unit. The data (images to be created virtually) received at the virtual image unit (4.3) from the computer unit by means of the cable and/or wireless communication devices (4.3.1) is processed by the image processing hardware (4.3.2) and formed on the screen (4.3.5.1) as part of the optical image processing hardware (4.3.5). Additionally, in accordance with the type of application used by the system (4) (hand-held computers, mobile phones, desktop computers, televisions, video glasses, etc.), audio broadcasting is carried out over the speaker system (4.3.8) by means of the voice processing hardware (4.3.3). In order to provide ease of use and mobility, the virtual image unit possesses internal power units (4.3.4) and can store energy. The gyroscope (4.3.6), accelerometer (4.3.7) and magnetometer (4.3.11) systems located at the virtual image unit detect small changes in position instantaneously and relatively, and these changes are tracked by the system specified in Figure-4. The position and angle information detected by the various sensors (4.3.6, 4.3.7, 4.3.9, 4.3.10, 4.3.11) is transmitted to the virtual image unit (4.3) and/or the computer unit (4.1). From this comparative data, the position of the virtual image unit (4.3) is detected dependently on and/or independently of the computer unit. The luminance of the virtual image created by the system is adjusted according to the environment in which the device is found, using light sensors (4.3.12) for this purpose.
Moreover, at least one microphone (4.3.13) is included for applications in the system requiring voice detection (voice command, voice recording, etc.).
The section in Figure-4 specified as the virtual image volume reference point unit (4.4) can be expressed as the position wherein the virtual image is formed by the system (4) of the invention; in the event that it is desired for the location of the area in which the image will be created not to be defined and/or specified, a space with at least one degree of freedom is used. This section includes at least one cable and/or wireless communication device enabling communication with the computer unit (4.1). At least one GPS (4.4.2) and/or LPS (4.4.3) unit is included in order to determine the volume in which the image will be formed for cases where positioning is not otherwise carried out and/or desired. At least one light sensor (4.4.4) is included to measure the quantity of light where the image will be formed and to adjust the luminance of the virtual image accordingly. The unit also includes internal power units (4.4.5) and can store energy, to provide ease of use and mobility to the system.
As a result of all of these processes, the virtual image is created with the spatial position, distance and/or varying depth perception desired and the user can see the virtual image by means of the optical image processing hardware (4.3.5).
The optical image processing hardware shown with dashed lines in Figure-15 (4.3.5 in Figure-4 and Figure-15) is formed of the screen (LED, TFT, AMOLED, projection, laser projection, etc.) (4.3.5.1), the semipermeable reflective surface (4.3.5.2) and the lens system. The optical image processing hardware (4.3.5) is the most important hardware enabling the created image to appear as if it actually exists, thereby achieving the objective of the invention.
The effect created by the optical image processing hardware can be compared to an optical reflection that can be seen by everyone who looks out of the window of a public transport vehicle (especially when the light intensity in the internal environment is greater than in the external environment). The windows of the vehicle act as a semipermeable mirror, and the virtual images of the objects within the vehicle (passengers, seats, etc.) are seen as reflections overlapping with the objects outside of the vehicle. In other words, as a result of the objects inside the vehicle reflecting from the window, an individual looking from inside the vehicle sees the virtual image of those objects as if they were outside the vehicle, at the same distance as the distance of the objects to the window. In this case, an individual looking at these virtual images from inside the vehicle is able to see the reflected images outside the vehicle, in the real world. The case described here is an example of how a hologram image can be created.
To describe this in greater detail: a screen (7.1) is placed in front of the semipermeable reflective surface (7.2) that can create the mirror effect shown in Figure-7, with the part of the screen on which the image is displayed facing the reflective semipermeable surface (7.2) at a certain distance (7.4) from it. Additionally, a table (7.3) is placed behind the semipermeable reflective surface (7.2) at a distance (7.5) equal to the distance (7.4) between the screen and the surface. The positions of the objects specified in the example in Figure-7 are preserved so that an individual (1) looking at the semipermeable reflective surface (7.2) in Figure-8 views the virtual image (8.1) and the visible part of the table (8.2) as overlapping even when the line of vision is changed. Figure-9 shows that the image (8.1) cannot be seen when looking from outside the surface (7.2); this is shown as the part of the image that is not visible (9.1). As can be seen from Figure-15, the optical image processing hardware (4.3.5) can fundamentally be examined as three main parts. These are namely the screen (4.3.5.1) on which the image to be created virtually is physically displayed, the semipermeable reflective surface (4.3.5.2) on which the virtual image is physically created, and the optical system (4.3.5.3) enabling the created virtual image (14.1) to be viewed in accordance with changing position, angle and/or depth perception by the individual using the device.
The screen (4.3.5.1), a part of the optical image processing hardware (4.3.5) in Figure-4, is the hardware within the device where the image (14.1) desired to be created virtually, at the desired size and/or perspective, is physically created as a digital and/or analogue image by means of the system subject to the invention shown in Figure-15, and whose reflection is formed at the reflective surface (4.3.5.2).
The part where the physical image created by the screen (4.3.5.1) is converted into a virtual image (14.1) is the semipermeable reflective surface (4.3.5.2). As defined in optics, the semipermeable reflective surface (4.3.5.2) in Figure-15 is a surface that reflects a part of the rays falling on it in accordance with optical laws and allows the rest to pass through it. This behaviour can be explained with the image of an object reflected in a mirror. An object (6.2) is placed in front (6.6) of the mirror (6.1) at a certain distance (6.4) from it. The virtual image (6.3) of the object is formed behind (6.7) the mirror, such that the distance (6.4) between the object and the mirror and the distance (6.5) between the virtual image and the mirror are equal. However, as the mirror is not semipermeable, the objects behind it are not visible. The semipermeable surface (4.3.5.2) defined in this invention acts as a mirror, but as it allows a part of the light falling on it to pass, the objects behind it are also visible.
As shown in Figure-16, for the human eye (16) to see an object (16.1) clearly (imaged on the retina layer (16.3)), it must focus on the distance (16.4) at which the object lies, adjusting the focal point by changing the shape of the lens (16.2) found in the eye. In other words, the rays reflecting from the object must fall sharply on the retina layer (16.3) of the eye. For this reason, it is not possible for two objects at different depths to be viewed clearly at the same time with the naked eye (in Figure-16, a difference can be seen between the distance (16.4) of the object in case (a) and the distance (16.4) of the object in case (b)). An individual who is close to one of two objects lying at a certain distance from each other on a straight line therefore sees the object in the back as indistinct when he/she looks at the nearer object. Examining the case consisting of a screen (7.1), a semipermeable reflective surface (7.2) and a table (7.3) as in Figure-7, it can be seen that when the distance (7.5) of the table to the surface and the distance (7.4) of the screen to the surface are equal, the image (8.1) formed on the semipermeable reflective surface (7.2) appears to the individual (1) as if it were in the same location as the table (7.3), and the individual (1) can see both the virtual image (8.1) and the table (7.3) with the same clarity and as overlapped. However, if the distance (7.5) between the table and the semipermeable reflective surface is increased or decreased while maintaining the distance (7.4) between the screen (7.1) and the semipermeable surface (7.2), a situation such as the one in Figure-10 is observed (one of the two loses clarity). In this event, if the individual (1) focuses on the virtual image (8.1), the individual (1) sees the table (7.3) indistinctly, as in Figure-10, due to the table (7.3) being distant from the image (8.1).
Similarly, when the individual (1) focuses on the table (7.3), he/she will see the virtual image (8.1) as blurry instead. In order for the individual (1) to again see the image (8.1) as if it were in the same place as the table (7.3), either the reflective surface (7.2) needs to be brought closer to or moved away from the table (7.3), or the screen (7.1) needs to be brought closer to or moved away from the reflective surface (7.2). The reason for this is that the position of the image in the mirror is determined in accordance with the optical rules previously explained.
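The distance-matching rule described above can be summarized in a short sketch; the tolerance value below is an assumption standing in for the eye's depth of field, not a figure from the application:

```python
def apparent_image_distance(viewer_to_surface, screen_to_surface):
    # Plane-mirror rule: the reflection appears behind the surface at
    # the same distance as the screen is in front of it, so the eye
    # must focus at the sum of the two distances.
    return viewer_to_surface + screen_to_surface

def seen_equally_sharp(viewer_to_surface, screen_to_surface,
                       table_to_surface, tolerance=0.05):
    # The virtual image and the real table can both be in focus only
    # when they lie at (nearly) the same focal distance from the eye.
    image_d = apparent_image_distance(viewer_to_surface, screen_to_surface)
    table_d = viewer_to_surface + table_to_surface
    return abs(image_d - table_d) <= tolerance
```

When the screen-to-surface and table-to-surface distances match, the image and the table coincide in depth; increasing either distance alone breaks the overlap, as in Figure-10.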
By means of the technology of this invention, the lens system (4.3.5.3) in Figure-15 enables the created virtual image (14.1) to be overlapped with the real image (the table (7.3)) at the distance at which it is desired to display the virtual image, by changing the focal point of the virtual image (14.1) created on the semipermeable reflective surface (4.3.5.2).
This physical feature can be provided in a number of different ways utilizing present technology. The most basic example uses a fixed focus lens system (11.S) with solid lenses.
Figure-11 shows how this lens system operates in its simplest state. The eye (11.12) looks at the lens system (11.S) from in front of the convex lens (11.7) in order to see the object (11.1) behind the lens system. The eye (11.12) sees the final image (11.4) of the object as being further away and smaller than the object.
It is known how the image of objects is formed by concave and convex lenses according to optical laws. The following process is used to construct the final image (11.4) of the object on paper. The first image (11.2) of the object is formed by concave lens number one (11.5). The first image of the object (11.2) acts as the real object for lens number two (11.6), and the second image of the object (11.3) is formed. Similarly for the convex lens (11.7), the second image of the object (11.3) acts as the object and the third image of the object (11.4) is formed. There are two focal points, one on each side of every lens (concave and convex alike). Figure-11 shows the single focal point (11.8) of concave lens number one, the single focal point (11.9) of concave lens number two, and the two focal points (11.10)(11.11) of the convex lens. As the second image of the object (11.3) is formed between the focal point (11.11) of the convex lens and the convex lens (11.7), the final image of the object (11.4) is formed behind the object (11.1) in accordance with optical laws. The eye (11.12) sees this third image (11.4). Moving the object (11.1) in the (-) direction will also move the final image (11.4) in the (-) direction (the movements may be driven by electrical-electronic and/or mechanical and/or pneumatic and/or hydraulic drive systems).
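The successive image construction described above follows directly from the thin-lens equation, 1/so + 1/si = 1/f, applied lens by lens, with each image serving as the object for the next lens. The sketch below is purely illustrative: the focal lengths and spacings are hypothetical values chosen to reproduce the behaviour of Figure-11 (two concave lenses followed by a convex lens), not values taken from the patent.

```python
def thin_lens(so, f):
    """Thin-lens equation: 1/so + 1/si = 1/f.
    so: object distance (positive in front of the lens).
    f:  focal length (negative for a concave/diverging lens).
    Returns si: image distance (negative = virtual image on the object side)."""
    return 1.0 / (1.0 / f - 1.0 / so)

def cascade(obj_dist, elements):
    """Trace an object through thin lenses in sequence.
    elements: list of (focal_length, gap_to_next_lens); the last gap is unused.
    The image of each lens acts as the object for the next lens."""
    so = obj_dist
    si = None
    for f, gap in elements:
        si = thin_lens(so, f)
        so = gap - si  # a virtual image (si < 0) lies further in front of the next lens
    return si          # image distance measured from the last lens

# Hypothetical system: two concave lenses, then a convex lens (as in Figure-11).
final = cascade(30.0, [(-10.0, 5.0), (-10.0, 5.0), (15.0, 0.0)])
# final is negative: a virtual image in front of the convex lens, further away
# than the object -- the eye sees it as more distant, as the text describes.
```

With these example numbers the last intermediate image falls between the convex lens and its focal point, so the final image is virtual and magnified, matching the configuration the text describes.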
Expressed in more detail, the horizontal ray (C1) exiting from the head of the object refracts (C1.1) such that its extension passes through the focal point (11.8) of concave lens number one. The ray (C2) sent from the head of the object to the center point (M1) of concave lens number one intersects with the extension of the refracted ray (C1.1). This intersection forms the first image (11.2) of the object at concave lens number one (11.5).
The first image (11.2) of the object acts as an object for concave lens number two (11.6). Similarly, the horizontal ray (G1) exiting from the head of the first image refracts (G1.1) such that its extension passes through the focal point (11.9) of concave lens number two. The ray (G2) sent from the head of image number one (11.2) to the center point (M2) of concave lens number two intersects with the refracted ray (G1.1). The second image (11.3) of the object is formed as a result of this intersection.
The second image (11.3) of the object acts as an object for the convex lens (11.7). The horizontal ray (K1) exiting from image number two refracts (K1.1) by passing through the focal point (11.10) of the convex lens. The intersection of the ray (K2), sent from the head of image number two (11.3) to the center point (M3) of the convex lens, with the refracted ray (K1.1) forms image number three (11.4) at the point of intersection.
When the semipermeable reflective surface (12.1) that can act as a mirror is placed in front of the lens system (11.S) as shown in Figure-12, the final image (11.4) formed by the lens system acts as an object for the semipermeable reflective surface (12.1). As a result, the image (12.2) is formed by the semipermeable reflective surface. Its position is such that the distance (12.3) of the final image to the semipermeable mirror and the distance (12.4) of the image formed by the semipermeable reflective surface to that surface are equal (as described previously in relation to Figure-7). The final image (12.2) formed by the semipermeable reflective surface and the lens system (12.S) can be seen by the eye (11.12) looking at the semipermeable reflective surface (12.1).
In order to increase the distance (12.4) between the final image of the object reflected from the semipermeable reflective surface and that surface, the final image (11.4) of the object in the lens system needs to be moved further in the (-) direction from the object (11.1). For this reason, concave lens number two (11.6) needs to be moved in the (-) direction. In this situation, the final image (11.4) will become larger and at the same time move in the (-) direction. If the object (11.1) is moved in the (+) direction, the enlargement of the final image remains partially fixed. Thus, in order to change the position and size of the final image (11.4) that is formed, concave lens number two (11.6) and/or the object (11.1) need to be moved in the (+)(-) directions. As a result, changing the position of the final image (11.4) of the lens system changes the position and size of the final image (12.2) reflected from the semipermeable reflective surface. The main logic here is that the focusing distance of the formed image is changed by changing the positions of the lenses and the object.
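The effect of moving concave lens number two can be illustrated numerically with the thin-lens equation. The sketch below uses the same hypothetical focal lengths and spacings as before (they are illustrative assumptions, not the patent's values): shifting the second lens by `delta` toward the object, i.e. in the (-) direction, pushes the final image further away, which in turn moves the reflected image (12.2) further behind the semipermeable reflective surface.

```python
def thin_lens(so, f):
    # 1/so + 1/si = 1/f; f < 0 for a concave lens, si < 0 for a virtual image.
    return 1.0 / (1.0 / f - 1.0 / so)

def final_image(delta):
    """Final image distance of a hypothetical 3-lens system when concave lens
    number two is shifted by `delta` toward the object (the (-) direction)."""
    si1 = thin_lens(30.0, -10.0)                  # concave lens number one
    si2 = thin_lens((5.0 - delta) - si1, -10.0)   # concave lens number two, shifted
    si3 = thin_lens((5.0 + delta) - si2, 15.0)    # convex lens
    return si3  # negative: virtual image in front of the convex lens

# Shifting lens two in the (-) direction moves the final image further away.
near, far = final_image(0.0), final_image(1.0)
```

Both results are negative (virtual images), and the magnitude grows as the lens is shifted, consistent with the behaviour described in the paragraph above.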
As can be seen in Figure-15, the lens system (4.3.5.3) can be defined as an optical regulator that, by means of the optical laws of physics, enables the individual (1) to see the ray (image) received from the screen (4.3.5.1) at varying dimensions and/or varying depth perception relative to the semipermeable reflective surface (4.3.5.2) within the range of vision. This optical regulator for the virtual image (14.1) is driven by the electric-electronic and/or mechanical and/or pneumatic and/or hydraulic drive system (4.3.5.4) located inside the lens system (4.3.5.3).
When the lens system (11.S) shown in Figure-11 is placed between the semipermeable reflective surface (7.2) and the screen (7.1) of Figure-10, the resulting situation is shown in Figure-13. In this new situation, the screen (7.1) in Figure-13 acts as the object in place of the object in Figure-12. This new situation is theoretically schematized in Figure-12 and shown physically in Figure-13.
Figure-13 shows the case in which the distance (7.5) between the table and the semipermeable reflective surface in Figure-7 is increased. The increased distance between the table and the semipermeable reflective surface (7.2) in Figure-13 is shown as (13.1), and this figure includes the lens system box (11.K) containing the lens system (11.S) theoretically schematized in Figure-11.
In Figure-10, the individual (1) looking at the image (8.1) without the lens system sees the image (8.1) clearly, while seeing the table (7.3) as indistinct when he/she focuses on the image (8.1), since the table (7.3) is further away. The distance (12.4) in Figure-12 between the final image formed by the semipermeable surface and the lens system (12.S) and the semipermeable reflective surface, and the distance (13.1) in Figure-13 between the table and the reflective surface, are set equal to each other by prior adjustment through movement of the mobile concave lens (11.6) in the (-) direction.
In Figure-16, for the image of the object (16.1) to fall onto the retina (16.3) and be visible to the eye (16), the lens (16.2) needs to focus on the object. The way in which the eye focuses when the object (16.1) is distant (Figure-16(b)) or near (Figure-16(a)) is illustrated in Figure-16. Similarly, in Figure-14, which is the view from the eye of the user, when the individual (1) looking at the semipermeable reflective surface focuses on the virtual image (14.1) formed by the lens system at the distance (13.1) between the table and the reflective surface, he/she is able to see the image of the object (7.3) positioned at the same distance with the same clarity.
This is realized by having the computer system (4.1) continuously track the position of the virtual image unit (4.3) and/or the virtual image volume reference point unit (4.4), and by having the system (4) continuously track and update the position of the image so that the focusing distance can be changed and/or the angle and/or dimension of the image on the screen (4.3.5.1) can be changed. It is also possible to use other systems similar to the lens system mentioned here (lens systems containing liquid with a changeable focal point [focus liquid lens], etc.) to form the image of the object closer to or more distant from its actual position. All parts and sections of the system shown in Figure-4 as separate and independent from each other are interchangeable and/or may be present at the same time.
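One iteration of the track-and-update behaviour described above can be sketched as follows. Everything here is a hypothetical illustration, not the patent's implementation: the function names are invented, the distance comes from the tracked positions of the virtual image unit (4.3) and the reference point unit (4.4), and the constant-angular-size scaling law is one plausible choice for how the image dimension might be updated.

```python
import math

def tracked_distance(unit_pos, reference_pos):
    # Euclidean distance between the virtual image unit (4.3) and the
    # reference point unit (4.4), computed from their positioning data
    # (GPS/LPS coordinates, here as plain (x, y, z) tuples).
    return math.dist(unit_pos, reference_pos)

def update_display(unit_pos, reference_pos, base_size, base_dist):
    """One update step: return the focus distance to which the lens system
    (4.3.5.3) should be driven, and the rescaled on-screen image size.
    Scaling by base_dist / d keeps the angular size of the virtual image
    constant (an assumption; the patent only requires size/angle updates)."""
    d = tracked_distance(unit_pos, reference_pos)
    scale = base_dist / d
    return d, base_size * scale

# Hypothetical positions: the reference point is 5 m away from the unit.
focus, size = update_display((0.0, 0.0, 0.0), (3.0, 4.0, 0.0),
                             base_size=1.0, base_dist=10.0)
```

In a running system this step would be repeated continuously, with the drive system (4.3.5.4) adjusting the lens system toward each new focus distance.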
The technology of the invention may be used in various applications in many different ways, dependent on and/or independent from many other technologies, and enables a three and/or two dimensional virtual image to be seen in a local environment whose position and/or form is previously determined and/or not determined. The most basic application of this invention is the previously mentioned wearable video glasses. By means of the technology of the invention, a dynamic image is obtained in these systems and the user is able to see the formed image at different distances, different dimensions and/or different depth perception, mobile and/or immobile, in accordance with at least one reference point with varying and/or stationary position.
In the application of the invention, each eye should see a separate image in order for a stereoscopic image to be formed. For this reason, the system of the invention needs to use at least one optical image processing hardware (4.3.5) for each eye, forming dependent and/or independent images for each eye of the user and enabling the eyes of the user to see them separately. By this means, each eye of the user sees the independent image it must see, and the system of the invention is able to create a stereoscopic image.
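The per-eye images differ mainly by a horizontal offset (disparity). A minimal sketch of how such offsets could be computed, assuming each eye has its own screen plane centered on that eye's forward axis, is given below; the interpupillary distance, screen distance, and the similar-triangles model are illustrative assumptions, not specifics from the patent.

```python
def per_eye_offsets(depth, ipd=0.065, screen_dist=1.0):
    """Horizontal image offsets (left_eye, right_eye) that place a point on the
    midline at `depth` metres in front of the viewer.
    ipd: interpupillary distance (m); screen_dist: distance of each eye's
    screen plane (m). By similar triangles, each eye's ray to the point
    crosses its screen plane at (ipd/2) * (screen_dist/depth) from the axis."""
    shift = (ipd / 2.0) * (screen_dist / depth)
    # Left-eye image shifts right (+), right-eye image shifts left (-),
    # so the two lines of sight converge on the point.
    return (+shift, -shift)

# A nearer point needs a larger convergent disparity than a distant one:
near_l, near_r = per_eye_offsets(depth=2.0)
far_l, far_r = per_eye_offsets(depth=50.0)
```

As the depth grows, the offsets shrink toward zero, so very distant virtual objects appear nearly identical to both eyes, which matches everyday stereoscopic vision.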
In the technology of the invention, application to known head-up display systems (in cars, planes, etc.) enables the three and/or two dimensional virtual image to be formed as if it actually exists at different distances, different dimensions and/or different depth perception, mobile and/or immobile, in accordance with at least one reference point with varying and/or stationary position; by this means, information for cars such as GPS, speed, warnings, advertisements and the like can be displayed without distracting the driver. This application may similarly be applied in all other transportation vehicles.
The invention may find application in many other similar uses, whether or not specified in the above examples.

Claims

1. The invention is the virtual image formation system consisting of electrical and/or electronic and/or mechanical and/or pneumatic and/or hydraulic hardware enabling the virtual image to be displayed using optical laws in a physical environment of which the position and/or form is previously determined and/or not determined; and it is characterized by an at least two and/or three dimensional imaging system (4) enabling a virtual image formed as if actually there as being mobile or immobile in accordance with at least one reference point with varying and/or stationary position to be displayed at a varying distance to a reference point, at varying dimensions and/or varying depth perception and is characterized in that it comprises:
• global and local positioning devices (4.2), integrated and/or separate computer unit (4.1), integrated and/or separate virtual image unit (4.3), virtual image volume reference point unit (4.4) that is separate and/or integrated with other parts (4.1, 4.2, 4.3) used in marking in at least one single degree of freedom space wherein the positioning of the area in which it is not desired to define the positioning the image is to be created in and/or the same cannot be provided,
• at least one cable and/or wireless communication device (4.1.1) on the mentioned computer unit (4.1) that enables communication with other parts (4.1, 4.2, 4.3, 4.4) of the system (4), at least one positioning algorithm (4.1.2) used for the assessment of the global and local positioning information received from at least a GPS (4.3.9) and at least an LPS (4.3.10) at the virtual image unit (4.3) and the assessment of axial rotation information received from at least one gyroscope (4.3.6) on at least one axis, calculation of the acceleration information received from at least one accelerometer (4.3.7) in at least one direction, magnetic positioning information received from at least one magnetic compass unit (4.3.11), the positioning information received from at least one LPS (4.4.3) and at least one GPS (4.4.2) of the virtual image volume reference point unit (4.4) that is separate and/or integrated with other parts (4.1, 4.2, 4.3) used in marking in at least one single degree of freedom space wherein the positioning of the area in which it is not desired to define the positioning the image is to be created in and/or the same cannot be provided, the image processing software (4.1.3) that determines the virtual image to be created with and/or without positioning information,
• at least one cable and/or wireless communication device (4.2.3) enabling communication of the mentioned global and local positioning devices (4.2) with the other parts (4.1, 4.3, 4.4),
• the global and local positioning information taken from at least one LPS (4.3.10) and at least one GPS (4.3.9) for the assessment of the information taken from the global and local positioning devices (4.2) on the mentioned virtual image unit (4.3), axial rotation information received from at least one gyroscope (4.3.6) on at least one axis, acceleration information received from at least one accelerometer (4.3.7) in at least one direction, magnetic positioning information received from at least one magnetic compass unit (4.3.11), at least one light sensor (4.3.12) to adjust the light density of the virtual image to be created in accordance with the environment the device is found in and to assess the light quantity of the environment, at least one cable and/or wireless communication device (4.3.1) to enable communication with the computer unit (4.1), at least one optical image processing hardware (4.3.5) to enable the formation of the virtual image, at least one microphone (4.3.13) for applications requiring voice detection (voice command, voice recording, etc.), at least one voice processing hardware (4.3.3) for audio broadcasting, and an internal power unit (4.3.4) that can store energy for the electrical and electronic hardware within the virtual image unit (4.3.1, 4.3.2, 4.3.3, 4.3.5, 4.3.6, 4.3.7, 4.3.8, 4.3.9, 4.3.10, 4.3.11, 4.3.12, 4.3.13) to operate and to provide ease and mobility to the unit,
• at least one screen (4.3.5.1) that enables the virtual image formed inside the mentioned optical image processing hardware (4.3.5) to be physically created, at least one semipermeable reflective surface (4.3.5.2) that enables the user to see the created virtual image, at least one lens system (4.3.5.3), solid and/or containing liquid inside, that enables the created virtual image (14.1) to be seen with varying position, angle and depth perception according to the individual using the system, and an electric-electronic and/or mechanical and/or pneumatic and/or hydraulic drive system (4.3.5.4) located within the optical image processing hardware (4.3.5) to enable the lens system (4.3.5.3) to be adjusted with varying focusing distance,
• at least one LPS (4.4.3) and at least one GPS (4.4.2) within the mentioned virtual image volume reference point unit (4.4) that is separate and/or integrated with other parts (4.1, 4.2, 4.3) used in marking in at least one single degree of freedom space wherein the positioning of the area in which it is not desired to define the positioning the image is to be created in and/or in which the same cannot be provided, a light sensor (4.4.4) to calculate the quantity of light where the reference point unit (4.4) is located and to increase or decrease the quantity of light of the virtual image to be created, cable and/or wireless communication devices (4.4.1) to enable the transfer of this information to the other parts (4.1, 4.2, 4.3), and internal power units (4.4.5) that can store energy to provide ease and mobility to the system and to enable the electric and electronic hardware (4.4.1, 4.4.2, 4.4.3, 4.4.4) within the virtual image volume reference point unit to operate.
2. Along with the depth perception which may vary, the invention according to Claim 1 comprises an at least two and/or three dimensional virtual image formation system characterized in that it comprises at least one optical image processing hardware (4.3.5) for each eye within the left (1.1) and right (1.2) fields of vision in order to form a stereoscopic and/or autostereoscopic image.
PCT/TR2014/000180 2013-05-29 2014-05-21 System for forming a virtual image WO2014193326A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR2013/06402 2013-05-29
TR201306402 2013-05-29

Publications (1)

Publication Number Publication Date
WO2014193326A1 true WO2014193326A1 (en) 2014-12-04

Family

ID=51205553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2014/000180 WO2014193326A1 (en) 2013-05-29 2014-05-21 System for forming a virtual image

Country Status (1)

Country Link
WO (1) WO2014193326A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013049248A2 (en) * 2011-09-26 2013-04-04 Osterhout Group, Inc. Video display modification based on sensor input for a see-through near-to-eye display


Similar Documents

Publication Publication Date Title
JP6433914B2 (en) Autostereoscopic augmented reality display
US9881422B2 (en) Virtual reality system and method for controlling operation modes of virtual reality system
US7091931B2 (en) Method and system of stereoscopic image display for guiding a viewer's eye motion using a three-dimensional mouse
JP6165170B2 (en) 3D display system
US9230500B2 (en) Expanded 3D stereoscopic display system
CN110968188A (en) Head position based application placement
US20110141246A1 (en) System and Method for Producing Stereoscopic Images
CN205195880U (en) Watch equipment and watch system
WO2011102136A1 (en) Three-dimensional display system and three-dimensional viewing glasses
US11762197B2 (en) Display systems with geometrical phase lenses
CN113302547A (en) Display system with time interleaving
US10464482B2 (en) Immersive displays
JP6712557B2 (en) Stereo stereoscopic device
WO2003073738A2 (en) Method and system for controlling a stereoscopic camera
US11822083B2 (en) Display system with time interleaving
WO2014193326A1 (en) System for forming a virtual image
WO2017208148A1 (en) Wearable visor for augmented reality
CN209086560U (en) Augmented reality 3 d display device
JP4399789B2 (en) 3D image viewing glasses
US11927761B1 (en) Head-mounted display systems
US20240104823A1 (en) System and Method for the 3D Thermal Imaging Capturing and Visualization
Foster VR visual display systems
CN105785579A (en) Stereoscopic projection display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14739587

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14739587

Country of ref document: EP

Kind code of ref document: A1