CN116783644A - Space suspension image display device - Google Patents
- Publication number
- CN116783644A (application CN202180086904.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- spatially
- display device
- light
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides an improved spatially floating image display device, contributing to the Sustainable Development Goals "3: Good health and well-being", "9: Industry, innovation and infrastructure", and "11: Sustainable cities and communities". The spatially floating image display device includes: a display device that displays an image; a retro-reflective member that reflects image light from the display device, the reflected light forming a spatially floating image in the air; a sensor that detects a touch operation performed by a finger of a user on one or more objects displayed in the spatially floating image; and a control unit. When the user touches an object, the control unit assists the user with the touch operation based on the detection result of the touch operation obtained by the sensor.
Description
Technical Field
The present invention relates to a spatial floating image display device.
Background
As spatially floating information display systems, image display devices that display an image directly toward the outside and display methods that present the image as an in-air screen are known. Further, for example, patent document 1 discloses a detection system capable of reducing erroneous detection of an operation on the operation surface of a displayed aerial image.
Prior art literature
Patent literature
Patent document 1: japanese patent laid-open publication No. 2019-128722
Disclosure of Invention
Technical problem to be solved by the application
However, a touch operation on a spatially floating image is not performed on a physical button, touch panel, or the like. Therefore, there are cases where the user cannot tell whether the touch operation has been performed.
The present application aims to provide an improved spatially floating image display device.
Technical means for solving the problems
The present application includes various means for solving the above-mentioned problems. As one example, a spatially floating image display device may be configured to include: a display device that displays an image; a retro-reflective member that reflects image light from the display device and forms a spatially floating image in the air using the reflected light; a sensor that detects the position of a finger of a user performing a touch operation on one or more objects displayed in the spatially floating image; and a control unit configured to control image processing of the image to be displayed on the display device based on the position of the user's finger detected by the sensor, thereby displaying a virtual shadow of the user's finger on the display surface of the spatially floating image, where no physical contact surface exists.
Effects of the invention
According to the invention, an improved spatially floating image display device can be realized. Other technical problems, features, and effects will become apparent from the following description of the embodiments.
Drawings
Fig. 1A is a diagram showing an example of a usage pattern of the spatial floating image display device according to an embodiment of the present invention.
Fig. 1B is a diagram showing an example of a usage pattern of the spatial floating image display device according to an embodiment of the present invention.
Fig. 2A is a diagram showing an example of the main part structure and the retro-reflective part structure of the spatially floating image display device according to the embodiment of the present invention.
Fig. 2B is a diagram showing an example of the main part structure and the retro-reflective part structure of the spatially floating image display device according to the embodiment of the present invention.
Fig. 3A is a diagram showing an example of a method of setting the spatial floating image display device.
Fig. 3B is a diagram showing another example of the method for setting the spatially suspended image display device.
Fig. 3C is a diagram showing a configuration example of the spatially suspended image display device.
Fig. 4 is a diagram showing another example of the main part structure of the spatial floating image display device according to the embodiment of the present invention.
Fig. 5 is an explanatory diagram for explaining the function of the sensor device used in the spatial floating image display device.
Fig. 6 is an explanatory diagram of a principle of three-dimensional image display used in the spatially suspended image display device.
Fig. 7 is an explanatory diagram of a measurement system for evaluating characteristics of a reflective polarizer.
Fig. 8 is a characteristic diagram showing transmittance characteristics corresponding to the incident angle of light with respect to the transmission axis of the reflective polarizer.
Fig. 9 is a characteristic diagram showing transmittance characteristics corresponding to the incident angle of light with respect to the reflection axis of the reflective polarizer.
Fig. 10 is a characteristic diagram showing transmittance characteristics corresponding to the incident angle of light with respect to the transmission axis of the reflective polarizer.
Fig. 11 is a characteristic diagram showing transmittance characteristics corresponding to the incident angle of light with respect to the reflection axis of the reflective polarizer.
Fig. 12 is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 13 is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 14 is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 15 is a configuration diagram showing a main part of a spatially suspended image display device according to an embodiment of the present invention.
Fig. 16 is a cross-sectional view showing the structure of a display device according to an embodiment of the present invention.
Fig. 17 is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 18A is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 18B is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 19A is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 19B is a cross-sectional view showing an example of a specific structure of the light source device.
Fig. 20 is an explanatory diagram for explaining light source diffusion characteristics of the image display apparatus.
Fig. 21A is an explanatory diagram for explaining the diffusion characteristics of the image display apparatus.
Fig. 21B is an explanatory diagram for explaining the diffusion characteristics of the image display apparatus.
Fig. 22A is an explanatory diagram for explaining the diffusion characteristics of the image display apparatus.
Fig. 22B is an explanatory diagram for explaining the diffusion characteristics of the image display apparatus.
Fig. 23 is a cross-sectional view showing the structure of the image display device.
Fig. 24 is an explanatory diagram for explaining the generation principle of the ghost image in the related art.
Fig. 25 is a cross-sectional view showing the structure of a display device according to an embodiment of the present invention.
Fig. 26 is a diagram illustrating an example of display of a display device according to an embodiment of the present invention.
Fig. 27A is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 27B is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 28A is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 28B is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 29A is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 29B is a diagram illustrating an example of a touch operation supporting method using virtual shadows.
Fig. 30A is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 30B is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 31A is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 31B is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 32A is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 32B is a diagram illustrating another example of a touch operation supporting method using virtual shadows.
Fig. 33 is a diagram illustrating a method of setting a virtual light source.
Fig. 34 is a configuration diagram showing an example of a method for detecting the position of a finger.
Fig. 35 is a configuration diagram showing another example of a method for detecting the position of a finger.
Fig. 36 is a configuration diagram showing another example of a method for detecting the position of a finger.
Fig. 37 is a diagram illustrating a method of assisting a touch operation by displaying input content.
Fig. 38 is a diagram illustrating a method of assisting a touch operation by highlighting input contents.
Fig. 39 is a diagram illustrating an example of a method of assisting a touch operation by vibration.
Fig. 40 is a diagram illustrating another example of a touch operation supporting method using vibration.
Fig. 41 is a diagram illustrating another example of a touch operation supporting method by vibration.
Fig. 42A is a diagram illustrating an example of a display of a spatially-suspended image according to an embodiment of the present invention.
Fig. 42B is a diagram illustrating an example of a display of a spatially-suspended image according to an embodiment of the present invention.
Fig. 43 is a diagram illustrating an example of the structure of a spatially floating image display device according to an embodiment of the present invention.
Fig. 44 is a diagram illustrating an example of a configuration of a part of a spatially suspended image display device according to an embodiment of the present invention.
Fig. 45 is a diagram illustrating an example of a display of a spatially-suspended image according to an embodiment of the present invention.
Fig. 46 is a diagram illustrating an example of the structure of a spatially floating image display device according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. However, the present invention is not limited to the description of the embodiments, and those skilled in the art can make various changes and modifications within the scope of the technical ideas disclosed in this specification. In the drawings for explaining the present invention, portions having the same function are given the same reference numerals, and repeated description thereof may be omitted. In the following description of the embodiments, the term "spatially floating image" is used to express an image floating in space. This term may be replaced by "aerial image", "in-air image", "aerial floating optical image of a display image", or the like; the term "spatially floating image" used in the description of the embodiments stands as a representative example of these terms.
The following embodiments relate to an image display system capable of transmitting an image formed by image light from an image light source through a transparent member such as glass for partitioning a space, and displaying the image as a spatially floating image outside the transparent member.
According to the following embodiments, a good image display device can be realized in, for example, a bank ATM, a station ticket vending machine, digital signage, or the like. For example, touch panels are commonly used in bank ATMs, station ticket vending machines, and the like, but a transparent glass surface or a translucent plate may be used instead, and high-resolution image information can be displayed in a spatially floating state relative to the glass surface or translucent plate. In this case, the divergence angle of the outgoing image light is kept narrow and the light is unified into a specific polarization, so that only the light normally reflected by the retro-reflective member is used, and it is used efficiently. The light utilization efficiency is therefore high, the ghost images that appear alongside the main spatially floating image and have been a problem in conventional retro-reflection systems can be suppressed, and a clear spatially floating image is obtained. In addition, a device including the light source of the present embodiment can provide a novel spatially floating image display device (spatially floating image display system) with excellent usability that greatly reduces power consumption. Further, for example, a vehicular floating image display device can be provided that displays a so-called unidirectional floating image viewable inside and/or outside the vehicle. In the following examples, a plate-shaped retroreflective member may be used in any case; in that case, the retroreflective member may also be referred to as a retroreflective sheet.
On the other hand, the related art combines a high-resolution color display image source 150, such as an organic EL panel or a liquid crystal panel, with a retro-reflective member 151. In the related art, since the image light diverges widely, image light obliquely incident on the retro-reflective member 2a generates ghost images 301 and 302, as shown in fig. 24, in addition to the light normally reflected on the retro-reflective member 151, and the image quality of the spatially floating image is impaired. In addition, as shown in fig. 23, a plurality of ghost images such as a first ghost image 301 and a second ghost image 302 are generated besides the normal spatially floating image 300. Ghost images showing the same content as the spatially floating image can therefore be seen by persons other than the intended viewer, which is a serious problem in terms of security.
<Spatially floating image display device>
Fig. 1A and 1B are diagrams showing an example of a usage pattern of the spatially floating image display device according to an embodiment of the present invention, and show the overall configuration of the device. The specific structure of the spatially floating image display device will be described in detail with reference to figs. 2A and 2B; in outline, light having narrow-angle directivity and a specific polarization is emitted from the image display device 1 as an image light flux, enters the retroreflective member 2, is retro-reflected, passes through the transparent member 100 (glass or the like), and forms a real aerial image (spatially floating image 3) on the outer side of the glass surface.
In stores and the like, a show window (also referred to as "window glass") 105 made of a translucent member such as glass partitions the space. According to the spatially floating image display device of the present embodiment, floating images can be displayed unidirectionally through such a transparent member toward the outside and/or inside of the store (space).
In fig. 1A, the inner side (inside the store) of the window glass 105 is toward the back, and the outer side (for example, a sidewalk) is toward the front. Conversely, by providing the window glass 105 with a means for reflecting a specific polarization, the light can be reflected so that an aerial image is formed at a desired position inside the store.
Fig. 1B is a schematic block diagram showing the structure of the display device 1. The display device 1 includes a video display unit that displays the original image of the aerial image, a video control unit that converts an input video according to the resolution of the panel, and a video signal receiving unit that receives a video signal. The video signal receiving unit supports wired input signals such as HDMI (High-Definition Multimedia Interface) input and wireless input signals such as Wi-Fi (Wireless Fidelity), so the device can function on its own as a video receiving/display device and can display video information from a tablet, smartphone, or the like. Furthermore, if a stick PC or the like is connected, capabilities such as calculation processing and image analysis processing can also be provided.
Fig. 2A and 2B are diagrams showing an example of the main part structure and the retro-reflection part structure of the spatially floating image display device according to the embodiment of the present invention. The structure of the spatially floating image display device will be described in more detail with reference to figs. 2A and 2B. As shown in fig. 2A, the display device 1, which emits image light of a specific polarization with narrow-angle divergence, is disposed obliquely with respect to a transparent member 100 such as glass. The display device 1 includes a liquid crystal display panel 11 and a light source device 13 that generates light of a specific polarization having narrow-angle diffusion characteristics.
The image light of the specific polarization from the display device 1 is reflected by the polarization separation member 101 provided on the transparent member 100 (in the figure, the polarization separation member 101 is formed as a sheet and attached to the transparent member 100), which has a film that selectively reflects image light of the specific polarization, and enters the retro-reflective member 2. A λ/4 wave plate 21 is provided on the image-light incident surface of the retro-reflective member 2. The image light passes through the λ/4 wave plate 21 twice, once on entering and once on leaving the retroreflective member 2, and is thereby polarization-converted from the specific polarization to the other polarization. Here, the polarization separation member 101, which selectively reflects image light of the specific polarization, has the property of transmitting light of the other polarization, so the polarization-converted image light passes through the polarization separation member 101. The image light transmitted through the polarization separation member 101 forms a real spatially floating image 3 on the outside of the transparent member 100.
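The 90° polarization rotation produced by this double pass through the λ/4 wave plate can be checked numerically with Jones calculus. The sketch below is illustrative only and is not part of the patent text: it models the retro-reflection as polarization-preserving in a fixed frame, an idealization of a real retro-reflective member.

```python
import numpy as np

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with its fast axis at angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.array([[1, 0], [0, 1j]])  # pi/2 phase lag on the slow axis
    return rot @ retarder @ rot.T

# Image light of the "specific polarization": horizontal linear polarization
e_in = np.array([1.0, 0.0])

# Fast axis at 45 degrees to the polarization axis
qwp = quarter_wave_plate(np.pi / 4)

# In, retro-reflect (idealized as the identity), and out again: two passes total
e_out = qwp @ qwp @ e_in

print(np.round(e_out, 6))  # -> [0, 1] (up to phase): the orthogonal polarization
```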
In addition, the light forming the spatially floating image 3 is a collection of rays converging from the retro-reflective member 2 onto the optical image of the spatially floating image 3, and these rays travel straight on after passing through that optical image. The spatially floating image 3 is therefore an image with high directivity, unlike the diffuse image light formed on a screen by a general projector or the like. Consequently, in the configuration of figs. 2A and 2B, a user viewing from the direction of arrow A sees the spatially floating image 3 as a bright image, whereas another person viewing from the direction of arrow B cannot see the spatially floating image 3 as an image at all. This characteristic is well suited to systems that display images requiring high security, or images that should be kept secret from a person facing the user.
In addition, depending on the performance of the retro-reflective member 2, the polarization axis of the reflected image light may not be uniform. In this case, a part of the image light whose polarization axes are not uniform is reflected by the polarization separation member 101 and returned to the display device 1. The light is reflected again on the image display surface of the liquid crystal display panel 11 constituting the display device 1, and there is a possibility that ghost images are generated and the image quality of the spatially suspended image is lowered.
Thus, in this embodiment, the absorption type polarizing plate 12 is provided on the image display surface of the display device 1. The image light emitted from the display device 1 is transmitted through the absorption-type polarizing plate 12, and the reflection light returned from the polarization separation member 101 is absorbed by the absorption-type polarizing plate 12, whereby the re-reflection can be suppressed. Thus, degradation of image quality due to ghost images of the spatially suspended image can be prevented.
The polarization separation member 101 may be formed of, for example, a reflective polarizer, a metal multilayer film for reflecting a specific polarization, or the like.
Next, fig. 2B shows the typical surface shape of a retroreflective member manufactured by Nippon Carbide Industries Co., Ltd. used in the present embodiment. The member consists of regularly arranged hexagonal prisms; light entering the member is reflected by the wall surfaces and bottom surfaces of the hexagonal prisms, is emitted as retro-reflected light in the direction corresponding to the incident light, and thereby displays a real spatially floating image based on the image displayed on the display device 1.
The resolution of the spatially floating image depends greatly not only on the resolution of the liquid crystal display panel 11 but also on the outer diameter D and the pitch P of the retro-reflective portions of the retro-reflective member 2 shown in fig. 2B. For example, when a 7-inch WUXGA (1920×1200 pixels) liquid crystal display panel is used, 1 pixel (1 triplet) is about 80 μm, so if the diameter D of the retro-reflective portion is 240 μm and the pitch P is 300 μm, 1 pixel of the spatially floating image corresponds to 300 μm. The effective resolution of the spatially floating image is therefore reduced to about 1/3.
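The figures quoted in this paragraph can be verified with a short calculation (the values below are taken directly from the text; the script itself is only an illustration):

```python
# Worked check of the resolution figures quoted above.
panel_pixels = (1920, 1200)      # 7-inch WUXGA liquid crystal display panel
pixel_pitch_um = 80              # approximate size of 1 pixel (1 triplet)
retro_diameter_um = 240          # outer shape D of one retro-reflective portion
retro_pitch_um = 300             # pitch P of the retro-reflective portions

# One retro-reflective portion reproduces one "pixel" of the floating image,
# so the effective pixel pitch of the floating image equals the pitch P.
effective_pitch_um = retro_pitch_um
ratio = pixel_pitch_um / effective_pitch_um

print(f"effective resolution is about {ratio:.2f} of the panel's")  # ~0.27, i.e. about 1/3
```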
To make the resolution of the spatially floating image equal to that of the display device 1, it is desirable to bring the diameter and pitch of the retro-reflective portions close to 1 pixel of the liquid crystal display panel. To suppress moire between the retroreflective member and the pixels of the liquid crystal display panel, on the other hand, the ratio of the two pitches may be designed to deviate from an integer multiple of 1 pixel. The shape may also be arranged so that no side of a retro-reflective portion overlaps any side of 1 pixel of the liquid crystal display panel.
The retroreflective member may be molded by a roll press method for inexpensive production. Specifically, in this method of arranging and shaping the retroreflective portions on a film, the inverse of the shape to be formed is machined into the surface of a roll, an ultraviolet-curable resin is applied onto a base film and passed between the rolls to impart the necessary shape, and the resin is then cured by ultraviolet irradiation, yielding a retroreflective member 2 of the desired shape.
Method for setting up the spatially floating image display device
Next, a method for setting the spatial floating image display apparatus will be described. The spatial floating image display apparatus can freely change the setting method according to the use mode. Fig. 3A is a diagram showing an example of a method of setting the spatial floating image display device. The spatially suspended image display device shown in fig. 3A is laterally arranged so that the surface on the side on which the spatially suspended image 3 is formed faces upward. That is, in fig. 3A, the spatially suspended image display device is disposed such that the transparent member 100 faces upward, and the spatially suspended image 3 is formed above the spatially suspended image display device.
Fig. 3B is a diagram showing another example of the method for setting the spatially floating image display device. The spatially floating image display device shown in fig. 3B is disposed vertically so that the surface on the side where the spatially floating image 3 is formed faces sideways (toward the user 230). That is, in fig. 3B, the device is disposed with the transparent member 100 facing sideways, and the spatially floating image 3 is formed to the side of the device (in the direction of the user 230).
Structure of the spatially floating image display device
Next, the structure of the spatial floating image display apparatus 1000 will be described. Fig. 3C is a block diagram showing an example of the internal structure of the spatial floating image display apparatus 1000.
The spatial floating image display apparatus 1000 includes a retro-reflection unit 1101, an image display unit 1102, a light guide 1104, a light source 1105, a power supply 1106, an operation input unit 1107, a nonvolatile memory 1108, a memory 1109, a control unit 1110, an image signal input unit 1131, an audio signal input unit 1133, a communication unit 1132, an air operation detection sensor 1351, an air operation detection unit 1350, an audio output unit 1140, an image control unit 1160, a storage unit 1170, an imaging unit 1180, and the like.
The components of the spatially suspended image display device 1000 are disposed in a housing 1190. The imaging unit 1180 and the air operation detection sensor 1351 shown in fig. 3C may be provided outside the housing 1190.
The retro-reflection unit 1101 of fig. 3C corresponds to the retro-reflective member 2 of figs. 2A and 2B. The retro-reflection unit 1101 retro-reflects the light modulated by the video display unit 1102. The spatially floating image 3 is formed by the portion of this retro-reflected light that is output to the outside of the spatially floating image display device 1000.
The image display unit 1102 of fig. 3C corresponds to the liquid crystal display panel 11 of fig. 2A. The light source 1105 of fig. 3C corresponds to the light source device 13 of fig. 2A. The image display unit 1102, the light guide 1104, and the light source 1105 of fig. 3C correspond to the display device 1 of fig. 2A.
The video display unit 1102 is a display unit that generates a video by modulating transmitted light based on a video signal input under the control of the video control unit 1160 described later. As the video display unit 1102, for example, a transmissive liquid crystal panel is used; alternatively, a reflective liquid crystal panel or a DMD (Digital Micromirror Device: registered trademark) panel that modulates reflected light can be used.
The light source 1105 generates light for the video display unit 1102 and is a solid-state light source such as an LED light source or a laser light source. The power supply 1106 converts AC current input from the outside into DC current and powers the light source 1105; it also supplies the necessary DC current to each unit of the spatially floating image display device 1000.
The light guide 1104 guides the light generated by the light source 1105 to irradiate the image display unit 1102 with the light. The combination of the light guide 1104 and the light source 1105 may also be referred to as a backlight of the image display unit 1102. Various ways are conceivable as a combination of the light guide 1104 and the light source 1105. Specific structural examples of the combination of the light guide 1104 and the light source 1105 will be described in detail later.
The air operation detection sensor 1351 is a sensor that detects the operation of the finger of the user 230 on the spatially suspended image 3. The air operation detection sensor 1351 senses, for example, a range overlapping the entire display range of the spatially suspended image 3. The air operation detection sensor 1351 may sense only a range overlapping with at least a part of the display range of the spatially suspended image 3.
Specific examples of the air operation detection sensor 1351 include a distance sensor configured by using invisible light such as infrared light, invisible light laser light, ultrasonic waves, or the like. The air operation detection sensor 1351 may be configured to be capable of detecting coordinates of a two-dimensional plane by combining a plurality of sensors. The air operation detection sensor 1351 may be constituted by LiDAR (Light Detection and Ranging) of the ToF (Time Of Flight) system or an image sensor.
The air operation detection sensor 1351 may be capable of sensing to detect a touch operation of a user's finger on an object (object) displayed as the spatially suspended image 3. Such sensing can also be performed using existing techniques.
The air operation detection unit 1350 acquires a sensing signal from the air operation detection sensor 1351, determines based on that signal whether the finger of the user 230 has contacted an object in the spatially floating image 3, and calculates the position (contact position) at which the finger contacts the object, among other things. The air operation detection unit 1350 is composed of a circuit such as an FPGA (Field Programmable Gate Array). Part of the functions of the air operation detection unit 1350 may also be implemented in software, for example by a spatial operation detection program executed by the control unit 1110.
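As a rough illustration of the judgment the air operation detection unit 1350 is described as making, the following sketch tests a sensed fingertip position against the display plane and the object layout. All names, units, and the tolerance value are hypothetical; the patent does not specify the algorithm.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class ObjectRect:
    x: float  # object bounds on the floating-image plane, in mm
    y: float
    w: float
    h: float

def detect_touch(finger_xyz: Tuple[float, float, float],
                 objects: Sequence[ObjectRect],
                 plane_z: float = 0.0,
                 tolerance_mm: float = 5.0) -> Optional[int]:
    """Return the index of the touched object, or None if nothing is touched.

    finger_xyz is the fingertip position from the sensing signal, with z
    measured perpendicular to the display surface of the floating image.
    """
    x, y, z = finger_xyz
    if abs(z - plane_z) > tolerance_mm:
        return None  # the finger has not reached the (virtual) display surface
    for i, r in enumerate(objects):
        if r.x <= x <= r.x + r.w and r.y <= y <= r.y + r.h:
            return i  # the contact position falls inside this object
    return None

buttons = [ObjectRect(0, 0, 40, 20), ObjectRect(50, 0, 40, 20)]
print(detect_touch((60.0, 10.0, 2.0), buttons))  # -> 1 (second object touched)
```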
The air operation detection sensor 1351 and the air operation detection unit 1350 may be built into the spatially floating image display device 1000, or may be provided externally, separate from it. When provided separately, they are configured to be able to transmit information and signals to the spatially floating image display device 1000 via a wired or wireless communication connection path or a video signal transmission path.
The air operation detection sensor 1351 and the air operation detection unit 1350 may also be provided separately from each other. This makes it possible to build the spatially floating image display device 1000 itself without the air operation detection function, and to construct a system to which only the air operation detection function is added as an option. Alternatively, only the air operation detection sensor 1351 may be separate, with the air operation detection unit 1350 built into the spatially floating image display device 1000. A configuration in which only the air operation detection sensor 1351 is separate is advantageous when the sensor is to be placed more freely relative to the installation position of the spatially floating image display device 1000.
The imaging unit 1180 is, for example, a camera with an image sensor, and captures the space near the spatially floating image 3 and/or the face, arms, fingers, and the like of the user 230. A plurality of imaging units 1180 may be provided. Using a plurality of imaging units 1180, or an imaging unit with a depth sensor, can assist the air operation detection unit 1350 in detecting a touch operation of the user 230 on the spatially floating image 3. The imaging unit 1180 may be provided separately from the spatially floating image display device 1000; in that case, it may be configured to transmit its imaging signal to the device via a wired or wireless communication connection path or the like.
For example, suppose the air operation detection sensor 1351 is configured only to detect whether an object has entered a plane (intrusion detection plane) containing the display surface of the spatially floating image 3. Then information that the sensor cannot provide remains, such as how far an object (for example, the user's finger) that has not yet entered the intrusion detection plane is from that plane, or how close the object has come to it.
In this case, the distance between the object and the intrusion detection plane can be calculated by using information such as depth calculation information of the object obtained based on the captured images of the plurality of imaging units 1180 and depth information of the object obtained by the depth sensor. Then, these pieces of information and various pieces of information such as the distance between the object and the intrusion detection plane are used for various display controls for the spatially-suspended image 3.
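A minimal sketch of this distance calculation is shown below, assuming the fingertip has already been located in 3-D (for example via stereo depth from two imaging units, or a depth sensor); the function name and coordinate convention are illustrative.

```python
import numpy as np

def distance_to_plane(point, plane_point, plane_normal):
    """Signed distance from a 3-D point (e.g. a fingertip) to the intrusion
    detection plane. Positive: still in front of the plane (approaching);
    zero or negative: the plane has been reached or crossed."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    return float(np.dot(np.asarray(point, dtype=float) - np.asarray(plane_point, dtype=float), n))

# Example: plane through the origin, normal pointing toward the user along +z
d = distance_to_plane((10.0, 20.0, 35.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(f"fingertip is {d:.1f} mm in front of the intrusion detection plane")
```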
Instead of using the air operation detection sensor 1351, the air operation detection unit 1350 may detect a touch operation of the user 230 on the spatially floating image 3 based on the images captured by the imaging unit 1180.
The imaging unit 1180 may capture the face of the user 230 operating the spatially floating image 3, and the control unit 1110 may perform identification processing of the user 230. Further, in order to determine whether another person is standing around or behind the user 230 operating the spatially floating image 3 and peeping at that user's operation, the imaging unit 1180 may capture a range including both the user 230 and the surrounding area.
The operation input unit 1107 is, for example, an operation button or the signal receiving unit of a remote controller, and inputs signals for operations other than the air operation (touch operation) performed by the user 230. The operation input unit 1107 may be used, for example, by an administrator to operate the spatially floating image display device 1000, separately from the user 230 who touches the spatially floating image 3.
The video signal input unit 1131 is connected to an external video output device to input video data. The audio signal input unit 1133 is connected to an external audio output device to input audio data. The audio output unit 1140 can output audio based on the audio data input to the audio signal input unit 1133. The audio output unit 1140 may output an operation sound or an error warning sound.
The nonvolatile memory 1108 stores various data used in the spatial floating image display apparatus 1000. The data stored in the nonvolatile memory 1108 includes, for example, various operation data to be displayed in the spatially suspended image 3, a display icon, data of an object to be operated by a user, layout information, and the like. The memory 1109 stores image data displayed as the spatially suspended image 3, control data of the apparatus, and the like.
The control unit 1110 controls the operation of each connected unit. The control unit 1110 may perform arithmetic processing based on information acquired from each unit in the spatial floating image display apparatus 1000 in cooperation with a program stored in the memory 1109. The communication unit 1132 communicates with an external device, an external server, and the like via a wired or wireless interface. Various data such as video data, image data, and audio data are transmitted and received by communication via the communication unit 1132.
The storage unit 1170 is a storage device that records various data such as video data, image data, and audio data, and various information. The storage unit 1170 may record various information such as various data including video data, image data, and audio data in advance at the time of shipping the product. The storage 1170 may record various information such as video data, image data, and audio data acquired from an external device, an external server, or the like via the communication unit 1132.
The video data, image data, and the like recorded in the storage unit 1170 are output as the spatially floating image 3 via the video display unit 1102 and the retro-reflection unit 1101. The storage unit 1170 also stores display icons displayed as the spatially floating image 3, image data of objects to be manipulated by the user, and the like.
Layout information of display icons, objects, and the like displayed as the spatially-suspended video 3, information on various metadata of the objects, and the like are also recorded in the storage section 1170. The audio data recorded in the storage unit 1170 is output as audio from the audio output unit 1140, for example.
The video control unit 1160 performs various controls on the video signal input to the video display unit 1102. The video control unit 1160 performs, for example, control of video switching, and switches which video signal, such as a video signal stored in the memory 1109 or a video signal (video data) input to the video signal input unit 1131, is input to the video display unit 1102.
The video control unit 1160 may perform control to generate a superimposed video signal obtained by superimposing the video signal stored in the memory 1109 and the video signal input from the video signal input unit 1131, and input the superimposed video signal to the video display unit 1102, thereby forming a composite video as the spatially suspended video 3.
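One simple way to realize such superimposition is per-pixel alpha blending, sketched below; the patent does not specify a blending method, so this is purely illustrative.

```python
import numpy as np

def superimpose(base, overlay, alpha):
    """Blend an overlay frame (e.g. from the memory 1109) onto a base frame
    (e.g. from the video signal input unit 1131). alpha holds per-pixel
    weights in [0, 1] with the same height and width as the frames."""
    a = alpha[..., np.newaxis]  # broadcast over the color channels
    mixed = a * overlay.astype(np.float32) + (1.0 - a) * base.astype(np.float32)
    return mixed.astype(np.uint8)

base = np.zeros((1200, 1920, 3), dtype=np.uint8)
overlay = np.full((1200, 1920, 3), 200, dtype=np.uint8)
alpha = np.zeros((1200, 1920), dtype=np.float32)
alpha[500:700, 800:1120] = 0.8  # superimpose only in a rectangular region
composite = superimpose(base, overlay, alpha)
```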
The video control unit 1160 may perform control to perform image processing on the video signal input from the video signal input unit 1131, the video signal stored in the memory 1109, and the like. Examples of the image processing include scaling processing such as image enlargement, reduction, and distortion, brightness adjustment processing for changing brightness, contrast adjustment processing for changing a contrast curve of an image, and Retinex processing for decomposing an image into light components and changing weights of the components.
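Of the operations listed, the contrast adjustment can be illustrated with a simple tone-curve lookup table; the gamma value and function name here are hypothetical, since the patent does not fix a particular curve.

```python
import numpy as np

def apply_tone_curve(frame, gamma=0.8):
    """Remap an 8-bit frame through a power-law tone curve.
    gamma < 1 lifts mid-tones, gamma > 1 darkens them."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[frame]  # table lookup per pixel

frame = np.random.randint(0, 256, size=(1200, 1920), dtype=np.uint8)
adjusted = apply_tone_curve(frame, gamma=0.8)
```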
The video control unit 1160 may also perform special-effect video processing on the video signal input to the video display unit 1102 in order to assist the user 230 with the air operation (touch operation). The special-effect video processing is performed based on, for example, the detection result of the touch operation of the user 230 by the air operation detection unit 1350 and the image of the user 230 captured by the imaging unit 1180.
As described above, various functions are mounted in the spatial floating image display apparatus 1000. However, the spatially suspended image display device 1000 need not have all of these functions, and may have any configuration as long as it has a function of forming the spatially suspended image 3.
<Spatially floating image display device 2>
Fig. 4 is a diagram showing another example of the main part structure of the spatially floating image display device according to the embodiment of the present invention. The display device 1 includes a liquid crystal display panel 11 as the image display element and a light source device 13 that generates light of a specific polarization having narrow-angle diffusion characteristics. The display device 1 can range from, for example, a small liquid crystal display panel with a screen size of about 5 inches to a large liquid crystal display panel exceeding 80 inches. The turning mirror 22 uses a transparent member 100 as its substrate. A polarization separation member 101, such as a reflective polarizer that selectively reflects image light of a specific polarization, is provided on the display-device-1 side surface of the transparent member 100 and reflects the image light from the liquid crystal display panel 11 toward the retroreflective sheet 2; the turning mirror 22 thus functions as a reflecting mirror. The image light of the specific polarization from the display device 1 is reflected by the polarization separation member 101 provided on the transparent member 100 (in the figure, a sheet-shaped polarization separation member 101 is attached) and enters the retroreflective sheet 2. Instead of the polarization separation member 101, an optical film having polarization separation characteristics may be deposited on the surface of the transparent member 100.
A λ/4 wave plate 21 is provided on the light incident surface of the retro-reflective sheet, and the image light is polarization-converted by passing through it twice, so that the specific polarization is converted into the other polarization, whose polarization axis is rotated by 90°. The retro-reflected image light can thus pass through the polarization separation member 101, and a real spatially floating image 3 can be displayed on the outside of the transparent member 100.
Here, since the polarization axes become partly non-uniform through the retro-reflection, a portion of the image light is reflected by the polarization separation member 101 and returns to the display device 1. This light is re-reflected on the image display surface of the liquid crystal display panel 11 constituting the display device 1, generating a ghost image and significantly degrading the image quality of the spatially floating image.
Therefore, in this embodiment, an absorption-type polarizer 12 may be provided on the image display surface of the display device 1. The image light emitted from the display device 1 is transmitted through the absorption-type polarizer 12, while the reflected light returning from the polarization separation member 101 is absorbed by it, preventing degradation of image quality due to ghost images of the spatially floating image. In order to reduce degradation of image quality caused by sunlight and illumination light from outside the device, an absorption-type polarizer 102 may also be provided on the output-side surface of the transparent member 100 through which the image light is transmitted.
Next, in order to sense the distance and position of an object relative to the spatially floating image produced by the spatially floating image display device, sensors 44 having a ToF (Time of Flight) function are arranged in a plurality of layers as shown in fig. 5, so that in addition to the object's coordinates in the plane direction, its coordinates in the depth direction, its direction of movement, and its speed of movement can be sensed. To read the distance and position in two dimensions, a plurality of combinations of infrared light emitting units and light receiving units are arranged on a straight line; the object is irradiated with light from an emitting point, and the reflected light is received by the corresponding light receiving unit. The distance to the object is obtained from the product of the speed of light and the difference between the emission time and the reception time, which gives the round-trip path length. The in-plane coordinate can be read, using the plurality of emitting and receiving units, from the coordinates of the unit for which the difference between emission time and reception time is smallest. By combining, with such sensors, the coordinates of the object on a plurality of planes (each two-dimensional), three-dimensional coordinate information can thus be obtained.
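The distance relation stated here can be written out as follows; note the factor 1/2, since the measured time difference covers the round trip from emitter to object and back. The sketch and its names are illustrative, not part of the patent.

```python
C_MM_PER_NS = 299.792458  # speed of light, in mm per nanosecond

def tof_distance_mm(t_emit_ns: float, t_receive_ns: float) -> float:
    """Distance from an emitter/receiver pair to the reflecting object.
    The time difference corresponds to the round trip, hence the factor 1/2."""
    return C_MM_PER_NS * (t_receive_ns - t_emit_ns) / 2.0

def nearest_unit(time_diffs_ns, unit_positions_mm):
    """Take the in-plane coordinate of the object as the position of the
    emitter/receiver pair with the smallest time difference, as described above."""
    i = min(range(len(time_diffs_ns)), key=lambda k: time_diffs_ns[k])
    return unit_positions_mm[i]

print(tof_distance_mm(0.0, 2.0))                          # 2 ns round trip -> about 300 mm
print(nearest_unit([2.1, 1.4, 3.0], [0.0, 25.0, 50.0]))   # -> 25.0
```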
Next, a method for obtaining a three-dimensional spatially floating image with the spatially floating image display device will be described with reference to fig. 6, which illustrates the principle of the three-dimensional image display used. Lenticular lenses are arranged in correspondence with the pixels of the image display screen of the liquid crystal display panel 11 of the display device 1 shown in fig. 4. To display motion parallax in the 3 horizontal directions P1, P2, and P3 of the screen as shown in fig. 6, the pixels are divided into blocks of 3, image information from the 3 directions is displayed on the respective pixels of each block, and the emission directions of the light are controlled by the action of the corresponding lenticular lenses (indicated by vertical lines in fig. 6) so that the light is separated and emitted in the 3 directions. The result is a stereoscopic image display with 3 parallaxes.
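The per-pixel assignment of the 3 directional images can be sketched as a column interleave (one view per column within each block of 3); real panels interleave at sub-pixel level, so this is a simplification, and the names are illustrative.

```python
import numpy as np

def interleave_three_views(left, center, right):
    """Interleave three parallax images column by column so that each group of
    3 panel columns lies under one lenticular lens."""
    out = np.empty_like(center)
    out[:, 0::3] = left[:, 0::3]     # columns emitted toward direction P1
    out[:, 1::3] = center[:, 1::3]   # columns emitted toward direction P2
    out[:, 2::3] = right[:, 2::3]    # columns emitted toward direction P3
    return out

views = [np.full((1200, 1920), v, dtype=np.uint8) for v in (60, 120, 180)]
panel_image = interleave_three_views(*views)
```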
<Reflective polarizer>
In the spatially floating image display device of the present embodiment, the polarization separation member 101 is used instead of an ordinary half mirror in order to improve the contrast performance that determines image quality. The characteristics of a reflective polarizer, one example of the polarization separation member 101 of the present embodiment, will now be described. Fig. 7 is an explanatory diagram of a measurement system for evaluating the characteristics of a reflective polarizer. The transmission and reflection characteristics versus incident angle for light in the direction perpendicular to the polarization axis of the reflective polarizer of fig. 7 are denoted V-AOI and shown in figs. 8 and 9, respectively. Similarly, the transmission and reflection characteristics versus incident angle for light in the direction parallel to the polarization axis are denoted H-AOI and shown in figs. 10 and 11, respectively.
In the characteristic diagrams of figs. 8 to 11, the angle values (deg) listed in the right-hand legend appear, from top to bottom, in the order of the corresponding transmittance (%) on the vertical axis. In figs. 8 and 9, over the wavelength range of approximately 400 nm to 800 nm on the horizontal axis, the transmittance is highest when the angle in the vertical (V) direction is 0 degrees and decreases in the order of 10, 20, 30, and 40 degrees. In figs. 10 and 11, over the same wavelength range, the transmittance is highest when the angle in the horizontal (H) direction is 0 degrees and decreases in the order of 10 and 20 degrees.
As shown in figs. 8 and 9, a reflective polarizer with a wire-grid structure has characteristics that degrade as the incident angle increases in the direction perpendicular to the polarization axis. Therefore, the light source of the present embodiment, which emits the image light from the liquid crystal display panel at a narrow angle, is preferably designed along the polarization axis, making it close to an ideal source. The characteristics in the horizontal direction similarly degrade for light from oblique directions. In view of these characteristics, a configuration example of the present embodiment, described below, uses as the backlight of the liquid crystal display panel a light source capable of emitting the image light at a narrower angle. A spatially floating image with high contrast can thereby be provided.
<Display device>
Next, the display device 1 of the present embodiment will be described with reference to the drawings. The display device 1 of the present embodiment includes the image display element 11 (liquid crystal display panel) and the light source device 13 that constitutes its light source; in fig. 12, the light source device 13 is shown in an exploded perspective view together with the liquid crystal display panel.
As indicated by arrow 30 in fig. 12, the liquid crystal display panel (image display element 11) receives, from the light source device 13 serving as a backlight device, an illumination light flux with narrow-angle diffusion characteristics, that is, light resembling a laser beam in having strong directivity (straight-traveling property) and a polarization plane aligned in one direction. The liquid crystal display panel (image display element 11) modulates the received illumination light flux according to the input image signal. The modulated image light is reflected by the retro-reflective member 2 and transmitted through the transparent member 100 to form a real spatially floating image (see fig. 2A).
The configuration of fig. 12 comprises the liquid crystal display panel 11 constituting the display device 1 and a light direction conversion panel 54 that controls the directional characteristics of the light flux emitted from the light source device 13, plus a narrow-angle diffusion plate (not shown) where needed. That is, polarizing plates are provided on both sides of the liquid crystal display panel 11, and image light of a specific polarization is emitted with its intensity modulated by the image signal (see arrow 30 in fig. 12). The desired image is thereby projected, through the light direction conversion panel 54, onto the retro-reflective member 2 as light of a specific polarization with high directivity (straight-traveling property), is reflected by the retro-reflective member 2, and travels toward the eyes of a viewer outside the store (space), forming the spatially floating image 3. A protective cover 50 may be provided on the surface of the light direction conversion panel 54 (see figs. 13 and 14).
In the present embodiment, to improve the utilization efficiency of the outgoing light flux 30 from the light source device 13 and greatly reduce power consumption, the display device 1 composed of the light source device 13 and the liquid crystal display panel 11 projects light from the light source device 13 (see arrow 30 in fig. 12) onto the retro-reflective member 2. After reflection by the retro-reflective member 2, the directivity is controlled by a transparent sheet (not shown) provided on the surface of the transparent member 100 (window glass 105 or the like) so that the spatially floating image is formed at the desired position. Specifically, the transparent sheet is given high directivity by an optical member such as a Fresnel lens or a linear Fresnel lens, and controls the imaging position of the floating image. With this configuration, the image light from the display device 1 reaches an observer located outside the show window 105 (for example, on the sidewalk) with laser-like high directivity (straight-traveling property). As a result, a high-quality floating image can be displayed at high resolution, and the power consumption of the display device 1, including the LED elements 201 of the light source device 13, can be significantly reduced.
Example 1 of display device
Fig. 13 shows an example of a specific structure of the display device 1, in which the liquid crystal display panel 11 and the light direction conversion panel 54 of fig. 12 are arranged on the light source device 13. The light source device 13 has a case formed of, for example, plastic, as shown in fig. 12, and houses the LED elements 201 and the light guide 203. In order to convert the divergent light from each LED element 201 into a substantially parallel light flux, the end surface of the light guide 203 is given a lens shape whose cross-sectional area gradually increases toward the surface facing the light receiving portion, so that the divergence angle is gradually reduced by multiple total internal reflections as the light propagates inside. The liquid crystal display panel 11 constituting the display device 1 is mounted on the upper surface of the display device 1. An LED board 202 carrying the LED (Light Emitting Diode) elements 201, semiconductor light sources, and their control circuit is mounted on one side surface (in this example, the left end surface) of the case of the light source device 13, and a heat sink, a member for dissipating the heat generated in the LED elements and the control circuit, may be mounted on the outer surface of the LED board 202.
Further, a frame (not shown) attached to the upper surface of the case of the light source device 13 holds the liquid crystal display panel 11 attached to the frame, an FPC (Flexible Printed Circuits) (not shown) electrically connected to the liquid crystal display panel 11, and the like. That is, the liquid crystal display panel 11 as an image display element, together with the LED elements 201 as solid-state light sources, generates a display image by modulating the intensity of transmitted light based on a control signal from a control circuit (not shown) constituting the electronic device. Since the generated image light has a narrow diffusion angle and contains only a specific polarization component, it approaches a surface-emission laser image source driven by the image signal, yielding a novel image display device not previously available. Note that it is neither technically nor safely feasible to obtain, with a laser device, a laser beam of the same size as the image produced by the display device 1. In this embodiment, therefore, light approaching surface-emission laser image light is obtained from a light flux emitted by an ordinary light source with LED elements.
Next, the structure of the optical system housed in the case of the light source device 13 will be described in detail with reference to fig. 13 and 14.
Since fig. 13 and 14 are cross-sectional views, only one of the plurality of LED elements 201 constituting the light source is shown; their light is converted into substantially collimated light by the shape of the light receiving end surface 203a of the light guide 203. For this purpose, the light receiving portion at the light guide end surface is mounted so as to maintain a predetermined positional relationship with each LED element.
The light guides 203 are each formed of a light-transmitting resin such as an acrylic resin. As shown in fig. 13 and 14, the light receiving surface for the LED light on one end side of the light guide 203 has, for example, a convex conical outer peripheral surface obtained by rotating a parabolic cross section; a concave portion is formed in the central region of its top side, and a convex portion (i.e., a convex lens surface) is formed at the center of that concave portion. The light guide 203 may also have a convex lens surface protruding outward (or a concave lens surface recessed inward) in the central region of the planar portion on its other end side. This structure will be described later in the explanation of fig. 16 and the like. The parabolic shape forming the conical outer peripheral surface of the light receiving section of the light guide, to which the LED element 201 is attached, is set within the angle range in which light emitted from the LED element in the peripheral direction is totally reflected inside, or a reflecting surface is formed there.
On the other hand, the LED elements 201 are disposed at predetermined positions on the surface of the LED board 202, which is the circuit board. The LED board 202 is disposed and fixed to the LED collimator (light receiving end surface 203a) such that the LED elements 201 on its surface are located at the central portions of the recesses.
According to this configuration, the light emitted from the LED element 201 can be outputted as substantially parallel light by the shape of the light receiving end surface 203a of the light guide 203, and the efficiency of use of the generated light can be improved.
As described above, the light source device 13 is configured by mounting a light source unit, in which a plurality of LED elements 201 as light sources are arranged, at the light receiving end surface 203a, the light receiving portion provided at the end surface of the light guide 203. The divergent light beams from the LED elements 201 are converted into substantially parallel light beams by the lens shape of the light receiving end surface 203a, guided inside the light guide 203 (in the direction parallel to the paper surface) as indicated by the arrow, and output by the light beam direction conversion means 204 toward the liquid crystal display panel 11 (in the direction perpendicular to the paper surface) arranged substantially parallel to the light guide 203. By optimizing the distribution (density) of the beam direction conversion means 204 through the internal or surface shape of the light guide, the uniformity of the light beam incident on the liquid crystal display panel 11 can be controlled.
The beam direction conversion means 204 outputs the light beam propagating inside the light guide toward the liquid crystal display panel 11 (in the direction perpendicular to the paper surface) arranged substantially parallel to the light guide 203, for example by shaping the surface of the light guide or by providing portions of different refractive index inside it. In this case, with the viewpoint facing the center of the screen at a distance equal to the screen diagonal, the panel characteristics are good and practically unproblematic if the relative luminance ratio of the screen periphery to the screen center exceeds 30%.
Fig. 13 is a cross-sectional configuration diagram for explaining the structure and operation of the light source of the present embodiment, in which polarization conversion is performed in the light source device 13 including the light guide 203 and the LED elements 201. In fig. 13, the light source device 13 includes, for example, the light guide 203 formed of plastic or the like and provided with the light flux direction conversion means 204 on its surface or inside, the LED elements 201 as light sources, a reflecting sheet 205, a phase difference plate 206, a lenticular lens, and the like; the liquid crystal display panel 11, having polarizing plates on its light source light incident surface and image light emitting surface, is mounted on the upper surface.
A thin film or sheet-like reflective polarizer 49 is provided on the light source light incident surface (the lower surface in the figure) of the liquid crystal display panel 11 facing the light source device 13. One polarization component (for example, P-polarized light) 212 of the natural light flux 210 emitted from the LED elements 201 is selectively reflected, then reflected by the reflecting sheet 205 provided on one surface (the lower surface in the figure) of the light guide 203, and directed to the liquid crystal display panel 11 again. A phase difference plate (λ/4 plate) is provided between the reflecting sheet 205 and the light guide 203 or between the light guide 203 and the reflective polarizer 49; the light beam is reflected by the reflecting sheet 205 and passes through the phase difference plate twice, whereby it is converted from P polarization to S polarization and the utilization efficiency of the light source light as image light is improved. The image beam whose light intensity is modulated by the image signal in the liquid crystal display panel 11 (arrow 213 in fig. 13) is incident on the retro-reflective member 2 and, after reflection, is transmitted through the window glass 105 as shown in fig. 1A, so that a spatially floating image as a real image can be obtained inside or outside a store (space).
Fig. 14 is a cross-sectional configuration diagram for explaining the structure and operation of the light source of the present embodiment, in which, as in fig. 13, polarization conversion is performed in the light source device 13 including the light guide 203 and the LED elements 201. The light source device 13 includes, for example, the light guide 203 formed of plastic or the like and provided with the light flux direction conversion means 204 on its surface or inside, the LED elements 201 as light sources, the reflecting sheet 205, the phase difference plate 206, a lenticular lens, and the like. The liquid crystal display panel 11, having polarizing plates on its light source light incident surface and image light emitting surface, is mounted as the image display element on the upper surface of the light source device 13.
A thin film or sheet-like reflective polarizer 49 is provided on the light source light incident surface (the lower surface in the figure) of the liquid crystal display panel 11 facing the light source device 13. One polarization component (for example, S-polarized light) 211 of the natural light flux 210 emitted from the LED light sources 201 is selectively reflected, then reflected by the reflecting sheet 205 provided on one surface (the lower surface in the figure) of the light guide 203, and directed to the liquid crystal display panel 11 again. A phase difference plate (λ/4 plate) is provided between the reflecting sheet 205 and the light guide 203 or between the light guide 203 and the reflective polarizer 49; the light beam is reflected by the reflecting sheet 205 and passes through the phase difference plate twice, whereby it is converted from S polarization to P polarization and the utilization efficiency of the light source light as image light is improved. The image beam whose light intensity is modulated by the image signal in the liquid crystal display panel 11 (arrow 214 in fig. 14) is incident on the retro-reflective member 2 and, after reflection, is transmitted through the window glass 105 as shown in fig. 1, so that a spatially floating image as a real image can be obtained inside or outside a store (space).
In the light source devices shown in fig. 13 and 14, since one polarization component is reflected by the reflective polarizer in addition to the function of the polarizing plate provided on the light incident surface of the liquid crystal display panel 11, the theoretically obtainable contrast ratio is the product of the reciprocal of the crossed transmittance of the reflective polarizer and the reciprocal of the crossed transmittance of the two polarizing plates attached to the liquid crystal display panel. A high contrast performance is thus obtained; in fact, it was experimentally confirmed that the contrast performance of the display image improved by a factor of 10 or more. As a result, a high-quality image comparable to that of a self-luminous organic EL display can be obtained.
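This multiplicative relationship can be illustrated with a short calculation. The sketch below is illustrative only; the crossed-transmittance values are assumptions, not figures from the embodiment:

```python
# Illustrative estimate of the theoretical contrast ratio obtained by stacking
# a reflective polarizer with the two polarizing plates of a liquid crystal
# panel. The crossed (orthogonal) transmittance values below are assumptions.

def contrast(cross_transmittance: float) -> float:
    """Contrast contribution of one polarizer stage:
    the reciprocal of its crossed transmittance."""
    return 1.0 / cross_transmittance

t_cross_reflective_polarizer = 1e-2  # assumed leakage of the reflective polarizer
t_cross_panel_polarizers = 1e-3      # assumed leakage of the panel's polarizer pair

# The contributions multiply, as stated above.
total_contrast = contrast(t_cross_reflective_polarizer) * contrast(t_cross_panel_polarizers)
print(f"theoretical contrast ratio ~ {total_contrast:,.0f}:1")  # 100,000:1 with these numbers
```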
Example 2 of display device
Fig. 15 shows another example of the specific structure of the display device 1. The light source device 13 of fig. 15 is the same as the light source device of fig. 17 and the like. It is configured by housing, for example, LEDs, a collimator, a composite diffusion block, a light guide, and the like in a case made of plastic or the like, and the liquid crystal display panel 11 is mounted on its upper surface. An LED board is mounted on one side surface of the case of the light source device 13; LED (Light Emitting Diode) elements 14a and 14b as semiconductor light sources and their control circuit are mounted on the LED board, and a heat sink 103 (see also fig. 17, 18A, 18B, etc.), a member for dissipating the heat generated in the LED elements and the control circuit, is mounted on the outer side surface of the LED board.
A liquid crystal display panel frame attached to the upper surface of the case holds the liquid crystal display panel 11 attached to the frame, an FPC (Flexible Printed Circuits) 403 (see fig. 7) electrically connected to the liquid crystal display panel 11, and the like. That is, the liquid crystal display panel 11 as a liquid crystal display element, together with the LED elements 14a and 14b as solid-state light sources, generates a display image by modulating the intensity of transmitted light based on a control signal from a control circuit (not shown here) constituting the electronic device.
Example 1 of light source device of example 2 of display device
Next, the structure of the optical system such as the light source device housed in the case will be described in detail with reference to fig. 17 and fig. 18A and 18B.
Fig. 17, 18A, and 18B show the LEDs 14a and 14b constituting the light source, which are mounted at predetermined positions with respect to the LED collimator 15. The LED collimators 15 are each formed of a light-transmitting resin such as an acrylic resin. As shown in fig. 18B, the LED collimator 15 has a convex conical outer peripheral surface 156 obtained by rotating a parabolic cross section. A concave portion 153 having a convex portion (i.e., a convex lens surface) 157 is formed at the center of the top portion of the LED collimator 15 (the side facing the LED substrate 102). A convex lens surface protruding outward (or a concave lens surface recessed inward) 154 is provided at the center of its planar portion (the side opposite to the top). The paraboloid 156 forming the conical outer peripheral surface of the LED collimator 15 is set within the angle range in which light emitted from the LEDs 14a and 14b in the peripheral direction is totally reflected inside, or a reflecting surface is formed there.
The LEDs 14a and 14b are disposed at predetermined positions on the surface of the LED board 102, which is the circuit board. The LED board 102 is disposed and fixed to the LED collimator 15 such that the LEDs 14a and 14b on the surface thereof are located at the central portions of the concave portions 153.
With this structure, among the light emitted from the LED collimator 15, the light emitted from the LED 14a or 14b, in particular the light emitted upward from its central portion (rightward in the drawing), is condensed into parallel light by the two convex lens surfaces 157 and 154 forming the outer shape of the LED collimator 15. Light emitted from the other portions in the peripheral direction is reflected by the paraboloid forming the conical outer peripheral surface of the LED collimator 15 and is similarly condensed into parallel light. In other words, with the LED collimator 15, which has a convex lens formed in its central portion and a parabolic surface formed in its peripheral portion, almost all of the light generated by the LED 14a or 14b can be output as parallel light, improving the utilization efficiency of the generated light.
In addition, a polarization conversion element 21 is provided on the light emission side of the LED collimator 15. As can be seen from fig. 18A and 18B, the polarization conversion element 21 is constituted by combining columnar light-transmitting members with a parallelogram cross section (hereinafter, parallelogram columns) and columnar light-transmitting members with a triangular cross section (hereinafter, triangular columns), arranged in parallel as an array on a plane orthogonal to the optical axis of the parallel light from the LED collimator 15. Polarizing beam splitter films (hereinafter, PBS films) 211 and reflective films 212 are provided alternately at the interfaces between the adjacent light-transmitting members of the array, and a λ/2 phase plate 213 is provided on the exit surface through which the light that entered the polarization conversion element 21 and passed through a PBS film 211 exits.
A rectangular composite diffusion block 16, shown in fig. 18A, is further provided on the exit surface of the polarization conversion element 21. That is, the light emitted from the LED 14a or 14b is collimated by the LED collimator 15, enters the composite diffusion block 16, is diffused by the texture 161 on the exit side, and reaches the light guide 17.
The light guide 17 is a rod-shaped member formed of a light-transmitting resin such as an acrylic resin and having a substantially triangular cross section (see fig. 18B). As can be seen from fig. 17, the light guide 17 includes a light guide light incident portion (surface) 171 facing the exit surface of the composite diffusion block 16 via a first diffusion plate 18a, a light guide light reflecting portion (surface) 172 forming a slope, and a light guide light emitting portion (surface) 173 facing the liquid crystal display panel 11, the liquid crystal display element, via a second diffusion plate 18b.
As shown in the partially enlarged view in fig. 17, a plurality of reflecting surfaces 172a and connecting surfaces 172b are formed alternately in a sawtooth shape on the light guide light reflecting portion (surface) 172 of the light guide 17. Each reflecting surface 172a (a line segment rising to the right in the drawing) forms an angle αn (n: a natural number, for example 1 to 130 in this example) with the horizontal plane indicated by the chain line in the drawing; here, αn is set to, for example, 43 degrees or less (and 0 degrees or more).
The light guide light incident portion (surface) 171 is formed as a curved convex surface inclined toward the light source side. Accordingly, the parallel light from the exit surface of the composite diffusion block 16 is diffused by the first diffusion plate 18a as it enters and, as can be seen from the figure, is slightly bent (deflected) upward by the light guide light incident portion (surface) 171 to reach the light guide light reflecting portion (surface) 172, where it is reflected toward the liquid crystal display panel 11 provided at the exit surface at the top of the figure.
With the display device 1 described in detail above, the light utilization efficiency and the uniformity of illumination can be further improved, and a modularized light source device for S-polarized light can be manufactured in a small size and at low cost. In the above description, the polarization conversion element 21 is mounted immediately after the LED collimator 15, but the present invention is not limited thereto; similar operations and effects can be obtained by providing it anywhere in the optical path leading to the liquid crystal display panel 11.
On the light guide light reflecting portion (surface) 172, a plurality of reflecting surfaces 172a and connecting surfaces 172b are provided alternately in a sawtooth pattern; the illumination light flux is totally reflected on each reflecting surface 172a and directed upward, exits from the light guide light emitting portion (surface) 173 substantially in parallel through a narrow-angle diffusion plate, enters the light direction conversion panel 54 for controlling the directional characteristic, and then enters the liquid crystal display panel 11 from an oblique direction. In the present embodiment, the light direction conversion panel 54 is provided between the light guide light emitting portion (surface) 173 and the liquid crystal display panel 11, but the same effect can be obtained even if it is provided on the exit surface of the liquid crystal display panel 11.
Example 2 of light source device of example 2 of display device
Fig. 19A and 19B show another example of the configuration of the optical system such as the light source device 13. As in the example shown in fig. 18A and 18B, a plurality of LEDs (two in this example) 14a and 14b constituting the light source are mounted at predetermined positions with respect to the LED collimator 15. The LED collimators 15 are each formed of a light-transmitting resin such as an acrylic resin.
As in the example of fig. 18A and 18B, the LED collimator 15 shown in fig. 19A has a convex conical outer peripheral surface 156 obtained by rotating a parabolic cross section. A concave portion 153 (see fig. 18B) is provided at the center of the top of the LED collimator 15, with a convex portion (i.e., a convex lens surface) 157 formed in it.
In addition, a convex lens surface protruding outward (or a concave lens surface recessed inward) 154 is provided at the center of the planar portion of the LED collimator 15 (see fig. 18B). The paraboloid 156 forming the conical outer peripheral surface of the LED collimator 15 is set within the angle range in which light emitted from the LED 14a in the peripheral direction is totally reflected inside, or a reflecting surface is formed there.
The LEDs 14a and 14b are disposed at predetermined positions on the surface of the LED board 102, which is the circuit board. The LED board 102 is disposed and fixed to the LED collimator 15 such that the LEDs 14a and 14b on the surface thereof are located at the central portions of the concave portions 153.
With this structure, among the light emitted from the LED collimator 15, the light emitted from the LED 14a or 14b, in particular the light emitted upward from its central portion (rightward in the drawing), is condensed into parallel light by the two convex lens surfaces 157 and 154 forming the outer shape of the LED collimator 15. Light emitted from the other portions in the peripheral direction is reflected by the paraboloid forming the conical outer peripheral surface of the LED collimator 15 and is similarly condensed into parallel light. In other words, with the LED collimator 15, which has a convex lens formed in its central portion and a parabolic surface formed in its peripheral portion, almost all of the light generated by the LED 14a or 14b can be output as parallel light, improving the utilization efficiency of the generated light.
A light guide 170 is provided on the light emission side of the LED collimator 15 via a first diffusion plate 18a. The light guide 170 is a rod-shaped member formed of a light-transmitting resin such as an acrylic resin and having a substantially triangular cross section (see fig. 19A). As can be seen from fig. 19A, the light guide 170 includes a light guide light incident portion (surface) 171 facing the exit surface of the diffusion block 16 via the first diffusion plate 18a, a light guide light reflecting portion (surface) 172 forming a slope, and a light guide light emitting portion (surface) 173 facing the liquid crystal display panel 11, the liquid crystal display element, via the reflective polarizer 200.
If, for example, the reflective polarizer 200 is chosen to reflect P-polarized light (and transmit S-polarized light), the P-polarized component of the natural light emitted from the LED light source is reflected, passes through the λ/4 plate 202 provided at the light guide light reflecting portion 172 shown in fig. 19B, is reflected on the reflecting surface 201, and passes through the λ/4 plate 202 again, whereby it is converted into S-polarized light, so that all light fluxes incident on the liquid crystal display panel 11 are unified into S-polarized light.
Similarly, if the reflective polarizer 200 is chosen to reflect S-polarized light (and transmit P-polarized light), the S-polarized component of the natural light emitted from the LED light source is reflected, passes through the λ/4 plate 202 provided at the light guide light reflecting portion 172 shown in fig. 19B, is reflected on the reflecting surface 201, and passes through the λ/4 plate 202 again, whereby it is converted into P-polarized light, so that all light fluxes incident on the liquid crystal display panel 11 are unified into P-polarized light. Polarization conversion can be realized with either configuration.
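The conversion by the double pass through the λ/4 plate can be checked with a minimal Jones-calculus sketch. This is an illustration, not part of the embodiment: it assumes the fast axis lies at 45 degrees to the polarizer axes and idealizes the reflecting surface as preserving the field in the chosen transverse basis, so the round trip acts as a λ/2 plate:

```python
import numpy as np

def retarder(delta: float, theta: float) -> np.ndarray:
    """Jones matrix of a linear retarder with retardance delta (radians)
    and fast axis at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)]) @ rot.T

qwp = retarder(np.pi / 2, np.pi / 4)    # λ/4 plate, fast axis at 45 degrees

# Idealized double pass: plate -> reflecting surface -> plate, with the
# reflection modeled as preserving the field in this basis, so the round
# trip is equivalent to a single λ/2 plate at 45 degrees.
round_trip = qwp @ qwp

s_in = np.array([1.0, 0.0])             # polarization rejected by the polarizer
s_out = round_trip @ s_in
print(np.round(np.abs(s_out) ** 2, 3))  # -> [0. 1.]: all power in the orthogonal polarization
```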
Example 3 of display device
Next, another example of a specific configuration of the display device 1 (example 3 of the display device) will be described with reference to fig. 16. In this light source device of the display device 1, the divergent light flux from an LED (a mixture of P-polarized and S-polarized light) is converted into a substantially parallel light flux by the collimator 18, and the converted flux is reflected toward the liquid crystal display panel 11 by the reflecting surface of the reflective light guide 304. The reflected light enters the reflective polarizer 49 disposed between the liquid crystal display panel 11 and the reflective light guide 304. The reflective polarizer 49 transmits light of a specific polarization (for example, P-polarized light), and the transmitted polarized light enters the liquid crystal display panel 11. The other polarization (for example, S-polarized light) is reflected by the reflective polarizer 49 and directed back toward the reflective light guide 304.
The reflective polarizer 49 is disposed obliquely with respect to the liquid crystal display panel 11 so as not to be perpendicular to the principal ray of the light from the reflecting surface of the reflective light guide 304. The principal ray of the light reflected on the reflective polarizer 49 is incident on the transmission surface of the reflective light guide 304, passes through to the back of the reflective light guide 304, passes through the λ/4 plate 270 serving as a phase difference plate, and is reflected by the reflection plate 271. The light reflected on the reflection plate 271 passes through the λ/4 plate 270 again, is transmitted through the transmission surface of the reflective light guide 304, and is again incident on the reflective polarizer 49.
At this time, the light that is again incident on the reflective polarizer 49 has passed through the λ/4 plate 270 twice, so its polarization has been converted into the polarization that can pass through the reflective polarizer 49 (for example, P polarization). The light thus polarization-converted is transmitted through the reflective polarizer 49 and enters the liquid crystal display panel 11. The polarization design in this polarization conversion may also be reversed (with S polarization and P polarization interchanged) from the above description.
As a result, the light from the LEDs is unified into one polarization (for example, P polarization) and enters the liquid crystal display panel 11, where the luminance is modulated according to the video signal to display a video on the panel surface. As in the above examples, a plurality of LEDs constituting the light source (only one is shown in fig. 16, which is a longitudinal cross-sectional view) are mounted at predetermined positions with respect to the collimator 18.
The collimators 18 are each formed of a light-transmitting resin such as an acrylic resin, or of glass. The collimator 18 may have a convex conical outer peripheral surface obtained by rotating a parabolic cross section; there may be a concave portion at its top, with a convex portion (i.e., a convex lens surface) formed at its center. A convex lens surface protruding outward (or a concave lens surface recessed inward) is provided at the center of its planar portion. The paraboloid forming the conical outer peripheral surface of the collimator 18 is set within the angle range in which light emitted from the LED in the peripheral direction is totally reflected, or a reflecting surface is formed there.
The LEDs are arranged at predetermined positions on the surface of the LED board 102, which is the circuit board. The LED board 102 is disposed and fixed to the collimator 18 such that the LEDs on the surface thereof are located at the center of the top of the convex cone shape (the concave portion in the case where the top has the concave portion).
With this structure, the light emitted from the LED, in particular the light emitted from its central portion, is condensed into parallel light by the convex lens surfaces forming the outer shape of the collimator 18. Light emitted from the other portions in the peripheral direction is reflected by the paraboloid forming the conical outer peripheral surface of the collimator 18 and is similarly condensed into parallel light. In other words, with the collimator 18, which has a convex lens formed in its central portion and a parabolic surface formed in its peripheral portion, almost all of the light generated by the LED can be output as parallel light, improving the utilization efficiency of the generated light.
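A rough geometric sketch of why the parabolic peripheral surface collimates the sideways light by total internal reflection follows. The focal length, sample radii, and acrylic refractive index are assumed for illustration. For a parabola with the LED at its focus, a ray from the focus reflects parallel to the axis, and its angle of incidence on the surface equals half the polar angle of the surface point seen from the focus:

```python
import numpy as np

n = 1.49                                  # assumed refractive index of acrylic
theta_c = np.degrees(np.arcsin(1.0 / n))  # critical angle for total internal reflection

f = 1.0                                   # assumed focal length of the parabolic profile
r = np.linspace(0.5, 3.0, 6)              # sample radii on the outer peripheral surface
z = r ** 2 / (4 * f)                      # parabola z = r^2 / (4f), focus at (0, f)

polar = np.degrees(np.arctan2(r, z - f))  # polar angle of the surface point from the focus
incidence = polar / 2                     # angle of incidence of the ray from the focus

for ri, inc in zip(r, incidence):
    tir = "totally reflected" if inc > theta_c else "needs a reflective coating"
    print(f"r = {ri:.2f}: incidence {inc:5.1f} deg -> {tir}")
```

Running this shows that the steep peripheral rays (large polar angles near the vertex) satisfy the total internal reflection condition, while shallower rays do not, matching the statement that the paraboloid is set within the angle range of total reflection or is given a reflecting surface.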
The above configuration is similar to that of the light source device of the image display device shown in fig. 17, 18A, 18B, and the like. The light converted into substantially parallel light by the collimator 18 shown in fig. 16 is reflected by the reflective light guide 304. Of that light, the component of the specific polarization passes through the reflective polarizer 49, while the component of the other polarization reflected by the reflective polarizer 49 passes through the light guide 304 again. That light is reflected by the reflection plate 271, located on the opposite side of the reflective light guide 304 from the liquid crystal display panel 11, and is polarization-converted by passing twice through the λ/4 plate 270 serving as a phase difference plate. The light reflected by the reflection plate 271 passes through the light guide 304 again and enters the reflective polarizer 49 provided on the opposite surface. Since this returning light has been polarization-converted, it now passes through the reflective polarizer 49, and polarized light of a unified direction enters the liquid crystal display panel 11. As a result, all of the light from the light source can be used, and the geometrical-optical utilization efficiency of the light is doubled. Moreover, since the polarization degree (extinction ratio) of the reflective polarizer multiplies the extinction ratio of the entire system, using the light source device of the present embodiment greatly improves the contrast of the entire display device. The reflection diffusion angle of the light at each reflecting surface can be adjusted via the surface roughness of the reflecting surface of the reflective light guide 304 and of the reflection plate 271; these can be tuned for each design to improve the uniformity of the light entering the liquid crystal display panel 11.
The λ/4 plate 270 of fig. 16, i.e., the phase difference plate, need not have a retardance of exactly λ/4 for polarized light incident perpendicularly on it. In the configuration of fig. 16, any phase difference plate may be used as long as the two passes of the polarized light change its phase by a total of λ/2, rotating the polarization plane by 90 degrees. The thickness of the retardation plate can be adjusted according to the incident angle distribution of the polarized light.
Example 4 of display device
Further, another example of the structure of the optical system such as the light source device of the display device (example 4 of the display device) will be described with reference to fig. 25. This is a configuration example in which diffusion sheets are used instead of the reflective light guide 304 of example 3 of the display device. Specifically, two optical sheets (optical sheets 207A and 207B) that convert the diffusion characteristics in the vertical direction and the horizontal direction (the front-rear direction in the figure, not shown) are used on the light emission side of the collimator 18, and the light from the collimator 18 is made incident between the two optical sheets (diffusion sheets). Instead of two sheets, a single optical sheet may be used; in that case the vertical and horizontal diffusion characteristics are adjusted by the fine shapes of the front and back surfaces of the single optical sheet. A plurality of diffusion sheets may also be used to share the function. In the example of fig. 25, the number of LEDs, the divergence angle of the light from the LED substrate (optical element) 102, and the optical specification of the collimator 18 are treated as design parameters and optimized, matched to the reflection and diffusion characteristics determined by the front and back surface shapes of the optical sheets 207A and 207B, so that the surface density of the light beam emitted from the liquid crystal display panel 11 becomes uniform. That is, instead of a light guide, the surface shapes of a plurality of diffusion sheets are used to adjust the diffusion characteristics. In the example of fig. 25, polarization conversion is performed in the same manner as in example 3 of the display device described above: the reflective polarizer 49 may be configured to reflect S-polarized light (and transmit P-polarized light). In this case, the P-polarized component of the light emitted from the LED light source is transmitted and enters the liquid crystal display panel 11. The S-polarized component is reflected, passes through the phase difference plate 270 shown in fig. 25, is reflected on the reflecting surface 271, passes through the phase difference plate 270 again, and is thereby converted into P-polarized light, which is then transmitted through the reflective polarizer 49 and enters the liquid crystal display panel 11.
The λ/4 plate 270 of fig. 25, i.e., the phase difference plate, likewise need not have a retardance of exactly λ/4 for polarized light incident perpendicularly on it; any phase difference plate may be used as long as the two passes change the phase by a total of λ/2, rotating the polarization plane by 90 degrees. The thickness of the retardation plate can be adjusted according to the incident angle distribution of the polarized light. In fig. 25 as well, the polarization design in the polarization conversion may be reversed (with S polarization and P polarization interchanged) from the above description.
In a device for normal TV applications, the light emitted from the liquid crystal display panel 11 has the same diffusion characteristics in the horizontal direction of the screen (the X axis in fig. 22A) and the vertical direction of the screen (the Y axis in fig. 22B). In contrast, for the light flux emitted from the liquid crystal display panel of this embodiment, the viewing angle at which the luminance falls to 50% of the front (0 degree) value is 13 degrees, about 1/5 of the conventional 62 degrees, as shown in example 1 of fig. 22A and 22B. Similarly, the reflection angle of the reflective light guide, the area of the reflecting surfaces, and the like are optimized so that the vertical viewing angle is asymmetric, with the upward viewing angle suppressed to about 1/3 of the downward viewing angle. As a result, compared with a conventional liquid crystal TV, the amount of image light in the viewing direction is greatly increased and the luminance is 50 times or more.
Further, if the viewing angle characteristics shown in example 2 of fig. 22A and 22B are adopted, the viewing angle at which the luminance falls to 50% of the front (0 degree) value is 5 degrees, 1/12 of the conventional 62 degrees. Similarly, the reflection angle of the reflective light guide, the area of the reflecting surfaces, and the like are optimized so that the vertical viewing angle is symmetric and likewise suppressed to about 1/12 of the conventional value. As a result, compared with a conventional liquid crystal TV, the amount of image light in the viewing direction is greatly increased and the luminance is 100 times or more. Narrowing the viewing angle in this way concentrates the light flux into the viewing direction, so the light utilization efficiency is greatly improved. As a result, even with a conventional liquid crystal display panel for TVs, controlling the light diffusion characteristics of the light source device achieves a significant increase in luminance at the same power consumption, enabling an image display device suitable for an information display system used in bright outdoor environments.
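As a crude order-of-magnitude check of how narrowing the viewing angle concentrates the light flux, one can compare the solid angles of the emission cones. This isotropic-cone model is illustrative only; it ignores the asymmetric vertical distribution and the polarization reuse that also contribute to the 50x and 100x figures above:

```python
import numpy as np

def cone_solid_angle(half_angle_deg: float) -> float:
    """Solid angle in steradians of a cone with the given half angle."""
    return 2 * np.pi * (1 - np.cos(np.radians(half_angle_deg)))

conventional = cone_solid_angle(62.0)  # conventional 50%-luminance viewing angle
example_1 = cone_solid_angle(13.0)     # example 1 of fig. 22A and 22B
example_2 = cone_solid_angle(5.0)      # example 2 of fig. 22A and 22B

print(f"example 1 concentrates the flux ~{conventional / example_1:.0f}x")
print(f"example 2 concentrates the flux ~{conventional / example_2:.0f}x")
```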
In the case of a large liquid crystal display panel, with the viewer facing the center of the screen, directing the light at the periphery of the screen inward toward the viewer improves the uniformity of the screen brightness. Fig. 20 shows the convergence angles for the long side and the short side of the panel, with the viewing distance L and the panel size (screen aspect ratio 16:10) as parameters. For portrait viewing, the convergence angle may be set according to the short side; for example, for a 22-inch panel used in portrait orientation at a viewing distance of 0.8 m, setting the convergence angle to 10 degrees allows the image light from the four corners of the screen to be effectively directed to the viewer.
Likewise, for a 15-inch panel used in portrait orientation at a viewing distance of 0.8 m, setting the convergence angle to 7 degrees allows the image light from the four corners of the screen to be effectively directed to the viewer. In this way, by directing the image light at the periphery of the screen toward a viewer located at the position best suited for viewing the center of the screen, depending on the size of the liquid crystal display panel and on whether it is used in portrait or landscape orientation, the uniformity of the screen brightness can be improved.
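The convergence angles quoted above follow from simple geometry: the half angle subtended at the viewer by the panel's short side. A small sketch (panel sizes and viewing distance from the text, 16:10 aspect ratio assumed):

```python
import numpy as np

def convergence_angle_deg(diagonal_inch: float, distance_m: float,
                          aspect=(16, 10)) -> float:
    """Half angle, seen from the viewer, to the edge of the short side of a
    panel used in portrait orientation."""
    long_r, short_r = aspect
    short_side_m = diagonal_inch * 0.0254 * short_r / np.hypot(long_r, short_r)
    return np.degrees(np.arctan2(short_side_m / 2, distance_m))

print(f"22-inch portrait at 0.8 m: {convergence_angle_deg(22, 0.8):.0f} degrees")  # ~10
print(f"15-inch portrait at 0.8 m: {convergence_angle_deg(15, 0.8):.0f} degrees")  # ~7
```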
As a basic configuration, as shown in fig. 16 and elsewhere, a light beam with narrow-angle directional characteristics from the light source device is made incident on the liquid crystal display panel 11 and luminance-modulated in accordance with the video signal; the video information displayed on the screen of the liquid crystal display panel 11 is then reflected by the retro-reflective member to obtain a spatially floating image, which is displayed outdoors or indoors via the transparent member 100.
< lenticular lens >
In order to control the diffusion distribution of the image light from the liquid crystal display panel 11, a lenticular lens is provided between the light source device 13 and the liquid crystal display panel 11 or on the surface of the liquid crystal display panel 11; by optimizing the lens shape, the emission characteristics in one direction can be controlled. Further, by arranging a microlens array in a matrix, the emission characteristics of the image light beam from the display device 1 can be controlled in both the X-axis and Y-axis directions, and as a result an image display device with the desired diffusion characteristics can be obtained.
The action of the lenticular lens will now be described. By optimizing its lens shape, the lenticular lens allows an aerial floating image to be obtained efficiently from the light emitted from the display device 1 and transmitted or reflected by the transparent member 100. That is, the image light from the display device 1 is passed through a sheet that controls the diffusion characteristics, formed by combining two lenticular lenses or by arranging a microlens array in a matrix, so that the brightness (relative brightness) of the image light can be controlled as a function of the emission angle (with the perpendicular direction taken as 0 degrees) in the X-axis and Y-axis directions. In this embodiment, such a lenticular lens makes the vertical luminance characteristics steeper than conventional ones, as shown in fig. 22B. Further, by changing the balance of the vertical directivity characteristics (the positive and negative directions of the Y axis) and increasing the luminance (relative luminance) of the reflected and diffused light, the image light becomes, like that of a surface-emission laser image source, image light with a narrow diffusion angle (high straight traveling property) containing only a specific polarization component. This suppresses the ghost image generated by the retro-reflective member when a conventional image display device is used, and allows the spatially floating image formed by retro-reflection to reach the eyes of the viewer with high efficiency.
Further, relative to the emission light diffusion characteristics of a normal liquid crystal display panel (labeled conventional in the drawings) shown in fig. 22A and 22B, the light source device realizes directivity characteristics greatly narrowed in both the X-axis and Y-axis directions, so that an image display device can be realized that emits a substantially parallel image beam of a specific polarization in a specific direction.
Fig. 21A and 21B show an example of the characteristics of the lenticular lens used in the present embodiment. Here the characteristic in the X direction (vertical direction) is shown in particular: in characteristic O, the peak of the light emission direction lies at an angle of about 30 degrees upward from the perpendicular direction (0 degrees), with a luminance characteristic symmetric above and below it. Characteristics A and B of fig. 21B show examples in which the image light around the 30-degree peak luminance is further condensed to increase the luminance (relative luminance). Accordingly, in characteristics A and B, the brightness (relative brightness) of the light falls off sharply at angles exceeding 30 degrees compared with characteristic O.
That is, with an optical system including such a lenticular lens, when the image light beam from the display device 1 is made incident on the retro-reflective member 2, the emission angle and the viewing angle of the image light, already aligned to a narrow angle by the light source device 13, can be controlled, which greatly increases the freedom in installing the retro-reflective sheet (retro-reflective member 2). As a result, the freedom in the relationship of imaging positions for the spatially floating image, which is reflected or transmitted by the transparent member 100 and imaged at the desired position, is also greatly increased. Consequently, light with a narrow diffusion angle (high straight traveling property) and only a specific polarization component can efficiently reach the eyes of an observer outdoors or indoors. Thus, even if the intensity (brightness) of the image light from the image display device is lowered, the viewer can accurately recognize the image light and obtain the information. In other words, by reducing the output of the video display device, a spatially floating image display device with low power consumption can be realized.
Auxiliary function of touch operation
Next, the auxiliary function for the user's touch operation will be described. First, a touch operation without the auxiliary function will be described. In the following description, the case where a user selects and touches one of two buttons (objects) is taken as an example, but the description can be applied as appropriate to, for example, bank ATMs, station ticket vending machines, digital signage, and the like.
Fig. 26 is a diagram illustrating a display example and a touch operation of the spatially floating image display device 1000. The spatially floating image 3 shown in fig. 26 includes a first button BUT1 labeled "yes" and a second button BUT2 labeled "no". The user moves the finger 210 toward the spatially floating image 3 and selects "yes" or "no" by touching the first button BUT1 or the second button BUT2. In the examples of fig. 26 and fig. 27A to 29B, the first button BUT1 and the second button BUT2 are displayed in different colors. The areas of the spatially floating image 3 other than the first button BUT1 and the second button BUT2 may be left transparent, with no image displayed, but in that case the range in which the effect of the virtual shadow described later is available is limited to the areas of the displayed buttons (the display area of the first button BUT1 and the display area of the second button BUT2). Therefore, in the following description, as a preferred example, an image of a color or brightness different from that of the first button BUT1 and the second button BUT2 is displayed over a larger area of the spatially floating image 3 that includes the display areas of both buttons.
In a typical image display device with a touch panel, rather than a spatially floating image display device, the button selected by the user is an image button displayed on the touch panel surface, so the user can judge the distance between the displayed object (e.g., a button) and his or her finger by looking at the touch panel surface. In the spatially floating image display device, however, the spatially floating image 3 floats in the air, so the user may find it hard to grasp its depth. Therefore, in a touch operation on the spatially floating image 3, the user may not easily judge the distance between a button displayed in the spatially floating image 3 and his or her finger. Moreover, with a typical touch-panel image display device, the user can easily tell from the sense of touch whether a button has been touched, whereas in a touch operation on the spatially floating image 3 there is no tactile feedback when an object (for example, a button) is touched, so the user may be unable to tell whether the touch succeeded. In view of the above, the present embodiment provides an auxiliary function for the user's touch operation.
In the following, the processing based on the position of the user's finger is described first, and the specific method for detecting the position of the user's finger is described afterward.
< Auxiliary function (1) of touch operation using virtual shadow >
Fig. 27A to 29B are diagrams illustrating an example of a touch operation assisting method using a virtual shadow. In the example of fig. 27A to 29B, it is assumed that the user selects "yes" by touching the first button BUT1. The spatially floating image display device 1000 of the present embodiment assists the user's touch operation by displaying a virtual shadow on the display image of the spatially floating image 3. Here, "displaying a virtual shadow on the display image of the spatially floating image 3" refers to image display processing that makes the image displayed as the spatially floating image 3 appear as if the shadow of the finger were cast on it, by reducing the brightness of the video signal in a partial region imitating the shape of the finger. Specifically, this processing can be performed by computation in the video control unit 1160 or the control unit 1110. In the virtual shadow display processing, the brightness of the video signal may be set to 0 in the partial region imitating the shape of the finger. However, rather than setting the brightness to 0, it is more preferable to merely reduce it, because an image displayed at reduced brightness in that region is more naturally recognized as a shadow. In that case, the virtual shadow display processing may reduce not only the brightness but also the saturation of the video signal in the partial region imitating the shape of the finger.
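A minimal sketch of such virtual shadow rendering is shown below. The function name, mask source, and dimming and desaturation factors are illustrative assumptions, not the embodiment's implementation:

```python
import numpy as np

def apply_virtual_shadow(frame: np.ndarray, shadow_mask: np.ndarray,
                         dimming: float = 0.4, desaturate: float = 0.3) -> np.ndarray:
    """Darken (and slightly desaturate) the finger-shaped region of the image
    to be displayed as the spatially floating image.

    frame:       H x W x 3 RGB image
    shadow_mask: H x W boolean array, True where the virtual shadow falls
    dimming:     luminance multiplier inside the shadow; values above 0 read
                 more naturally as a shadow than a hard black (brightness 0)
    desaturate:  0..1 blend toward gray inside the shadow
    """
    out = frame.astype(np.float32)
    region = out[shadow_mask] * dimming          # reduce brightness
    gray = region.mean(axis=-1, keepdims=True)   # per-pixel gray level
    out[shadow_mask] = region * (1 - desaturate) + gray * desaturate
    return out.astype(frame.dtype)
```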
The spatially floating image 3 is located in the air, where there is no physical contact surface, so under normal circumstances no shadow of the finger is cast on it. However, with the virtual shadow display processing of the present embodiment, the spatially floating image 3 appears to carry a shadow even in the air where no finger shadow can be cast; the user can thereby recognize the depth of the spatially floating image 3, and the sense of its actual presence is enhanced.
Fig. 27A and 27B show a first timing, at which the user is trying to touch the first button BUT1 on the display surface 3a of the spatially floating image 3 with the finger 210; fig. 28A and 28B show a second timing, at which the finger 210 is closer to the spatially floating image 3 than in fig. 27A and 27B; and fig. 29A and 29B show a third timing, at which the finger 210 touches the first button BUT1 on the display surface 3a of the spatially floating image 3. Fig. 27A, 28A, and 29A show the display surface 3a of the spatially floating image 3 viewed from the front (the normal direction of the display surface 3a), and fig. 27B, 28B, and 29B show it viewed from the side (a direction parallel to the display surface 3a). In fig. 27A to 29B, the x direction is the horizontal direction within the display surface 3a, the y direction is the direction orthogonal to the x axis within the display surface 3a, and the z direction is the normal direction of the display surface 3a (the height direction with respect to the display surface 3a). In the explanatory diagrams of fig. 27A to 33, the spatially floating image 3 is drawn with a thickness in the depth direction for ease of viewing; in reality, if the image display surface of the display device 1 is a plane, the spatially floating image 3 is also a plane with no thickness in the depth direction, and the spatially floating image 3 and the display surface 3a then lie on the same plane. In the description of the present embodiment, the display surface 3a denotes the surface on which the spatially floating image 3 can be displayed, and the spatially floating image 3 denotes the portion where the image is actually displayed.
In fig. 27A to 29B, the detection processing of the finger 210 is performed using, for example, a captured image generated by the imaging unit 1180 and the sensing signal of the air operation detection sensor 1351. In the detection processing of the finger 210, for example, the position (x coordinate, y coordinate) of the tip of the finger 210 on the display surface 3a of the spatially floating image 3 and the height position (z coordinate) of the tip of the finger 210 with respect to the display surface 3a are detected in the detection space. Here, the position (x coordinate, y coordinate) of the tip of the finger 210 on the display surface 3a is the position, on the display surface 3a, of the foot of the perpendicular dropped from the tip of the finger 210 to the display surface 3a. The height position of the tip of the finger 210 with respect to the display surface 3a is depth information indicating how far the finger 210 is from the display surface 3a. The arrangement of the imaging unit 1180, the air operation detection sensor 1351, and the like for detecting the finger 210 will be described in detail later.
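The geometry of this detection output can be sketched as follows. It assumes the sensing stage already yields a 3D fingertip position and that the display surface 3a is described by an origin and orthonormal axes; all names and numbers are illustrative:

```python
import numpy as np

def finger_relative_to_surface(tip, origin, x_axis, y_axis, normal):
    """Return the (x, y) coordinates of the foot of the perpendicular from the
    fingertip to the display surface 3a, plus the signed height dz along the
    surface normal (positive on the user side)."""
    d = np.asarray(tip, dtype=float) - np.asarray(origin, dtype=float)
    return float(d @ x_axis), float(d @ y_axis), float(d @ normal)

# Display surface 3a placed at z = 0 with the user on the +z side.
x, y, dz = finger_relative_to_surface(
    tip=[0.12, 0.30, 0.05],             # fingertip position reported by the sensor
    origin=[0.0, 0.0, 0.0],
    x_axis=np.array([1.0, 0.0, 0.0]),
    y_axis=np.array([0.0, 1.0, 0.0]),
    normal=np.array([0.0, 0.0, 1.0]))
print(x, y, dz)                         # -> 0.12 0.3 0.05
```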
At the first timing shown in fig. 27A and 27B, the finger 210 is farther from the display surface 3a of the spatially floating image 3 than at the second timing shown in fig. 28A and 28B and the third timing shown in fig. 29A and 29B. Let the distance (height position) between the tip of the finger 210 and the display surface 3a of the spatially floating image 3 be dz1. That is, the distance dz1 represents the height of the finger 210 in the z direction with respect to the display surface 3a of the spatially floating image 3.
The distance dz1 shown in fig. 27B, the distance dz2 shown in fig. 28B described later, and the like are defined to be positive on the user side of the display surface 3a of the spatially floating image 3 and negative on the side opposite to the user. That is, if the finger 210 is on the user side of the display surface 3a, the distances dz1 and dz2 take positive values; if the finger 210 is on the side opposite to the user with respect to the display surface 3a, they take negative values.
In the present embodiment, a virtual light source 1500 is assumed to exist on the user side of the display surface 3a of the spatially floating image 3. The installation direction of the virtual light source 1500 may actually be stored as information in the nonvolatile memory 1108 or the memory 1109 of the spatially floating image display device 1000, or it may be a parameter that exists only in the design. Even when it is a parameter existing only in the design, the design installation direction of the virtual light source 1500 can be uniquely determined from the relationship, described later, between the position of the user's finger and the display position of the virtual shadow. In the example of fig. 27A to 29B, the virtual light source 1500 is located on the user side of the display surface 3a, to the right of the display surface 3a as seen from the user. A virtual shadow 1510, imitating the shadow of the finger 210 that would be formed by light from the virtual light source 1500, is displayed in the spatially floating image 3; in the example of fig. 27A to 29B, the virtual shadow 1510 appears on the left side of the finger 210. This virtual shadow 1510 assists the user's touch operation.
In the state of fig. 27B, the tip of the finger 210 is farther from the display surface 3a of the spatially floating image 3 along its normal direction than in the states of fig. 28B and fig. 29B. Accordingly, in fig. 27A, the tip of the virtual shadow 1510 is formed at a position farther in the horizontal direction from the first button BUT1 to be touched than in the states of fig. 28A and fig. 29A. Likewise, in fig. 27A, the distance between the tip of the finger 210 and the tip of the virtual shadow 1510, seen when the display surface 3a is viewed from the front, is at its largest. In fig. 27A, this distance between the tip of the finger 210 and the tip of the virtual shadow 1510 in the horizontal direction of the display surface 3a is dx1.
In fig. 28B, the finger 210 has approached the spatially floating image 3 compared with fig. 27B. Therefore, in fig. 28B, the distance dz2 from the tip of the finger 210 to the display surface 3a of the spatially floating image 3 along its normal direction is smaller than dz1. At this time, in fig. 28A, the virtual shadow 1510 is displayed at a position where the distance between the tip of the finger 210 and the tip of the virtual shadow 1510 in the horizontal direction of the display surface 3a is dx2, which is smaller than dx1. That is, in the example of fig. 28A and 28B, since the virtual light source 1500 is located on the user side with respect to the display surface 3a and placed on its right side as viewed from the user, the distance between the tip of the finger 210 and the tip of the virtual shadow 1510, seen when the display surface 3a is viewed from the front, changes in conjunction with the distance from the tip of the finger 210 to the display surface 3a along its normal direction.
Then, when the tip of the finger 210 meets the tip of the virtual shadow 1510, as shown in fig. 29A and 29B, the distance from the tip of the finger 210 to the display surface 3a of the spatially floating image 3 along its normal direction becomes 0. At this time, the virtual shadow 1510 is displayed such that the distance between the finger 210 and the virtual shadow 1510 in the horizontal direction on the display surface 3a is zero. Thus, the user can recognize that the finger 210 has touched the display surface 3a of the spatially floating image 3. At this time, if the tip of the finger 210 is in contact with the area of the first button BUT1, the user can recognize that the first button BUT1 has been touched. That is, in the example of fig. 29A and 29B, since the virtual light source 1500 is located on the user side with respect to the display surface 3a and placed on its right side as viewed from the user, the horizontal distance between the tip of the finger 210 and the tip of the virtual shadow 1510, seen when the display surface 3a is viewed from the front, changes in conjunction with the distance from the tip of the finger 210 to the display surface 3a along its normal direction. In other words, the display position of the tip of the virtual shadow 1510 is determined from the positional relationship between the position of the virtual light source 1500 and the position of the tip of the user's finger 210, and changes in linkage with changes in the position of the tip of the finger 210.
According to the configuration and processing of the above "assist (1) of the touch operation using the virtual shadow", during a touch operation the user can better grasp the distance (depth) from the finger 210 to the display surface 3a of the spatially floating image 3 along its normal direction from the positional relationship between the finger 210 and the virtual shadow 1510 in the horizontal direction on the display surface 3a. In addition, when the finger 210 comes into contact with an object (for example, a button) in the spatially floating image 3, the user can recognize that the object has been touched. Thus, a better spatially floating image display device can be provided.
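The linkage between fingertip height and shadow position described above amounts to projecting the fingertip onto the display plane from the virtual light source. The following is a minimal sketch of that geometry, not taken from the patent: the display surface 3a is modeled as the plane z = 0 with +z toward the user, matching the sign convention for dz1/dz2 above, and the function name and coordinates are illustrative.

```python
# A minimal sketch (not from the patent text) of projecting the fingertip
# onto the display plane from a finite virtual light source. The display
# surface 3a is the plane z = 0, +z toward the user.

def shadow_tip(light, finger):
    """Intersect the ray light -> finger with the plane z = 0."""
    lx, ly, lz = light    # virtual light source 1500 (user side: lz > 0)
    fx, fy, fz = finger   # fingertip of finger 210 (fz = height above 3a)
    if fz >= lz:
        raise ValueError("fingertip must lie between light source and plane")
    t = lz / (lz - fz)    # ray parameter at which z reaches 0
    return (lx + t * (fx - lx), ly + t * (fy - ly))

# Light on the right of the display (assist (1)): shadow lands left of the finger.
print(shadow_tip(light=(0.5, 0.0, 0.6), finger=(0.0, 0.0, 0.1)))  # x < 0
# As fz -> 0 the shadow tip converges to the fingertip (touch state, fig. 29A/29B).
print(shadow_tip(light=(0.5, 0.0, 0.6), finger=(0.0, 0.0, 0.0)))  # (0.0, 0.0)
```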
Assist (2) of touch operation using virtual shadow
Next, as another example of the touch operation assisting method using the virtual shadow, a case will be described in which the virtual light source 1500 is placed on the left side of the display surface 3a as viewed from the user. Fig. 30A to 32B are diagrams illustrating this other example. Fig. 30A and 30B correspond to fig. 27A and 27B, and show the state at a first time point when the user tries to touch the first button BUT1 on the display surface 3a of the spatially floating image 3 with the finger 210. Fig. 31A and 31B correspond to fig. 28A and 28B, and show the state at a second time point when the finger 210 is closer to the spatially floating image 3 than in fig. 30A and 30B. Fig. 32A and 32B correspond to fig. 29A and 29B, and show the state when the finger 210 touches the spatially floating image 3. For convenience of explanation, fig. 30B, 31B, and 32B are drawn as viewed from the direction opposite to fig. 27B, 28B, and 29B.
In fig. 30A to 32B, the virtual light source 1500 is located on the user side with respect to the display surface 3a and is placed on the left side of the display surface 3a as viewed from the user. A virtual shadow 1510 simulating the shadow that the finger 210 would cast under light from the virtual light source 1500 is displayed in the spatially floating image 3. In fig. 30A to 32B, the virtual shadow 1510 is displayed on the right side of the finger 210. This virtual shadow 1510 assists the touch operation by the user.
In the state of fig. 30B, the tip of the finger 210 is farther from the display surface 3a of the spatially floating image 3 along its normal direction than in the states of fig. 31B and 32B. In fig. 30B, the distance from the tip of the finger 210 to the display surface 3a along its normal direction at this time is dz10. In fig. 30A, the distance between the tip of the finger 210 and the tip of the virtual shadow 1510 in the horizontal direction on the display surface 3a is dx10.
In fig. 31B, the finger 210 has approached the spatially floating image 3 compared with fig. 30B. Therefore, in fig. 31B, the distance dz20 from the tip of the finger 210 to the display surface 3a of the spatially floating image 3 along its normal direction is smaller than dz10. At this time, in fig. 31A, the virtual shadow 1510 is displayed at a position where the distance between the tip of the finger 210 and the tip of the virtual shadow 1510 in the horizontal direction of the display surface 3a is dx20, which is smaller than dx10. That is, in the example of fig. 31A and 31B, since the virtual light source 1500 is located on the user side with respect to the display surface 3a and placed on its left side as viewed from the user, the horizontal distance between the tip of the finger 210 and the tip of the virtual shadow 1510, seen when the display surface 3a is viewed from the front, changes in conjunction with the distance from the tip of the finger 210 to the display surface 3a along its normal direction.
Then, when the tip of the finger 210 meets the tip of the virtual shadow 1510, as shown in fig. 32A and 32B, the distance from the tip of the finger 210 to the display surface 3a of the spatially floating image 3 along its normal direction becomes 0. At this time, the virtual shadow 1510 is displayed such that the distance between the finger 210 and the virtual shadow 1510 in the horizontal direction on the display surface 3a is zero. Thus, the user can recognize that the finger 210 has touched the display surface 3a of the spatially floating image 3. At this time, if the tip of the finger 210 is in contact with the area of the first button BUT1, the user can recognize that the first button BUT1 has been touched. That is, in the example of fig. 32A and 32B, since the virtual light source 1500 is located on the user side with respect to the display surface 3a and placed on its left side as viewed from the user, the horizontal distance between the tip of the finger 210 and the tip of the virtual shadow 1510, seen when the display surface 3a is viewed from the front, changes in conjunction with the distance from the tip of the finger 210 to the display surface 3a along its normal direction.
The configuration and processing of the above-described "assist (2) of touch operation using virtual shadow" can also obtain the same effects as those of the configuration of fig. 27A to 29B.
Here, when the processing of "assist (1) of the touch operation using the virtual shadow" and/or the processing of "assist (2) of the touch operation using the virtual shadow" described above is implemented in the spatially floating image display device 1000, the following embodiments are possible.
The first embodiment is a method of implementing only "assist (1) of the touch operation using the virtual shadow" in the spatially floating image display device 1000. In this case, since the virtual light source 1500 is located on the user side with respect to the display surface 3a and placed on its right side as viewed from the user, the virtual shadow 1510 is displayed on the left side of the tip of the user's finger 210 as viewed from the user. Thus, if the user's finger 210 is a right-hand finger, the display of the virtual shadow 1510 is unlikely to be hidden by the user's right hand or arm, which is desirable. Since statistically there are many right-handed users, even if only "assist (1) of the touch operation using the virtual shadow" is implemented in the spatially floating image display device 1000, the probability that the display of the virtual shadow 1510 can be viewed well is sufficiently high, which is desirable.
Further, as a second embodiment, a configuration may be adopted in which both the processing of "assist (1) of the touch operation using the virtual shadow" and the processing of "assist (2) of the touch operation using the virtual shadow" are implemented, and the device switches between the two according to whether the user touches with the right hand or the left hand. In this case, the probability that the virtual shadow 1510 can be viewed favorably is further improved, and convenience for the user is improved.
Specifically, when the user performs a touch operation with the right hand, the configuration of fig. 27A to 29B is used and the virtual shadow 1510 is displayed on the left side of the finger 210. In this case, the display of the virtual shadow 1510 is unlikely to be hidden by the user's right hand or right arm. On the other hand, when the user performs a touch operation with the left hand, the configuration of fig. 30A to 32B is used and the virtual shadow 1510 is displayed on the right side of the finger 210. In this case, the display of the virtual shadow 1510 is unlikely to be hidden by the user's left hand or left arm. Thus, whether the user touches with the right hand or with the left hand, the virtual shadow 1510 is displayed at a position that is easy for the user to view, and convenience for the user is improved.
Here, the determination of whether the touch operation is performed with the right hand or the left hand may be made based on, for example, the captured image generated by the imaging unit 1180. For example, the control unit 1110 performs image processing on the captured image and detects the user's face, arm, hand, and finger. The control unit 1110 then estimates the user's posture or motion from the detected arrangement of the face, arm, hand, and finger, and determines whether the user is touching with the right hand or the left hand. In this determination, if the lateral center of the user's body can be determined from other parts, the face does not necessarily have to be captured. The determination may be made based on the arrangement of the arm alone, based on the arrangement of the hand alone, or based on a combination of the arm arrangement and the hand arrangement. The face arrangement may further be combined into any of these determinations. A sketch of such a determination is given below.
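The following is a hedged sketch of that right/left-hand determination, assuming the image processing yields 2-D positions for detected body parts. The keypoint names and the simple comparison rule are illustrative assumptions, not the patent's concrete algorithm; in particular, the camera is assumed to face the user without mirroring, so the user's right hand appears on the camera's left.

```python
# A sketch of handedness determination from detected part positions.
# Keypoint names and the decision rule are illustrative assumptions.

def touching_hand(keypoints):
    """Return 'right' or 'left' based on detected part positions.

    keypoints: dict of part name -> (x, y) in image coordinates,
    e.g. from face/arm/hand detection on the captured image.
    """
    hand = keypoints.get("active_hand")    # hand nearest the floating image
    center = keypoints.get("body_center")  # lateral center of the body; per
    if center is None:                     # the text, the face is optional
        center = keypoints.get("face")     # if other parts give the center
    if hand is None or center is None:
        return None                        # cannot decide from this frame
    # Camera faces the user (no mirroring assumed), so the user's right
    # hand appears on the camera's left: smaller image x means right hand.
    return "right" if hand[0] < center[0] else "left"

# Example: hand detected left of the body center in camera coordinates.
print(touching_hand({"active_hand": (120, 300), "body_center": (200, 250)}))
```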
Fig. 27A to 29B and fig. 30A to 32B show the virtual shadow 1510 extending at an angle corresponding to the actual extending direction of the finger 210. The actual extending direction of the finger 210 can be calculated by capturing the finger with one of the imaging units described above. Alternatively, the virtual shadow 1510 may be displayed with its extending direction fixed at a predetermined angle instead of reflecting the angle of the finger 210. This reduces the load on the video control unit 1160 or the control unit 1110 that performs the display control of the virtual shadow 1510.
For example, if the finger 210 is a right-hand finger, the user naturally tries to touch the display surface 3a of the spatially floating image 3 by extending the arm from the front right side of the display surface 3a (near the user), with the finger 210 pointing to the upper left as it faces the display surface 3a. Thus, in the case where the finger 210 is a right-hand finger, if the finger shadow represented by the virtual shadow 1510 is displayed in a predetermined direction toward the upper right on the display surface 3a, the display looks natural even if the actual angle of the finger 210 is not reflected.
Likewise, if the finger 210 is a left-hand finger, the user naturally tries to touch the display surface 3a of the spatially floating image 3 by extending the arm from the front left side of the display surface 3a, with the finger 210 pointing to the upper right as it faces the display surface 3a. Thus, in the case where the finger 210 is a left-hand finger, if the finger shadow represented by the virtual shadow 1510 is displayed in a predetermined direction toward the upper left on the display surface 3a, the display looks natural even if the actual angle of the finger 210 is not reflected.
In addition, when the user's finger 210 is located on the side of the display surface 3a of the spatially floating image 3 opposite to the user, a display may be performed that lets the user recognize that the finger 210 has passed to the back side of the spatially floating image 3 and is in a non-touchable state. For example, a message to that effect may be displayed in the spatially floating image 3. Alternatively, the virtual shadow 1510 may be displayed in a color different from usual, such as red. This can better prompt the user to return the finger 210 to a proper position.
One example of setting conditions of virtual light source
A method of setting the virtual light source 1500 will be described herein. Fig. 33 is a diagram illustrating a method of setting a virtual light source. Fig. 33 shows a case where the user performs a touch operation with the left hand, but the following description is also applicable to a case where the user performs a touch operation with the right hand.
Fig. 33 shows a normal L1 of the display surface 3a extending from a point C at the center of the display surface 3a of the spatially suspended video 3 toward the user, a line L2 connecting the virtual light source 1500 and the point C at which the normal L1 intersects the display surface 3a, and a virtual light source installation angle α defined by an angle between the normal L1 and the line L2. For simplicity of explanation, fig. 33 shows the moment when the tip of the user's finger 210 is located on the line L2.
In fig. 27A to 33, for simplicity of explanation, the virtual light source 1500 is illustrated as being arranged at a position not far from the display surface 3a of the spatially floating image 3 and the user's finger 210. The virtual light source 1500 may indeed be set at such a position, but the most preferable setting example is as follows: the distance between the virtual light source 1500 and the point C at the center of the display surface 3a is set to infinity. The reason is as follows. Suppose a real object plane existed at the same coordinates as the display surface 3a of fig. 27A to 32B, and the light source were not virtual but the sun. The distance to the sun can be approximated as almost infinite, so when the distance (z direction) from the tip of the user's finger to the object plane changes, the horizontal position (x direction) of the tip of the finger's shadow on the real object plane changes linearly with it. Therefore, in the setting of the virtual light source 1500 shown in fig. 27A to 33 of the present embodiment, the distance between the virtual light source 1500 and the point C is set to infinity so that the horizontal position (x direction) of the tip of the virtual shadow 1510 in the spatially floating image 3 changes linearly with the distance (z direction) between the tip of the user's finger 210 and the display surface 3a; the virtual shadow is thereby recognized more naturally by the user.
When the virtual light source 1500 is placed at a position not far from the display surface 3a of the spatially floating image 3 and the user's finger 210, the horizontal position (x direction) of the tip of the virtual shadow 1510 changes nonlinearly as the distance (z direction) between the tip of the finger 210 and the display surface 3a changes, and the calculation of that horizontal position becomes somewhat complicated. In contrast, if the distance between the virtual light source 1500 and the point C at the center of the display surface 3a is set to infinity, the horizontal position (x direction) of the tip of the virtual shadow 1510 changes linearly with the distance (z direction) between the tip of the finger 210 and the display surface 3a, which has the effect of simplifying the calculation.
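With the light source at infinity along installation angle α, the projection of the previous sketch reduces to a single multiplication. The following minimal sketch illustrates this linear relation; the default angle is an illustrative value within the preferred 20° to 70° range discussed below.

```python
import math

# Infinite-distance light source: the horizontal offset of the shadow tip
# is linear in the fingertip height dz, as described above. alpha_deg is
# the installation angle between normal L1 and line L2; 45 degrees is an
# assumed example value.

def shadow_offset_x(dz, alpha_deg=45.0):
    """Horizontal distance between fingertip and virtual shadow tip.

    dz: height of the fingertip above display surface 3a (positive on
    the user side).
    """
    return dz * math.tan(math.radians(alpha_deg))  # linear in dz

print(shadow_offset_x(0.10))  # dz1 -> larger offset (fig. 27A situation)
print(shadow_offset_x(0.05))  # dz2 < dz1 -> proportionally smaller offset
print(shadow_offset_x(0.0))   # touch: offset 0, shadow tip meets fingertip
```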
When the installation angle α of the virtual light source is small, the angle between the line connecting the virtual light source 1500 and the finger 210 and the normal L1 cannot become large from the user's perspective, so the horizontal distance (x direction) between the tip of the finger 210 and the tip of the virtual shadow 1510 on the display surface 3a becomes short. The change in the position of the virtual shadow 1510 as the tip of the finger 210 approaches a touch is then hard for the user to perceive, and the depth-recognition effect during the touch operation risks being reduced. To avoid this, the virtual light source 1500 is preferably placed so that the angle between the line L2 connecting the virtual light source 1500 and the point C and the normal L1 is, for example, 20° or more.
On the other hand, if the angle between the line connecting the virtual light source 1500 and the finger 210 and the normal L1 is near 90°, the distance between the tip of the finger 210 and the tip of the virtual shadow 1510 becomes very long. The display position of the virtual shadow 1510 is then more likely to fall outside the range of the spatially floating image 3, so that the virtual shadow 1510 cannot be displayed. Therefore, to prevent the angle between the line L2 and the normal L1 from approaching 90° too closely, the installation angle α of the virtual light source 1500 is preferably, for example, 70° or less.
That is, a preferable installation position of the virtual light source 1500 is one that is neither too close to the plane containing the normal passing through the finger 210 nor too close to the plane containing the display surface 3a of the spatially floating image 3.
The spatially floating image display device 1000 according to the present embodiment can display virtual shadows as described above. Compared with superimposing a predetermined mark on the image to assist the user's touch operation, this realizes image processing with a physically more natural effect. Therefore, the touch operation assisting technique realized by displaying the virtual shadow in the spatially floating image display device 1000 according to the present embodiment lets the user recognize depth during the touch operation more naturally.
Method for detecting position of finger
Next, a method for detecting the position of the finger 210 will be described. The structure of detecting the position of the finger 210 of the user 230 is specifically described below.
Method (1) for detecting the position of a finger
Fig. 34 is a configuration diagram showing an example of a method for detecting the position of a finger. In the example shown in fig. 34, the position of the finger 210 is detected using one imaging unit 1180 and one air operation detection sensor 1351. In addition, the imaging units in the embodiments of the present invention all have imaging sensors.
The first imaging unit 1180a (1180) is disposed on the opposite side of the spatially suspended image 3 from the user 230. As shown in fig. 34, the first image pickup unit 1180a may be provided in the housing 1190 or may be provided at a position remote from the housing 1190.
The imaging area of the first imaging unit 1180a is set to include, for example, the display area of the spatially floating image 3 and the finger, hand, arm, face, and the like of the user 230. The first imaging unit 1180a captures the user 230 performing a touch operation on the spatially floating image 3 and generates a first captured image. Note that even if the display area of the spatially floating image 3 is captured by the first imaging unit 1180a, the spatially floating image 3 itself does not appear in the image, because it is captured from the side opposite to the traveling direction of the directional light beam forming the spatially floating image 3. In the example of the method (1) for detecting the position of the finger, the first imaging unit 1180a is not a simple imaging unit: a depth sensor is incorporated in addition to the imaging sensor. The structure and processing of the depth sensor may use existing techniques. The depth sensor of the first imaging unit 1180a detects the depth of each part (for example, the user's finger, hand, arm, face, and the like) in the captured image and generates depth information.
The air operation detection sensor 1351 is provided at a position where the display surface 3a of the spatially floating image 3 can be sensed as a sensing target surface. In fig. 34, the air operation detection sensor 1351 is provided below the display surface 3a, but it may also be provided beside or above the display surface 3a. The air operation detection sensor 1351 may be provided in the housing 1190 as shown in fig. 34, or may be provided at a position away from the housing 1190.
The air operation detection sensor 1351 in fig. 34 is a sensor for detecting a position where the finger 210 contacts or overlaps the display surface 3a of the spatially floating image 3. That is, when the tip of the finger 210 approaches the display surface 3a from the user side, the air operation detection sensor 1351 can detect the contact of the finger 210 with the display surface 3a.
For example, the control unit 1110 shown in fig. 3C reads a program for performing image processing and a program for displaying the virtual shadow 1510 from the nonvolatile memory 1108. The control unit 1110 performs first image processing on the first captured image generated by the imaging sensor of the first imaging unit 1180a, detects the finger 210, and calculates the position (x-coordinate, y-coordinate) of the finger 210. The control unit 1110 also calculates the position (z-coordinate) of the tip of the finger 210 with respect to the spatially floating image 3 based on the first captured image and the depth information generated by the depth sensor of the first imaging unit 1180a.
In the example of fig. 34, a touch detection unit is constituted by the imaging sensor and depth sensor of the first imaging unit 1180a, the air operation detection sensor 1351, the air operation detection unit 1350, and the control unit 1110, and detects the position of the user's finger and a touch on an object in the spatially floating image 3. From these, the position (x, y, z) of the finger 210 is calculated. The touch detection result is calculated based on the detection result of the air operation detection unit 1350 alone, or on the combination of that detection result and the information generated by the first imaging unit 1180a.
Then, the control unit 1110 calculates a position (display position) at which the virtual shadow 1510 is displayed based on the position (x-coordinate, y-coordinate, z-coordinate) of the finger 210 and the position of the virtual light source 1500, and generates video data of the virtual shadow 1510 based on the calculated display position.
The control unit 1110 may calculate the display position of the virtual shadow 1510 in the video data each time the position of the finger 210 is calculated. Alternatively, instead of calculating it every time, the display positions of the virtual shadow 1510 corresponding to a plurality of finger positions may be calculated in advance and stored in the nonvolatile memory 1108 as display position map data; after the position of the finger 210 is calculated, the video data of the virtual shadow 1510 is then generated based on the display position map data stored in the nonvolatile memory 1108. The control unit 1110 may also calculate the tip of the finger 210 and the extending direction of the finger 210 in the first image processing, calculate the display position of the tip of the virtual shadow 1510 and an extending direction corresponding to that of the finger, and generate video data of the virtual shadow 1510 adjusted to a display angle matching the actual direction of the finger 210, based on the calculated display position and extending direction. A sketch of the precomputed-map variant follows.
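The following minimal sketch illustrates the display position map data idea under stated assumptions: shadow positions are precomputed for a quantized grid of finger positions and looked up at runtime. The grid step, rounding scheme, and function names are illustrative, not from the patent.

```python
# A sketch of the precomputed display-position map described above: shadow
# display positions for a grid of finger positions are computed once and
# stored (e.g. in nonvolatile memory 1108), then looked up at runtime.

GRID_MM = 5  # quantization step of the finger position, an assumed value

def build_shadow_map(project, xs, ys, zs):
    """Precompute shadow positions; `project` is the light-source projection."""
    return {(x, y, z): project(x, y, z) for x in xs for y in ys for z in zs}

def lookup_shadow(shadow_map, finger_xyz):
    """Quantize the measured finger position and fetch the stored result."""
    key = tuple(GRID_MM * round(v / GRID_MM) for v in finger_xyz)
    return shadow_map.get(key)

# Example with a toy projection (horizontal offset grows with height z):
toy = build_shadow_map(lambda x, y, z: (x - z, y),
                       xs=range(0, 100, GRID_MM),
                       ys=range(0, 100, GRID_MM),
                       zs=range(0, 50, GRID_MM))
print(lookup_shadow(toy, (52, 48, 11)))  # -> position stored for (50, 50, 10)
```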
The control unit 1110 outputs the video data of the virtual shadow 1510 generated to the video control unit 1160. The video control unit 1160 generates video data (superimposed video data) obtained by superimposing video data of the virtual shadow 1510 and other video data of the object or the like, and outputs superimposed video data including the video data of the virtual shadow 1510 to the video display unit 1102.
The video display unit 1102 can display the spatially suspended video 3 obtained by superimposing the virtual shadow 1510 on the object or the like by displaying a video based on superimposed video data including video data of the virtual shadow 1510.
Touch detection for an object is performed, for example, as follows. The air operation detection unit 1350 and the air operation detection sensor 1351 are configured as described with reference to fig. 3A to 3C; when the finger 210 contacts or overlaps the plane including the display surface 3a of the spatially floating image 3, they detect its position and output touch position information indicating that position to the control unit 1110. When the touch position information is input, the control unit 1110 determines whether the position (x-coordinate, y-coordinate) of the finger 210 calculated by the first image processing is included in the display range of any object displayed on the display surface 3a. When the position of the finger 210 is included in the display range of a certain object, the control unit 1110 determines that that object has been touched.
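A minimal sketch of that hit test follows. Representing each object's display range as an axis-aligned rectangle is an assumption for illustration; the patent does not fix the shape of the display ranges.

```python
# Hit test: once contact with the plane of display surface 3a is reported,
# check whether the finger's (x, y) falls inside any object's display range.

def hit_test(finger_xy, objects):
    """Return the name of the touched object, or None."""
    fx, fy = finger_xy
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= fx <= x1 and y0 <= fy <= y1:
            return name
    return None

objects = {"BUT1": (10, 10, 60, 30), "BUT2": (10, 40, 60, 60)}
print(hit_test((25, 20), objects))  # -> 'BUT1' (touch on the first button)
print(hit_test((80, 20), objects))  # -> None (outside every object)
```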
According to the detection method described above, the detection of the position of the finger 210 and the detection of the touch operation can be performed with a simple configuration combining one imaging unit 1180 (the first imaging unit 1180a) having an imaging sensor and a depth sensor with one air operation detection sensor 1351.
As a modification of the method (1) for detecting the position of the finger, the control unit 1110 may detect the touch operation of the finger 210 based only on the first captured image generated by the imaging sensor of the first imaging unit 1180a and the depth information generated by its depth sensor, without using the detection results of the air operation detection unit 1350 and the air operation detection sensor 1351. For example, the control unit 1110 may operate in a normal mode in which the detection result of the air operation detection sensor 1351 is combined with the imaging sensor output and depth information of the first imaging unit 1180a to detect the touch operation of the finger 210, and switch to a second mode when a problem occurs in the operation of the air operation detection sensor 1351 or the air operation detection unit 1350; in the second mode, their detection results are not used, and the touch operation of the finger 210 is detected based only on the first captured image and the depth information of the first imaging unit 1180a.
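The following hedged sketch illustrates that fallback behavior. The health check, the AND-style fusion rule, and the threshold are illustrative assumptions; the patent only states that the sensor results are dropped in the second mode.

```python
# Normal mode fuses the air operation detection sensor 1351 with the
# camera/depth result; the second mode uses camera/depth only.

def detect_touch(sensor_ok, sensor_hit, depth_z, threshold=0.0):
    """Return True when a touch on display surface 3a should be reported.

    sensor_ok:  health of air operation detection sensor 1351
    sensor_hit: contact reported by that sensor (ignored when faulty)
    depth_z:    fingertip height from the depth sensor of unit 1180a
    """
    camera_hit = depth_z <= threshold  # fingertip reached the display plane
    if sensor_ok:
        return sensor_hit and camera_hit   # normal mode: fuse both results
    return camera_hit                      # second mode: camera/depth only

print(detect_touch(sensor_ok=True,  sensor_hit=True,  depth_z=-0.001))  # True
print(detect_touch(sensor_ok=False, sensor_hit=False, depth_z=0.0))     # True
```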
Method (2) for detecting the position of a finger
Fig. 35 is a configuration diagram showing another example of a method for detecting the position of a finger. In the example shown in fig. 35, the position of the finger 210 is detected using 2 imaging units. The second image capturing unit 1180b (1180) and the third image capturing unit 1180c (1180) are disposed on the opposite side of the spatially suspended image 3 from the user 230.
The second image pickup unit 1180b is provided, for example, on the right side as viewed from the user 230. The imaging region of the second imaging unit 1180b is set to include, for example, the spatially floating image 3, the finger, hand, arm, face, and the like of the user 230. The second image capturing unit 1180b captures the user 230 performing a touch operation on the spatially-suspended image 3 from the right side of the user 230, and generates a second captured image.
The third image pickup unit 1180c is provided, for example, on the left side as viewed from the user 230. The imaging region of the third imaging unit 1180c is set to include, for example, the spatially floating image 3, the finger, hand, arm, face, and the like of the user 230. The third imaging unit 1180c captures the user 230 performing a touch operation on the spatially-suspended image 3 from the left side of the user 230, and generates a third captured image. In this way, in the example of fig. 35, the second image capturing unit 1180b and the third image capturing unit 1180c constitute a so-called stereo camera.
The second image pickup unit 1180b and the third image pickup unit 1180c may be provided in the housing 1190 or may be provided at a position away from the housing 1190 as shown in fig. 35. In addition, one image pickup unit may be provided in the housing 1190, and the other image pickup unit may be provided at a position away from the housing 1190.
The control unit 1110 performs a second image process for the second captured image and a third image process for the third captured image, respectively. Then, the control section 1110 calculates the position (x-coordinate, y-coordinate, z-coordinate) of the finger 210 based on the result of the second image processing (second image processing result) and the result of the third image processing (third image processing result).
In the example of fig. 35, the second image capturing unit 1180b, the third image capturing unit 1180c, and the control unit 1110 constitute a touch detection unit that detects the position of the user's finger and the touch of the object in the spatially floating image 3. The position (x-coordinate, y-coordinate, z-coordinate) of the finger 210 is calculated as a position detection result or a touch detection result.
In this way, in the example of fig. 35, the virtual shadow 1510 is generated based on the position of the finger 210 calculated from the second image processing result and the third image processing result. In addition, based on the position of the finger 210 calculated from the second image processing result and the third image processing result, it is determined whether or not the object is touched.
According to this configuration, an imaging unit having a depth sensor is not required. In addition, according to this configuration, by using the second image capturing section 1180b and the third image capturing section 1180c as a stereo camera, the detection accuracy of the position of the finger 210 can be improved. In particular, the detection accuracy of the x-coordinate and the y-coordinate can be improved as compared with the example of fig. 34. Therefore, whether or not the object is touched can be more accurately determined.
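The following sketch shows how a stereo pair like the second and third imaging units can yield the fingertip position, using the standard pinhole-stereo relation for rectified cameras. The calibration values (focal length in pixels, baseline) are illustrative assumptions, not values from the patent.

```python
# Pinhole-stereo triangulation: depth follows from the disparity between
# matched fingertip pixels in the two captured images.

def stereo_fingertip(x_left_px, x_right_px, y_px, f_px, baseline_m):
    """Triangulate the fingertip from matched pixel columns in both images."""
    disparity = x_left_px - x_right_px       # pixels; > 0 for a valid match
    if disparity <= 0:
        return None                          # no depth for this match
    z = f_px * baseline_m / disparity        # distance from the camera pair
    x = x_left_px * z / f_px                 # back-project to metric x
    y = y_px * z / f_px                      # and metric y
    return (x, y, z)

# 100 px of disparity with f = 1000 px and a 10 cm baseline gives z = 1.0 m.
print(stereo_fingertip(620, 520, 240, f_px=1000, baseline_m=0.1))
```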
As a modification of the method (2) for detecting the position of the finger, the display of the virtual shadow 1510 may be controlled by detecting the position (x-coordinate, y-coordinate, z-coordinate) of the user's finger based on the second captured image of the second imaging unit 1180b and the third captured image of the third imaging unit 1180c as described above, while whether an object of the spatially floating image 3 has been touched is detected by the air operation detection unit 1350 or the control unit 1110 based on the detection result of the air operation detection sensor 1351. According to this modification, since the air operation detection sensor 1351 senses the display surface 3a of the spatially floating image 3, the contact between the user's finger 210 and the display surface 3a can be detected with higher accuracy than by the depth-direction detection of the stereo camera composed of the second imaging unit 1180b and the third imaging unit 1180c.
Method (3) for detecting the position of a finger
Fig. 36 is a configuration diagram showing another example of a method for detecting the position of a finger. The example shown in fig. 36 also uses two imaging units to detect the position of the finger 210. The example of fig. 36 differs from the example of fig. 35 in that a fourth imaging unit 1180d (1180), one of the imaging units, is arranged at a position where it captures the display surface 3a of the spatially floating image 3 from the side. As in the example of fig. 34, the first imaging unit 1180a (1180) is provided on the side of the spatially floating image 3 opposite to the user 230. In the example of fig. 36, the first imaging unit 1180a only needs to be capable of imaging; a depth sensor is not required.
The fourth imaging unit 1180d is thus provided around the display surface 3a of the spatially floating image 3. In fig. 36, the fourth imaging unit 1180d is provided below the side of the display surface 3a, but it may also be provided above or beside the display surface 3a. As shown in fig. 36, the fourth imaging unit 1180d may be provided in the housing 1190 or at a position away from the housing 1190.
The imaging region of the fourth imaging unit 1180d is set to include, for example, the spatially floating image 3, the finger, hand, arm, face, and the like of the user 230. The fourth imaging unit 1180d captures a user 230 who touches the spatially-suspended image 3 from the periphery of the display surface 3a of the spatially-suspended image 3, and generates a fourth captured image.
The control unit 1110 performs fourth image processing on the fourth captured image, and calculates a distance (z-coordinate) between the display surface 3a of the spatially-suspended image 3 and the tip of the finger 210. Then, the control unit 1110 performs processing on the virtual shadow 1510 and determines whether or not the object has been touched, based on the position (x-coordinate, y-coordinate) of the finger 210 calculated by the first image processing performed on the first captured image by the first image capturing unit 1180a and the position (z-coordinate) of the finger 210 calculated by the fourth image processing.
In the example of fig. 36, the first image capturing unit 1180a, the fourth image capturing unit 1180d, and the control unit 1110 constitute a touch detection unit that detects the position of the user's finger and the touch on the object. The position (x-coordinate, y-coordinate, z-coordinate) of the finger 210 is calculated as a position detection result or a touch detection result.
According to this configuration, compared with the configuration of the stereo camera of fig. 35, the detection accuracy of the depth of the finger 210 facing the display surface 3a of the spatially-suspended image 3, which is the distance between the display surface 3a of the spatially-suspended image 3 and the tip of the finger 210, can be improved.
As a modification of the method (3) for detecting the position of the finger, the display of the virtual shadow 1510 may be controlled by detecting the position (x-coordinate, y-coordinate, z-coordinate) of the user's finger based on the first captured image of the first imaging unit 1180a and the fourth captured image of the fourth imaging unit 1180d as described above, while whether an object of the spatially floating image 3 has been touched is detected by the air operation detection unit 1350 or the control unit 1110 based on the detection result of the air operation detection sensor 1351. According to this modification, since the air operation detection sensor 1351 senses the display surface 3a of the spatially floating image 3, the contact between the user's finger 210 and the display surface 3a can be detected with higher accuracy than from the fourth captured image of the fourth imaging unit 1180d.
Method for displaying input content to assist touch operation
Examples of assisting the user's touch operation by other methods will now be described. For example, displaying the input content can also assist the touch operation. Fig. 37 is a diagram illustrating a method of displaying the input content to assist the touch operation. Fig. 37 shows a case where numbers are input by touch operations.
The spatially floating image 3 of fig. 37 includes, for example, a key input UI (user interface) display area 1600 and an input content display area 1610 that displays the input content. The key input UI display area 1600 contains a plurality of objects, including objects for inputting numerals and the like, an object 1601 for clearing the input content, and an object 1603 for confirming the input content.
In the input content display area 1610, the content (for example, numerals) input by touch operations is displayed in the spatially floating image 3 sequentially from the left end to the right. The user can confirm the content input by the touch operations while viewing the input content display area 1610. When all desired numerals have been input, the user touches the object 1603, whereby the input content displayed in the input content display area 1610 is registered. A touch operation on the spatially floating image 3 differs from physical contact with the surface of a display device in that the user gets no feeling of contact. It is therefore desirable that, by separately displaying the input content in the input content display area 1610, the user can operate while confirming whether each touch operation has actually taken effect.
On the other hand, when content different from the desired content has been input, such as when a wrong object is touched by mistake, the user can clear the content input last (here, "9") by touching the object 1601. The user then continues the touch operations on the objects for inputting numerals and the like, and touches the object 1603 when all desired numerals have been input.
In this way, by displaying the input content in the input content display area 1610, the user can be made to confirm the input content, and convenience can be improved. In addition, when the user touches the wrong object, the user can correct the input content, and convenience can be improved.
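The behavior around the input content display area 1610 can be summarized as a small piece of UI state handling. The following is a minimal sketch under stated assumptions; the class and method names are illustrative, not from the patent.

```python
# Touched digits are appended, object 1601 clears the most recent entry,
# and object 1603 registers the input content, as described above.

class KeyInputUI:
    def __init__(self):
        self.entered = []          # digits shown left-to-right in area 1610

    def touch_digit(self, d):      # touching a numeral object
        self.entered.append(d)

    def touch_clear(self):         # touching object 1601: drop last entry
        if self.entered:
            self.entered.pop()

    def touch_enter(self):         # touching object 1603: register content
        content = "".join(self.entered)
        self.entered = []
        return content

ui = KeyInputUI()
for d in "129":
    ui.touch_digit(d)
ui.touch_clear()                   # the mis-touched "9" is removed
ui.touch_digit("3")
print(ui.touch_enter())            # -> "123"
```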
Method for highlighting input content to assist touch operation
The input content can also be highlighted to assist the touch operation. Fig. 38 is a diagram illustrating a method of highlighting the input content to assist the touch operation.
Fig. 38 shows an example of highlighting a number entered by a touch operation. As illustrated in fig. 38, when an object corresponding to the numeral "6" is touched, the touched object is cleared, and the inputted numeral "6" is displayed in the area where the object was displayed.
In this way, by displaying the number corresponding to the touched object instead of the object, the user can grasp that the user touched the object, and convenience can be improved. The number corresponding to the touched object may also be referred to as a replacement object that replaces the touched object.
As another method of highlighting the input content, for example, the object touched by the user may be lit brightly or made to blink. Although not shown here, the distance between the finger 210 and the display surface 3a described with reference to fig. 27A to 28B can also be used: as the finger approaches the display surface, the object about to be touched is made brighter than the surrounding objects, and finally, at the stage of contact with the display surface, the degree of emphasis is maximized, for example by lighting the object even more brightly or making it blink. With such a configuration, the user can recognize that the object has been touched, and convenience is improved.
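A minimal sketch of that distance-linked emphasis follows. The linear ramp and the starting height are assumed values; the real device could use any monotone mapping from fingertip height to emphasis.

```python
# Map the fingertip height over display surface 3a to an emphasis level
# for the object about to be touched: 0.0 = normal, 1.0 = maximum
# emphasis (or blinking) at contact.

def emphasis_level(dz, dz_start=0.10):
    """Return 0.0 .. 1.0 from fingertip height dz (z direction).

    dz_start: height at which emphasis begins, an assumed value.
    """
    if dz >= dz_start:
        return 0.0                  # finger still far: no emphasis
    if dz <= 0.0:
        return 1.0                  # contact: maximum emphasis / blink
    return 1.0 - dz / dz_start      # brighten smoothly on approach

for dz in (0.12, 0.05, 0.0):
    print(dz, emphasis_level(dz))
```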
Method (1) for assisting touch operation by vibration
Next, a method of assisting the touch operation by vibration will be described. Fig. 39 is a diagram illustrating an example of a method of assisting the touch operation by vibration. Fig. 39 shows a case where the touch operation is performed using a stylus (touch input device) 1700 instead of the finger 210. The stylus 1700 includes, for example, a communication unit for transmitting and receiving various information such as signals and data to and from devices such as the spatially floating image display device, and a vibration mechanism that generates vibration based on an input signal.
The user operates the stylus 1700 and touches an object displayed in the key input UI display area 1600 of the spatially floating image 3 with the stylus 1700. At this time, for example, the control unit 1110 transmits, from the communication unit 1132, a touch detection signal indicating that a touch on the object has been detected. When the stylus 1700 receives the touch detection signal, it vibrates its vibration mechanism based on the signal. The vibration of the stylus 1700 is transferred to the user, who thereby recognizes that the object has been touched. In this way, the vibration of the stylus 1700 assists the touch operation.
According to this structure, the user can recognize that the object is touched by the vibration.
Here, the case where the stylus 1700 receives the touch detection signal transmitted from the spatially floating image display device has been described, but other configurations are also possible. For example, when a touch on an object is detected, the spatially floating image display device notifies a host device of the detection, and the host device then transmits the touch detection signal to the stylus 1700.
Alternatively, the spatially floating image display device and the host device may transmit the touch detection signal via a network. In this way, the stylus 1700 may also receive the touch detection signal indirectly from the spatially floating image display device.
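The following hedged sketch illustrates this notification path. The message format, the JSON transport, and the vibration duration are all assumptions for illustration; the patent does not fix a protocol for the touch detection signal.

```python
import json

# Device side: build the touch detection signal sent from communication
# unit 1132, directly or via a host device / network as described above.

def make_touch_detection_signal(object_name, route="direct"):
    """Build the payload carried to the stylus's vibration mechanism."""
    assert route in ("direct", "host_device", "network")
    return json.dumps({"event": "touch_detected",
                       "object": object_name,
                       "route": route,
                       "vibrate_ms": 50})   # assumed vibration duration

def on_signal_received(payload, vibrate):
    """Stylus side: vibrate when a touch detection signal arrives."""
    msg = json.loads(payload)
    if msg.get("event") == "touch_detected":
        vibrate(msg["vibrate_ms"])

on_signal_received(make_touch_detection_signal("BUT1"),
                   vibrate=lambda ms: print(f"vibrating for {ms} ms"))
```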
Method (2) for assisting touch operation by vibration
Next, another method of assisting the touch operation by vibration will be described. Here, the user recognizes that the object is touched by vibrating the terminal held by the user. Fig. 40 is a diagram illustrating another example of an assisting method of a touch operation by vibration. In the example of fig. 40, the user 230 wearing the wristwatch-type wearable terminal 1800 performs a touch operation.
The wearable terminal 1800 includes a communication unit for transmitting and receiving various information such as signals and data to and from devices such as the spatially floating image display device, and a vibration mechanism that generates vibration based on an input signal.
The user performs a touch operation with the finger 210 and touches an object displayed in the key input UI display area 1600 of the spatially floating image 3. At this time, for example, the control unit 1110 transmits, from the communication unit 1132, a touch detection signal indicating that a touch on the object has been detected. When the wearable terminal 1800 receives the touch detection signal, it vibrates its vibration mechanism based on the signal. The vibration of the wearable terminal 1800 is transferred to the user, who thereby recognizes that the object has been touched. In this way, the vibration of the wearable terminal 1800 assists the touch operation. A wristwatch-type wearable terminal is described here as an example, but a mobile terminal carried by the user, such as a smartphone, may also be used.
In addition, like the stylus 1700, the wearable terminal 1800 may receive the touch detection signal from a host device, or via a network. Besides the wearable terminal 1800, an information processing terminal held by the user, such as a smartphone, can also be used to assist the touch operation.
With this configuration, the user can recognize that the object is touched via various terminals such as the wearable terminal 1800 held by the user.
Method (3) for assisting touch operation by vibration
Next, another method of assisting the touch operation by vibration will be described. Fig. 41 is a diagram illustrating another example of an assisting method of a touch operation by vibration. In the example of fig. 41, the user 230 stands on the vibration plate 1900 for a touch operation. The vibrating plate 1900 is provided at a predetermined position where the user 230 performs a touch operation. As a practical use mode, the vibrating plate 1900 is disposed below a cushion, not shown, for example, and the user 230 stands on the vibrating plate 1900 with the cushion interposed therebetween.
As shown in fig. 41, the vibrating plate 1900 is connected to, for example, a communication unit 1132 of the spatially suspended image display device 1000 via a cable 1910. When a touch on the object is detected, for example, the control unit 1110 supplies an ac voltage to the vibration plate 1900 via the communication unit 1132 for a predetermined time. The vibration plate 1900 vibrates during the period of supplying the ac voltage. That is, the ac voltage is a control signal for vibrating the vibration plate 1900, which is output from the communication unit 1132. Vibrations generated by the vibrating plate 1900 are transmitted from under the foot to the user 230, and the user 230 can recognize that the object is touched. In this way, the touch operation is assisted by the vibration of the vibration plate 1900.
The frequency of the ac voltage is set to a value within a range in which the user 230 can feel vibration. The frequency of vibration that a person can feel is approximately in the range of 0.1Hz to 500 Hz. Therefore, the frequency of the alternating voltage is preferably set within this range.
The frequency of the ac voltage is preferably changed appropriately according to the characteristics of the vibrating plate 1900. For example, when the vibration plate 1900 vibrates in the vertical direction, the sensitivity of a person to vibrations of about 410Hz is highest. When the vibration plate 1900 vibrates in the horizontal direction, the sensitivity of a person to vibrations of about 12Hz is highest. Further, at frequencies above 34Hz, the sensitivity of the person to the vertical direction is higher than the horizontal direction.
Then, when the vibration plate 1900 vibrates in the vertical direction, the frequency of the ac voltage is preferably set to a value within a range including 410Hz, for example. When the vibration plate 1900 vibrates in the horizontal direction, the frequency of the ac voltage is preferably set to a value within a range including 12Hz, for example. The peak voltage and the frequency of the ac voltage may be appropriately adjusted according to the performance of the vibrating plate 1900.
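The frequency selection described above can be summarized in a short sketch. Keeping the AC drive inside the 0.1 Hz to 500 Hz band people can feel, and biasing toward the sensitivity peaks of roughly 410 Hz (vertical motion) and 12 Hz (horizontal motion), gives the following; the concrete defaults are assumptions within the ranges stated above.

```python
# Pick an AC voltage frequency for the vibration plate 1900 based on its
# vibration direction, per the sensitivity figures described above.

FEEL_MIN_HZ, FEEL_MAX_HZ = 0.1, 500.0   # perceptible vibration band

def drive_frequency(vibration_axis):
    """Return a drive frequency near the sensitivity peak for this axis."""
    peak = {"vertical": 410.0, "horizontal": 12.0}[vibration_axis]
    # Clamp defensively into the perceptible range (already inside it here).
    return min(max(peak, FEEL_MIN_HZ), FEEL_MAX_HZ)

print(drive_frequency("vertical"))    # 410.0 Hz
print(drive_frequency("horizontal"))  # 12.0 Hz
```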
According to this structure, the user 230 can recognize, through vibration from underfoot, that an object has been touched. Moreover, with this configuration, the display of the spatially floating image 3 can be kept unchanged when an object is touched, so even if another person peeps at the touch operation, the possibility that the input content becomes known is reduced, further improving security.
Modification 1 of object display
Other examples of object display in the spatially floating image 3 by the spatially floating image display device 1000 will be described. The spatially floating image display device 1000 displays the spatially floating image 3, which is an optical image of the rectangular image displayed by the display device 1. The rectangular image displayed by the display device 1 corresponds to the spatially floating image 3. Therefore, when an image having luminance over the entire display range of the display device 1 is displayed, the spatially floating image 3 likewise has luminance over its entire display range. In this case, although a floating-in-air impression of the whole rectangular spatially floating image 3 is obtained, it is difficult to obtain a floating impression of each individual object displayed in the image. Conversely, a method may be adopted in which only the object portions of the spatially floating image 3 are displayed as images having luminance. This method gives a satisfactory floating impression of the objects, but on the other hand makes it difficult to recognize their depth.
In the display example of fig. 42A of the present embodiment, two objects, a first button BUT1 displayed as "yes" and a second button BUT2 displayed as "no", are displayed within the display range 4210 of the spatially floating image 3. The display areas of these two objects are areas in which the display device 1 displays an image having luminance. A black display area 4220 is arranged around the display areas of the two objects so as to surround them.
The black display area 4220 is an area in which the display device 1 displays black. That is, the black display area 4220 is an area whose image information has no luminance; in other words, an area in which no image information having luminance exists. A black area displayed on the display device 1 is a spatial area in which the user cannot see the spatially floating image 3 as an optical image. Further, in the display example of fig. 42A, a frame image display area 4250 is arranged within the display range 4210 so as to surround the black display area 4220.
The frame image display area 4250 is an area in which a simulated frame is displayed using an image having luminance in the display device 1. The simulated frame may be displayed as a single-color frame image, as a frame image using a designed pattern, or as a frame of a broken line or the like.
By displaying the frame image of the frame image display area 4250 in this way, the user can easily recognize the plane to which the two objects, the first button BUT1 and the second button BUT2, belong, and can easily grasp their depth positions. At the same time, the black display area 4220 around these objects is invisible to the user, so the floating-in-air impression of the two objects can be emphasized. In the spatially floating image 3 of this example, the frame image display area 4250 lies at the outermost periphery of the display range 4210, but depending on the situation it need not be at the outermost periphery.
As described above, according to the display example of fig. 42A, the floating-in-air impression of the objects displayed in the spatially floating image 3 and the recognition of their depth positions can both be better achieved.
Modification 2 of object display
Fig. 42B is a modification of the object display of fig. 42A. It is a display example in which a message indicating that touch operation is possible is displayed near objects on which the user can perform a touch operation, such as the first button BUT1 and the second button BUT2. As shown in fig. 42B, a mark such as an arrow pointing to an object that can be touch-operated may also be displayed. In this way, the user can easily recognize which objects accept touch operations.
Here, the message display and the mark display described above can themselves be displayed so as to be surrounded by the black display region 4220, whereby a sense of floating in air is obtained for them as well.
Modification of the spatially floating image display device
Next, a modification of the spatially floating image display device will be described with reference to fig. 43. The spatially floating image display device of fig. 43 is a modification of that of fig. 3A, and the same reference numerals are given to the same components. The explanation below focuses on the differences from fig. 3A; redundant explanation of components already described there is omitted.
Here, the spatially floating image display device of fig. 43 converts the image light from the display device 1 into the spatially floating image 3 via the polarization separation member 101, the λ/4 plate 21, and the retro-reflection member 2, as in the spatially floating image display device of fig. 3A.
Unlike the device of fig. 3A, the spatially floating image display device of fig. 43 is provided with a physical frame 4310 that surrounds the spatially floating image 3 from its periphery. An opening window is provided in the physical frame 4310 along the outer periphery of the spatially floating image 3, and the user views the spatially floating image 3 through this opening window. When the spatially floating image 3 is rectangular, the opening window of the physical frame 4310 is also rectangular.
In the example of fig. 43, an air operation detection sensor 1351 is provided in a part of the opening window of the physical frame 4310. As described above with reference to fig. 3C, the air operation detection sensor 1351 is capable of detecting a touch operation of a user's finger on an object displayed in the spatially suspended image 3.
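As a rough sketch of how such a sensor reading can be mapped to the displayed objects, the following Python code hit-tests a fingertip position against object rectangles in the plane of the floating image. The coordinate convention, the ObjectRegion type, and the touch-depth threshold are assumptions made for illustration, not details taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ObjectRegion:
    """Rectangle of one touchable object in the floating-image plane (mm)."""
    name: str
    x: float
    y: float
    w: float
    h: float

TOUCH_DEPTH_MM = 5.0   # assumed tolerance around the image plane along its normal

def hit_test(finger_xyz, objects):
    """Return the object touched by the fingertip, or None.

    finger_xyz is (x, y, z), where z is the signed distance of the
    fingertip from the display surface of the floating image.
    """
    x, y, z = finger_xyz
    if abs(z) > TOUCH_DEPTH_MM:
        return None   # the finger has not reached the image plane
    for obj in objects:
        if obj.x <= x <= obj.x + obj.w and obj.y <= y <= obj.y + obj.h:
            return obj
    return None

buttons = [ObjectRegion("BUT1 (yes)", 40.0, 50.0, 60.0, 30.0),
           ObjectRegion("BUT2 (no)", 140.0, 50.0, 60.0, 30.0)]
print(hit_test((65.0, 62.0, 2.0), buttons))   # fingertip lands on BUT1
```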
In the example of fig. 43, the physical frame 4310 has a cover structure that covers the polarization separation member 101 on the upper surface of the spatially floating image display device. The cover structure may cover not only the polarization separation member 101 but also the storage portion of the display device 1 and the retro-reflective member 2. However, the physical frame 4310 of fig. 43 is only one example of the present embodiment and does not necessarily have a cover structure.
Fig. 44 shows the physical frame 4310 and the opening window 4450 of the spatially floating image display device of fig. 43 when the spatially floating image 3 is not displayed. At this time, the user cannot see the spatially floating image 3.
On the other hand, fig. 45 shows an example of the structure of the opening window 4450 of the physical frame 4310 of the spatially floating image display device of fig. 43 and the display of the spatially floating image 3 according to the present embodiment. In the example of fig. 45, the opening window 4450 is configured to substantially coincide with the display range 4210 of the spatially suspended image 3.
Further, in the display example of the spatially floating image 3 in fig. 45, an object display similar to that of fig. 42A is performed. Specifically, objects on which the user can perform touch operations, such as the first button BUT1 and the second button BUT2, are displayed. These touchable objects are surrounded by the black display region 4220, so that a suitable sense of spatial floating is obtained.
A frame image display area 4470 is provided around the black display region 4220. The outer periphery of the frame image display area 4470 coincides with the display range 4210, and the device is arranged so that the edge of the opening window 4450 substantially coincides with the display range 4210.
Here, in the display example of fig. 45, the frame image of the frame image display area 4470 is displayed in the same color as the physical frame 4310 surrounding the opening window 4450. For example, if the physical frame 4310 is white, the image of the frame image display area 4470 is also displayed in white; if the physical frame 4310 is gray, the image is also displayed in gray; and if the physical frame 4310 is yellow, the image is also displayed in yellow.
In this way, by displaying the frame image of the frame image display area 4470 in the same color as the physical frame 4310 around the opening window 4450, the spatial continuity between the physical frame 4310 and the frame image can be emphasized for the user.
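A minimal sketch of this color matching, reusing the NumPy image-buffer convention of the earlier sketch, is shown below. The bezel color table and the frame thickness are illustrative assumptions; the only point being made is that the fill color of the frame image display area is derived from the configured color of the physical frame.

```python
import numpy as np

# Assumed color presets for the physical frame 4310.
BEZEL_RGB = {"white": (255, 255, 255), "gray": (128, 128, 128), "yellow": (255, 210, 0)}

def apply_bezel_matched_frame(img: np.ndarray, bezel: str, thickness: int = 12) -> None:
    """Fill the frame image display area 4470 with the physical bezel color."""
    rgb = BEZEL_RGB.get(bezel, BEZEL_RGB["gray"])
    img[:thickness, :] = rgb
    img[-thickness:, :] = rgb
    img[:, :thickness] = rgb
    img[:, -thickness:] = rgb

frame = np.zeros((360, 640, 3), dtype=np.uint8)   # black display region 4220
apply_bezel_matched_frame(frame, "yellow")        # visually continues a yellow bezel
```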
In general, a user can spatially recognize a physical structure more reliably than a spatially floating image. Therefore, as in the display example of fig. 45, by displaying the spatially floating image so as to emphasize spatial continuity with the physical frame, the user can grasp the depth of the spatially floating image more easily.
Further, in the display example of fig. 45, the floating images of the touchable objects, for example the first button BUT1 and the second button BUT2, are formed on the same plane as the frame image display area 4470, so the user can grasp the depths of these buttons more reliably based on the depth recognition of the physical frame 4310 and the frame image display area 4470.
That is, according to the display example of fig. 45, both the sense of floating in air and the recognition of the depth position of the objects displayed in the spatially floating image 3 can be better achieved. Moreover, the depth position of the displayed objects can be recognized even more easily than in the display example of fig. 42A.
In the display example of fig. 45, as in the display example of fig. 42B, a mark such as an arrow indicating an object that the user can perform a touch operation may be displayed.
As a modification of the structure of the spatially floating image display device of fig. 43, as shown in fig. 46, a light shielding plate 4610 and a light shielding plate 4620 having black surfaces with low light reflectance may be provided inside the cover structure of the physical frame 4310. With such light shielding plates, even if the user looks into the device through the opening window, members unrelated to the spatially floating image 3 are not seen. This prevents the situation in which viewing the spatially floating image 3 becomes difficult because a real object unrelated to it is visible behind the black display region 4220 of fig. 42A or the like. In addition, stray light that would degrade the spatially floating image 3 can be suppressed.
Here, the light shielding plate 4610 and the light shielding plate 4620 may be configured as a tubular quadrangular prism corresponding to the rectangle of the spatially floating image 3, extending from the vicinity of the opening window to the storage portion of the display device 1 and the retro-reflective member 2. Alternatively, in consideration of the divergence angle of the light and of securing freedom for the user's viewpoint, the opposed light shielding plates may form a quadrangular frustum extending from the vicinity of the opening window to the storage portion of the display device 1 and the retro-reflective member 2. In this case, the frustum gradually widens from the vicinity of the opening window toward the storage portion.
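How much the frustum must widen so that the shield does not clip the image light follows from simple geometry: the half-width grows by the depth times the tangent of the divergence half-angle. The sketch below assumes the angle and the dimensions purely for illustration; none of these values come from the embodiment.

```python
import math

def frustum_half_width(window_half_width_mm: float,
                       depth_mm: float,
                       divergence_deg: float) -> float:
    """Half-width the light shield needs at depth_mm below the opening window
    so that light diverging at divergence_deg from the axis is not clipped."""
    return window_half_width_mm + depth_mm * math.tan(math.radians(divergence_deg))

# Example: a 150 mm half-width opening window, shields reaching 300 mm down
# to the storage portion, image light diverging 10 degrees from the axis.
print(f"{frustum_half_width(150.0, 300.0, 10.0):.1f} mm")   # -> 202.9 mm
```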
The cover structure and the light shielding plates shown in fig. 46 may also be used in a spatially floating image display device that performs display other than the example of fig. 45; that is, the frame image display area 4470 need not necessarily be displayed. If the physical frame 4310 of the cover structure is arranged so as to surround the display range 4210 of the spatially floating image 3, recognition of the depth position of the displayed objects can be improved even without the frame image display area 4470 of fig. 45.
The various embodiments are described in detail above, but the present invention is not limited to the above-described embodiments and includes various modifications. For example, the above embodiments are described in detail to explain the present invention in an easily understandable manner, and the invention is not necessarily limited to configurations provided with all of the described structures. A part of the structure of one embodiment may be replaced with the structure of another embodiment, and the structure of another embodiment may be added to the structure of one embodiment. Further, for a part of the structure of each embodiment, other structures may be added, deleted, or substituted.
With the technique of the present embodiment, high-resolution, high-brightness image information is displayed in a state of floating in space, so that, for example, a user can operate the device without anxiety about contact infection of an infectious disease. When used in a system shared by an unspecified number of users, the technique provides a non-contact user interface that can be used without such anxiety and reduces the risk of transmitting contagious diseases. It thereby contributes to "Goal 3: Good health and well-being" of the Sustainable Development Goals (SDGs) advocated by the United Nations.
In the technique of the present embodiment, the divergence angle of the emitted image light is narrowed and the light is aligned to a specific polarization, so that it is reflected efficiently by the retro-reflective member; light utilization efficiency is therefore high, and a bright, sharp spatially floating image can be obtained. The technique can thus provide a highly usable non-contact user interface with greatly reduced power consumption, contributing to "Goal 9: Industry, innovation and infrastructure" and "Goal 11: Sustainable cities and communities" of the Sustainable Development Goals (SDGs) advocated by the United Nations.
Further, the technique of the present embodiment can form a spatially floating image from image light having high directivity (straight-traveling property). When displaying an image requiring high security, or an image that should be kept confidential from a person facing the user, in an ATM of a bank, a ticket vending machine of a station, or the like, displaying it with highly directional image light provides a non-contact user interface with little risk of the spatially floating image being peeped at by persons other than the user. This contributes to "Goal 11: Sustainable cities and communities" of the Sustainable Development Goals (SDGs) advocated by the United Nations.
Description of the reference numerals
1 … … display device, 2 … … retro-reflective member, 3 … … spatially floating image (aerial image), 105 … … window glass, 100 … … transparent member, 101 … … polarization separation member, 12 … … absorption type polarizer, 13 … … light source device, 54 … … light direction conversion panel, 151 … … retro-reflective member, 102, 202 … … LED substrate, 203 … … light guide, 205, 271 … … reflector, 206, 270 … … phase difference plate, 300 … … spatially floating image, 301, 302 … … ghost images of the spatially floating image, 230 … … user, 1000 … … spatially floating image display device, 1110 … … control unit, 1160 … … image control unit, 1180 … … image capturing unit, 1102 … … image display unit, 1350 … … air operation detection unit, 1351 … … air operation detection sensor, 1500 … … virtual light source, 1510 … … virtual shadow, 1610 … … input content display area, 1800 … … wearable terminal, 4210 … … display range, 4220 … … black display region, 4250 … … frame image display area
Claims (36)
1. A spatially suspended image display device, comprising:
a display device for displaying an image;
a retro-reflective member that reflects image light from the display device and forms a spatially floating image in the air using the reflected light;
a sensor that detects a position of a finger of a user performing a touch operation on 1 or more objects displayed in the spatially-suspended image; and
a control unit,
wherein the control unit controls image processing for an image to be displayed on the display device based on the position of the user's finger detected by the sensor, thereby displaying a virtual shadow of the user's finger on the display surface of the spatially-suspended image where no physical contact surface exists.
2. The spatially suspended image display device of claim 1, wherein:
when the position of the tip of the finger of the user changes in the normal direction on the near side of the display surface of the spatially-suspended image as viewed by the user, the position of the tip of the virtual shadow displayed in the spatially-suspended image changes in the left-right direction on the display surface of the spatially-suspended image.
3. The spatially suspended image display device of claim 2, wherein:
The position of the tip of the virtual shadow displayed in the spatially-suspended image changes linearly in the left-right direction on the display surface of the spatially-suspended image as the position of the tip of the user's finger changes in the normal direction.
4. The spatially suspended image display device of claim 1, wherein:
comprising an imaging unit that captures an image of the user's hand or arm,
when the finger of the user performing a touch operation on 1 or more objects displayed in the spatially-suspended image belongs to the right hand, the virtual shadow is displayed in the spatially-suspended image at a position on the left side of the tip of the finger as viewed from the user, and
when the finger of the user performing a touch operation on 1 or more objects displayed in the spatially-suspended image belongs to the left hand, the virtual shadow is displayed in the spatially-suspended image at a position on the right side of the tip of the finger as viewed from the user.
5. The spatially suspended image display device of claim 1, wherein:
the control unit detects a position of a tip of the finger on a display surface of the spatially-suspended image and a height position of the tip of the finger with respect to the display surface using the sensor that detects a position of the finger of the user.
6. The spatially suspended image display device of claim 1, wherein:
whether the finger of the user touches the display surface of the spatially-suspended image is detected by a sensor different from the sensor that detects the position of the finger of the user.
7. The spatially suspended image display device of claim 1, wherein:
the position of the virtual shadow displayed on the display surface of the spatially-suspended image is a position determined based on the positional relationship between the position of the virtual light source and the position of the user's finger detected by the sensor.
8. The spatially suspended image display device of claim 7, wherein:
the position of the virtual light source satisfies:
the virtual light source setting angle, which is defined as the angle between a normal extending from the center point of the display surface of the spatially-suspended image toward the user and a line connecting the virtual light source to that center point, is 20° or more.
9. The spatially suspended image display device of claim 1, wherein:
the angle of the extension direction of the virtual shadow displayed on the display surface of the spatially-suspended image changes in conjunction with the angle of the user's finger captured by an imaging unit included in the spatially-suspended image display device.
10. The spatially suspended image display device of claim 1, wherein:
the angle of the extension direction of the virtual shadow displayed on the display surface of the spatially-suspended image is a fixed angle that is not linked to the angle of the user's finger captured by the imaging unit of the spatially-suspended image display device.
11. A spatially suspended image display device, comprising:
a display device for displaying an image;
a retro-reflective member that reflects image light from the display device and forms a spatially floating image in the air using the reflected light;
a sensor that detects a touch operation performed by a finger of a user on 1 or more objects displayed in the spatially-suspended image; and
a control unit,
wherein the control unit assists the user with the touch operation based on a detection result of the touch operation obtained by using the sensor when the user performs the touch operation on the object.
12. The spatially suspended image display device of claim 11, wherein:
the spatially-suspended image includes an input content display area that displays content input by the touch operation at a different position from the object.
13. The spatially suspended image display device of claim 11, wherein:
when the object is touched, the touched object is erased and a replacement object representing the content corresponding to the touched object is displayed.
14. The spatially suspended image display device of claim 11, wherein:
the object being touched is lit up when the object is touched.
15. The spatially suspended image display device of claim 11, wherein:
the object being touched is caused to blink when the object is touched.
16. The spatially suspended image display device of claim 11, wherein:
the user performs the touch operation using a touch input device, and the touch input device is vibrated when the object is touched.
17. The spatially suspended image display device of claim 11, wherein:
a terminal held by the user is vibrated when the object is touched.
18. The spatially suspended image display device of claim 17, wherein:
the terminal is a wearable terminal.
19. The spatially suspended image display device of claim 17, wherein:
The terminal is a smartphone.
20. The spatially suspended image display device of claim 11, wherein:
when the object is touched, a control signal for vibrating a vibration plate disposed under the user's feet is output from a communication unit provided in the spatially suspended image display device.
21. A spatially suspended image display device, comprising:
a display device for displaying an image; and
a retro-reflective plate for reflecting the image light from the display device to form a spatially floating image in the air by the reflected light,
wherein, within the display range of the spatially floating image, there is an area in which an object is displayed, a black display area is arranged so as to surround the area in which the object is displayed, and a frame image display area is arranged so as to surround the black display area.
22. The spatially suspended image display device of claim 21, wherein:
the black display region is a region in which no image information having brightness exists in a display image of the display device corresponding to the spatially floating image.
23. The spatially suspended image display device of claim 21, wherein:
A sensor is included that detects a position of a finger of a user performing a touch operation on the object.
24. The spatially suspended image display device of claim 23, wherein:
a message indicating that the object is an object capable of touch operation is displayed in the vicinity of the object.
25. The spatially suspended image display device of claim 24, wherein:
a marker indicating the object is displayed in addition to the message.
26. The spatially suspended image display device of claim 21, wherein:
the spatially suspended image display device includes a physical frame arranged so as to surround the spatially floating image from its periphery.
27. The spatially suspended image display device of claim 26, wherein:
the display color of the frame image display area is a color of the same color family as the color of the physical frame.
28. The spatially suspended image display device of claim 26, wherein:
the physical frame forms an opening window of a cover structure that covers a storage portion storing the display device and the retro-reflective plate.
29. The spatially suspended image display device of claim 28, wherein:
a light shielding plate is provided in the cover structure, extending from the vicinity of the opening window to the storage portion storing the display device and the retro-reflective plate.
30. The spatially suspended image display device of claim 29, wherein:
the light shielding plate forms a tubular quadrangular prism.
31. The spatially suspended image display device of claim 29, wherein:
the light shielding plate forms a quadrangular frustum.
32. The spatially suspended image display device of claim 31, wherein:
the quadrangular frustum gradually widens from the vicinity of the opening window toward the storage portion storing the display device and the retro-reflective plate.
33. A spatially suspended image display device, comprising:
a display device for displaying an image;
a retro-reflective plate for reflecting the image light from the display device and forming a spatially floating image in the air by the reflected light; and
a physical frame arranged so as to surround the spatially floating image from its periphery,
wherein the physical frame forms an opening window of a cover structure that covers a storage portion storing the display device and the retro-reflective plate,
and a light shielding plate is provided that extends from the vicinity of the opening window to the storage portion storing the display device and the retro-reflective plate.
34. The spatially suspended image display device of claim 33, wherein:
the light shielding plate forms a tubular quadrangular prism.
35. The spatially suspended image display device of claim 33, wherein:
the light shielding plate forms a quadrangular frustum.
36. The spatially suspended image display device of claim 35, wherein:
the quadrangular frustum gradually widens from the vicinity of the opening window toward the storage portion storing the display device and the retro-reflective plate.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-211142 | 2020-12-21 | ||
JP2021109317A (JP7570293B2) | | 2021-06-30 | Space-floating image display device |
JP2021-109317 | 2021-06-30 | ||
PCT/JP2021/045901 WO2022138297A1 (en) | 2020-12-21 | 2021-12-13 | Mid-air image display device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116783644A true CN116783644A (en) | 2023-09-19 |
Family
ID=85107502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180086904.9A Pending CN116783644A (en) | 2020-12-21 | 2021-12-13 | Space suspension image display device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116783644A (en) |
2021-12-13: Application filed in China as CN202180086904.9A (publication CN116783644A); status: Pending
Also Published As
Publication number | Publication date |
---|---|
JP2023006618A (en) | 2023-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6270898B2 (en) | Non-contact input method | |
WO2022138297A1 (en) | Mid-air image display device | |
JP6757779B2 (en) | Non-contact input device | |
JP2016154035A5 (en) | ||
US20100225564A1 (en) | Image display device | |
WO2023276921A1 (en) | Air floating video display apparatus | |
WO2022137940A1 (en) | Spatial floating image display apparatus | |
WO2022113745A1 (en) | Floating-in-space-image display device | |
US12118136B2 (en) | Air floating video display apparatus | |
JP6663736B2 (en) | Non-contact display input device and method | |
JP2022097901A (en) | Space floating video display device | |
CN116783644A (en) | Space suspension image display device | |
JP7570293B2 (en) | Space-floating image display device | |
CN116348806A (en) | Space suspension image display device and light source device | |
JP5856357B1 (en) | Non-contact input device and method | |
WO2022270384A1 (en) | Hovering image display system | |
WO2023068021A1 (en) | Aerial floating video display system | |
CN118696537A (en) | Aerial suspension image display device | |
WO2023112463A1 (en) | Aerial image information display system | |
JP5957611B1 (en) | Non-contact input device and method | |
JP2022089271A (en) | Space floating picture display device | |
CN118235080A (en) | Aerial suspension image display device | |
CN117837138A (en) | Space suspension image information display system and three-dimensional sensing device used by same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |