WO2008041315A1 - Image display device (Dispositif d'affichage d'images) - Google Patents
Image display device
- Publication number
- WO2008041315A1 (PCT/JP2006/319707, JP2006319707W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- detected
- attribute
- detected object
- display device
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/27—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/50—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
- G02B30/56—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
Definitions
- The present invention relates to the technical field of an image display device that displays a two-dimensional image stereoscopically based on, for example, a 3D (three-dimensional) floating vision system.
- This type of stereoscopic two-dimensional image can improve the sense of reality, visibility, and amusement of interior equipment, sales promotion displays, communication terminal devices, game machines, and the like. Various methods for displaying a stereoscopic two-dimensional image have therefore been proposed. For example, a polarization method has been proposed in which a viewer wears polarized glasses and views left and right parallax images carried by different polarization states. With this method, however, wearing the polarized glasses may be troublesome for the viewer.
- A lenticular lens method has been proposed as a stereoscopic image display method that does not require polarized glasses (see, for example, Patent Document 1).
- In this method, three-dimensional expression and moving-picture expression are realized by displaying a plurality of images latently on a single screen and viewing them through a translucent screen in which semi-cylindrical lenses of a certain width are connected in the horizontal direction.
- A 3D floating vision system has also been proposed. According to this method, a two-dimensional image can be displayed stereoscopically with a relatively simple configuration by forming the two-dimensional image as a real image through a microlens array.
- A technique has further been proposed that realizes an interactive device in this method by using a position detection sensor and changing the stereoscopic two-dimensional image displayed on the imaging plane in accordance with the sensor's output signal (see, for example, Patent Document 2).
- Patent Document 1 Japanese Patent Laid-Open No. 10-221644
- Patent Document 2 JP-A-2005-141102
- Patent Document 1 may involve the following cost problem. That is, the lenticular lens method described above requires, from the imaging stage, parallax images corresponding to both eyes of the viewer in order to make a plurality of images latent on one screen. Supplying these images requires many operations, such as computer image processing, lenticular lens design, and accurate alignment of the lens and the image, which increases cost.
- Patent Document 2 can solve the cost problem of Patent Document 1 and secures a certain degree of rendering effect and interactivity, but there is still room for improvement in these respects.
- Toy guns, knives, forks, dryers, brushes, and the like actually have different attributes (shape, application, function, etc.).
- The present invention has been made in view of the above problems, and it is an object of the present invention to provide an image display device capable of displaying a stereoscopic two-dimensional image relatively easily and of improving the rendering effect and interactivity.
- An image display device according to the present invention comprises: display means for displaying an image on a screen; image transmission means, arranged in the optical path of the display light constituting the image, for transmitting that display light so that a real image is displayed as a floating image on an imaging plane located in a space on the opposite side of the screen; attribute specifying means for specifying the attribute of a detected object located in a real space portion including that space; and control means for controlling the display means so that the floating image changes to a form pre-associated with the specified attribute of the detected object.
- an image is displayed on the screen by display means such as a color liquid crystal display device.
- Image transmission means, including for example a microlens array, is arranged in the optical path of the display light constituting the image.
- the display light constituting the image is transmitted by the image transmission means, and the real image is displayed as a floating image on the imaging plane located in the space opposite to the screen.
- The “floating image” is a real image that appears to a user at the observation position (that is, within the range of the viewing angle) as if it were floating in the air.
- an image display method such as 3D floating vision (registered trademark of the present applicant) can be considered.
- the attribute of the detected object located in the real space including the above-described space is specified by the attribute specifying means.
- the “detected object” referred to here is typically a tool having some attribute, and includes, for example, a toy gun or a fork.
- the “attribute” of the detected object is a unique property or characteristic of the detected object itself, and is a comprehensive concept including, for example, a shape, a function, or an idea.
- Such an attribute of the detected object is specified, for example, by being detected by attribute detection means, which is an example of the attribute specifying means, as described later.
- The display means is controlled by control means, including an arithmetic circuit, a recording circuit, and the like, so that the floating image changes to a form pre-associated with the attribute of the specified detected object.
- For example, the attribute of a toy gun, a tool that “fires”, is associated in advance with the form “the character drawn in the floating image is scared”.
- Similarly, the form “pasta is displayed on the plate drawn in the floating image” is associated in advance with the attribute of a fork, a tool for “twirling food”. In this way, the floating image changes into various forms that can be derived from the attribute of the detected object.
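- The pre-association described above can be modeled, purely as an illustrative sketch, as a lookup table from attributes to display forms; all names below are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch: pre-association between a detected object's
# attribute and the form the floating image changes into.
ATTRIBUTE_TO_FORM = {
    "toy_gun": "character_scared",   # a tool that "fires"
    "fork":    "pasta_on_plate",     # a tool for "twirling food"
    "knife":   "apple_with_cut_face",
}

def select_form(attribute: str, default: str = "unchanged") -> str:
    """Return the display form pre-associated with an attribute."""
    return ATTRIBUTE_TO_FORM.get(attribute, default)

assert select_form("fork") == "pasta_on_plate"
assert select_form("lipstick") == "unchanged"   # no association yet
```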
- the attribute specifying unit includes an attribute detection unit that detects the attribute.
- The attribute of the detected object is detected by the attribute detection means as follows: for example, an IC tag in which the attribute is recorded in advance is attached to the detected object, and the attribute is detected by reading the IC tag electromagnetically with an IC tag reader.
- Alternatively, an image of the detected object captured by an imaging device such as a CCD camera is pattern-matched against images of detected object candidates stored in a database, and the attribute recorded in advance in association with the matching candidate is read out, whereby the attribute of the detected object is detected.
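- As a minimal illustrative sketch (the feature vectors and names are invented for illustration), the database lookup described above can be reduced to nearest-neighbor matching of image features against stored candidates:

```python
import math

# Illustrative sketch: each detected object candidate is stored with a
# feature vector and its pre-recorded attribute; the captured image's
# features are matched to the nearest candidate.
CANDIDATES = {
    # name: (feature vector, attribute) -- values are made up
    "toy_gun": ((0.9, 0.1, 0.3), "fires"),
    "fork":    ((0.2, 0.8, 0.5), "twirls food"),
}

def detect_attribute(features):
    """Return the attribute of the nearest stored candidate."""
    best_name = min(
        CANDIDATES,
        key=lambda n: math.dist(features, CANDIDATES[n][0]),
    )
    return CANDIDATES[best_name][1]

assert detect_attribute((0.85, 0.15, 0.3)) == "fires"
```

Limiting the candidate set in advance, as the text notes, directly shrinks this search and makes the match more reliable.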
- The image display device may further comprise position detection means for detecting where in the real space portion the position of the detected object is, and the control means controls the display means so that the floating image changes to a form pre-associated with the detected position of the detected object in addition to the specified attribute.
- In addition to the rendering effect and interactivity, the reality can then be remarkably improved. That is, first, where the position of the detected object is in the real space portion is detected by position detection means such as an XYZ sensor, a CCD image sensor, an infrared sensor, or an ultrasonic sensor.
- the “position of the detected object” mentioned here includes not only the planar position of the detected object but also the spatial position. For example, when the detected object intersects the imaging plane or the plane before and after the imaging plane, the plane area occupied by the detected object in the imaging plane or the plane before and after the imaging plane may be detected.
- The display means is controlled by the control means so that the floating image changes to a form associated in advance with the position of the detected object, in addition to its attribute specified as described above. For example, when the “toy ball” penetrates the “floating image of a target”, the display means is controlled by the control means so that it changes to a “floating image of a target with a bullet hole”, the form associated in advance with the attribute of the “toy ball”.
- In this way, the floating image changes dynamically according to the position of the detected object, so that the rendering effect and interactivity can be improved. Moreover, since the “bullet hole” is drawn at the very position where the “toy ball” penetrated, the reality can be remarkably improved.
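- As an illustrative sketch only (the coordinate convention is an assumption, not from the patent), penetration of the imaging plane can be detected from two successive sensor readings, taking the imaging plane as z = 0 and interpolating the in-plane point where the hole should be drawn:

```python
def penetration_point(p_prev, p_curr):
    """Return the (x, y) crossing point if the segment p_prev->p_curr
    crosses the imaging plane z = 0, else None."""
    z0, z1 = p_prev[2], p_curr[2]
    if z0 == z1 or z0 * z1 > 0:   # parallel to plane, or same side
        return None
    t = z0 / (z0 - z1)            # interpolation parameter along segment
    x = p_prev[0] + t * (p_curr[0] - p_prev[0])
    y = p_prev[1] + t * (p_curr[1] - p_prev[1])
    return (x, y)

assert penetration_point((0.0, 0.0, 5.0), (2.0, 0.0, -5.0)) == (1.0, 0.0)
assert penetration_point((0.0, 0.0, 5.0), (0.0, 0.0, 1.0)) is None
```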
- The image display device may further comprise storage means, including a memory, for storing the locus of the changing position of the detected object, and the control means may control the display means so that the floating image changes to a form pre-associated with the stored locus of the detected object in addition to the specified attribute.
- In this case as well, in addition to the rendering effect and interactivity, the reality can be remarkably improved. That is, first, when the position of the detected object changes over time, the locus of that position is stored, for example every few hundred milliseconds, by storage means comprising, for example, a logic operation circuit centered on a memory.
- The “locus of the position of the detected object” referred to here includes not only the planar locus of the detected object but also its spatial locus.
- It may also indicate a locus that satisfies a predetermined condition, such as the locus traced when the detected object intersects the imaging plane or a plane before or after it.
- The display means is controlled by the control means so that the floating image changes to the form associated in advance with the locus of the position of the detected object, in addition to its attribute specified as described above. For example, if the “floating image of an apple” is cut up and down with a knife crossing the imaging plane, that locus is stored. The display means is then controlled by the control means so that the “floating image of an apple” changes to the “floating image of an apple with a cut face”, the form associated in advance with the attribute of the knife. At this time, the “cut face” is drawn at the very position through which the knife passed. Since the locus is also stored, the effect does not disappear even if the position of the knife subsequently changes, so that the reality can be remarkably improved.
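- The storage means described above can be sketched, purely for illustration, as a bounded buffer of (time, position) samples recorded every few hundred milliseconds; the class name and capacity are hypothetical:

```python
from collections import deque

# Sketch of the storage means: a bounded history of (time_ms, position)
# samples of the detected object, oldest samples dropped automatically.
class TrajectoryMemory:
    def __init__(self, max_samples=64):
        self.samples = deque(maxlen=max_samples)

    def record(self, t_ms, position):
        self.samples.append((t_ms, position))

    def locus(self):
        return list(self.samples)

mem = TrajectoryMemory(max_samples=3)
for t in (0, 300, 600, 900):          # one sample every 300 ms
    mem.record(t, (t / 100.0, 0.0, 0.0))
assert len(mem.locus()) == 3          # oldest sample was dropped
assert mem.locus()[0][0] == 300
```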
- The image display device may further comprise prediction means for predicting, based on the stored locus, where in the real space portion the position of the detected object will move after the time point at which it was detected, and the control means may control the display means to prefetch an image according to a form pre-associated with the predicted position of the detected object in addition to the specified attribute.
- In this aspect, the locus of the position of the detected object is stored by the storage means, for example every few hundred milliseconds. Then, where in the real space portion the position of the detected object will move after the time point at which it was detected (typically, the latest of multiple detections) is predicted by prediction means comprising, for example, an arithmetic circuit, based on the stored locus. For example, a velocity vector can be specified from the locus of positions stored over time, and the subsequent locus predicted from it.
- The display means is controlled by the control means to prefetch an image according to the form associated in advance with the predicted position of the detected object, in addition to its attribute. In this way, the response delay can be eliminated by predicting the movement of the detected object from its locus as well as its current position, and prefetching the image according to the prediction result. If the floating image changed late, one beat behind the movement of the detected object, the user would feel discomfort; avoiding this is highly useful in practice.
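- The velocity-vector prediction described above amounts to linear extrapolation from the two most recent samples; the following is an illustrative sketch only, with made-up sample values:

```python
# Sketch of the prediction means: estimate a velocity vector from the two
# most recent (time_ms, position) samples and linearly extrapolate, so
# the image for the predicted position can be prefetched in advance.
def predict_position(samples, t_future_ms):
    """samples: list of (t_ms, (x, y, z)), at least two entries."""
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    dt = t1 - t0
    v = tuple((b - a) / dt for a, b in zip(p0, p1))   # velocity vector
    lead = t_future_ms - t1
    return tuple(p + vi * lead for p, vi in zip(p1, v))

samples = [(0, (0.0, 0.0, 0.0)), (100, (1.0, 0.0, 0.0))]
assert predict_position(samples, 300) == (3.0, 0.0, 0.0)
```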
- The image display device may further comprise state detection means for detecting the state of the detected object, and the control means controls the display means so that the floating image changes to a form pre-associated with at least the state of the detected object in addition to the specified attribute.
- In this aspect, the floating image changes depending on the state of the detected object or a change in that state, so that the rendering effect and interactivity are further improved. That is, first, the state of the detected object is detected by the state detection means.
- The “state of the detected object” here refers to some qualitative or quantitative state of the detected object: for example, a discontinuous two-stage state such as a switch being on or off, or a continuous multi-stage state such as a volume being low, medium, or high.
- The display means is controlled by the control means so that the floating image changes to the form associated in advance with at least the state of the detected object. For example, when the toy gun's switch is turned on, which is treated as synonymous with firing, the display means is controlled by the control means so that the “floating image of a target” changes to the “floating image of a target with a bullet hole”, the form associated in advance with the attribute of the “toy gun”.
- Alternatively, the “floating image of a long-haired woman” may be associated in advance with the attribute of a “dryer”, and change to the “floating image of a woman with fluttering hair”.
- In this way, the floating image changes dynamically according to the state of the detected object, so that the rendering effect and interactivity can be further improved.
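- Combining state with attribute, as described above, can be sketched as a lookup keyed on the pair (attribute, state); every name below is invented for illustration:

```python
# Illustrative sketch: the control means keys the form change on
# (attribute, state), so the same object yields different forms in
# different states (e.g. switch on/off, dryer off/high).
FORMS = {
    ("toy_gun", "on"):  "target_with_bullet_hole",
    ("toy_gun", "off"): "target_intact",
    ("dryer", "high"):  "woman_with_fluttering_hair",
    ("dryer", "off"):   "long_haired_woman",
}

def form_for(attribute, state):
    return FORMS.get((attribute, state), "unchanged")

assert form_for("toy_gun", "on") == "target_with_bullet_hole"
assert form_for("dryer", "medium") == "unchanged"
```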
- Tag means attached to the detected object may record attribute information indicating the attribute of the detected object so as to be electromagnetically readable, and the attribute detection means detects the attribute by electromagnetically reading the recorded attribute information.
- In this aspect, tag means such as an IC tag or a barcode is attached to the detected object, and attribute information indicating the attribute of the detected object is recorded in it so as to be electromagnetically readable.
- Here, “electromagnetically readable” means that the attribute information recorded in the tag means can be read using electricity, magnetism, or light. The attribute information is read electromagnetically by, for example, an IC tag reader or a barcode reader, and the attribute of the detected object is thereby detected.
- the above-mentioned attribute information can be read out electromagnetically by irradiating the circuit in the IC tag with electromagnetic waves or optically by recognizing the image of the barcode.
- the readout form is preferably a non-contact type, but may be a contact type.
- the attribute information can be read using the tag means, and the effect and interactivity can be improved based on the read attribute information.
- The position detection means may detect where in the real space portion the position of the tag means attached to the detected object is, thereby detecting the position of the detected object.
- In this aspect, a position can be detected in addition to an attribute, killing two birds with one stone. That is, where in the real space portion the tag means attached to the detected object is located is detected by the position detection means, such as an IC tag reader, and this is used to detect the position of the detected object. Specifically, when an electromagnetic wave is irradiated toward the IC tag, the position of the detected object is detected from the response time and response direction.
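- The response-time ranging mentioned above is, in essence, time-of-flight measurement; the following sketch is illustrative only, with the numeric values chosen for the example rather than taken from the patent:

```python
# Sketch of time-of-flight ranging: the reader irradiates the IC tag and
# measures the round-trip response time; distance follows from the
# propagation speed of the electromagnetic wave.
C = 3.0e8  # propagation speed, metres per second

def distance_from_response(round_trip_s):
    """Round-trip response time (s) -> one-way distance to the tag (m)."""
    return C * round_trip_s / 2.0

# A 2 ns round trip corresponds to a tag about 0.3 m away.
assert abs(distance_from_response(2.0e-9) - 0.3) < 1e-9
```

Combining such distances from response directions (or from several readers) then localizes the tag in the real space portion.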
- Furthermore, the direction of the detected object may be detected by attaching a plurality of tag means to it.
- Since the floating image can then be changed not only according to the position but also according to the direction, the interactivity of the floating image is further improved.
- The tag means may record, in addition to the attribute information, state information indicating the state of the detected object so as to be electromagnetically readable, and rewriting means for rewriting at least the state information may further be provided.
- In this aspect, a state can be detected in addition to an attribute, again killing two birds with one stone.
- That is, in addition to the attribute of the detected object, the tag means also records state information indicating its state so as to be electromagnetically readable. At least the state information is rewritten by rewriting means such as an IC tag writer or a barcode writer: for example, when the toy gun's switch is switched from OFF to ON, the state information recorded on the IC tag is rewritten from OFF to ON.
- The rewritten state information is detected by the state detection means, and the “floating image of a target” is changed to the “floating image of a target with a bullet hole” as described above. The two-birds-with-one-stone effect described above is thus obtained, which is highly useful in practice.
- the image transmission means includes a microlens array, and the floating image is displayed as a real image of the image.
- the image transmission means comprises a microlens array.
- The “microlens array” here is based on the 3D floating vision system and is configured by integrating one or more lens array halves, each including a plurality of micro-convex lenses arranged in a two-dimensional matrix. With such image transmission means, the floating image is displayed as a real image (preferably an erect image) of the image.
- Although an autostereoscopic system can also be realized by methods different from the image display device according to the present invention, with those methods it is difficult for the user to directly touch the floating image without discomfort, as is possible with the image display device according to the present invention.
- Representative methods for realizing stereoscopic vision without dedicated glasses include the parallax barrier method and the lenticular method.
- In these methods, a stereoscopic image is perceived as a virtual image generated by showing a left-eye image to the left eye, and the focal position of the observer's eyes differs from the position where the floating image is perceived: the eyes focus on the image display surface.
- In contrast, the floating image displayed by the image display device according to the present invention is a real image formed by the microlens array, so the focal position of the eyes is at the position of the floating image from the beginning. Even if the user brings a detected object to the position of the floating image, the user can easily recognize touching it directly, without discomfort.
- As described above, according to the image display device of the present invention, since the display means, the image transmission means, the attribute specifying means, and the control means are provided, it is possible to display a stereoscopic two-dimensional image relatively easily and to improve the rendering effect and interactivity.
- FIG. 1 is a perspective view showing a basic configuration of an image display device capable of displaying a floating image according to an embodiment.
- FIG. 2 is a view of the image display apparatus according to the embodiment as seen in the direction of arrows A-A in FIG. 1.
- FIG. 3 is a cross-sectional view schematically showing the structure of an image transmission panel.
- FIG. 4 is a cross-sectional view schematically showing the structure of the image transmission panel and the orientation of the image (two sheets).
- FIG. 5 is a cross-sectional view schematically showing the structure of an image transmission panel and the orientation of an image (a: 1 sheet, b: 3 sheets).
- FIG. 6 is a block diagram conceptually showing the basic structure of an image display apparatus in a first example.
- FIG. 7 is a flowchart showing a basic operation of the image display apparatus according to the first embodiment.
- FIG. 8 is a schematic diagram for explaining the basic operation of the image display apparatus according to the first embodiment (a: without the toy gun 120, b: with the toy gun 120).
- FIG. 9 is a block diagram conceptually showing the basic structure of an image display apparatus in a second example.
- FIG. 10 is a flowchart showing a basic operation of the image display apparatus according to the second embodiment.
- FIG. 11 is a perspective view for explaining a state before and after the toy ball passes through the imaging plane in the image display apparatus according to the second embodiment (a: state before passing; b: In the comparative example
- FIG. 12 is a side view for explaining the state before and after the toy ball passes through the imaging plane in the image display apparatus according to the second embodiment (a: state before passing; b: In the comparative example
- FIG. 13 is a schematic diagram showing how a fork is inserted into a floating image in the image display apparatus according to the second embodiment (a: perspective view, b: front view showing change of floating image).
- FIG. 14 is a schematic diagram showing how the floating image is cut with a knife in the image display apparatus according to the second embodiment (a: perspective view, b: front view showing change of floating image).
- FIG. 15 In the image display apparatus according to the second embodiment, when a floating image is cut with a knife, It is a schematic diagram which shows a mode that the movement of a floating image is prefetched by predicting the movement of a knife (a: path PO
- FIG. 16 is a block diagram conceptually showing the basic structure of an image display apparatus in a third example.
- FIG. 17 is a flowchart showing the basic operation of the image display apparatus according to the third embodiment.
Explanation of symbols
- FIG. 1 is a perspective view showing a basic configuration of an image display apparatus capable of displaying a floating image according to the embodiment.
- FIG. 2 is a view of the image display apparatus according to the embodiment as seen in the direction of arrows A-A in FIG. 1.
- The image display device 1 includes a display unit 11 having an image display surface 111, and an image transmission panel 17, and displays a floating image 13 on an imaging plane 21 in the space 15 on the side opposite to the display unit 11.
- the display unit 11 corresponds to an example of the “display unit” according to the present invention
- the image transmission panel 17 corresponds to an example of the “image transmission unit” according to the present invention.
- the display unit 11 is, for example, a color liquid crystal display (LCD), and includes a color liquid crystal driving circuit (not shown), a backlight illumination unit (not shown), and the like, and displays a two-dimensional image.
- the color liquid crystal drive circuit outputs a display drive signal based on an externally input video signal.
- the backlight illumination unit illuminates the image display surface 111 from behind when the display unit 11 is not self-luminous.
- The image display surface 111 displays a two-dimensional image by, for example, changing the orientation of its liquid crystal molecules based on the output display drive signal to increase or decrease the light transmittance.
- Since the displayed two-dimensional image is ultimately displayed as a floating image, it is preferable that it be drawn three-dimensionally, with a sense of depth.
- For the display unit 11, instead of a color liquid crystal display (LCD), various display devices such as a cathode ray tube, a plasma display, or an organic electroluminescence (EL) display may be used.
- the image transmission panel 17 is configured by, for example, a microlens array (details will be described later with reference to FIG. 3), and is spaced apart from the display unit 11. Then, the image transmission panel 17 floats by imaging light emitted from the image display surface 111 of the display unit 11 (that is, display light constituting the two-dimensional image) on the imaging surface 21 of the space 15. Display image 13.
- the imaging plane 21 is a plane virtually set in space according to the working distance of the microlens array, and is not an entity.
- Since the floating image 13 formed on the imaging surface 21 is displayed floating in the space 15, it appears to the observer as if a stereoscopic image were projected there.
- the floating image 13 is recognized by the observer as a pseudo stereoscopic image.
- For this purpose, the two-dimensional image displayed on the display unit 11 may be given a sense of depth in advance, or the background of the image on the image display surface 111 may be made black to emphasize contrast.
- With such contrivances, the floating image 13 can be displayed on the imaging surface 21 as if a stereoscopic image were projected there.
- FIG. 3 is a cross-sectional view schematically showing the structure of the image transmission panel.
- Fig. 4 is a cross-sectional view schematically showing the structure of the image transmission panel and the orientation of the image (two sheets).
- FIG. 5 is a cross-sectional view schematically showing the structure of the image transmission panel and the orientation of the image (a: 1 sheet, b: 3 sheets).
- the image transmission panel 17 includes a microlens array 25.
- the microlens array 25 is configured, for example, by integrating two lens array halves 251 and 252.
- the lens array halves 251 and 252 each have a plurality of micro-convex lenses 23 arranged in a two-dimensional matrix on both surfaces of a transparent substrate 24 made of glass or resin having excellent light transmittance.
- Each micro-convex lens is arranged so that the optical axis of a micro-convex lens 231 on one surface of the transparent substrate 24 coincides with the optical axis of the micro-convex lens 232 at the opposing position on the other surface.
- Furthermore, the lens array halves 251 and 252 are overlapped so that the optical axes of the adjacent micro-convex lenses 232 and 231 between the two halves also coincide.
- the image transmission panel 17 is disposed so as to face the image display surface 111 of the display unit 11 at a position separated by a predetermined separation distance (operating distance of the microlens array 25).
- The image transmission panel 17 transmits the display light of the two-dimensional image emitted from the image display surface 111 of the display unit 11 to the space 15 on the side opposite to the display unit 11, and forms an image on an imaging plane 21 a predetermined distance away from the image transmission panel 17.
- the image transmission panel 17 can display the two-dimensional image displayed by the display unit 11 as the floating image 13.
- More specifically, the two-dimensional image displayed by the display unit 11 is inverted once by the lens array half 251 and inverted again by the lens array half 252 before being emitted. Thereby, the image transmission panel 17 can display an erect image of the two-dimensional image as the floating image 13.
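- The two-stage inversion described above can be illustrated with a toy model (purely a sketch, not an optical simulation) in which each lens array half inverts the 2D image, so passing through both halves restores the original orientation and yields an erect image:

```python
# Toy model: one lens array half inverts the image (upside down and
# left-right); two halves in series restore the original orientation.
def invert(image):
    """Model of one lens array half acting on a 2D pixel grid."""
    return [row[::-1] for row in image[::-1]]

source = [["a", "b"],
          ["c", "d"]]
after_half_251 = invert(source)            # inverted once
after_half_252 = invert(after_half_251)    # inverted again -> erect
assert after_half_251 == [["d", "c"], ["b", "a"]]
assert after_half_252 == source
```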
- The configuration of the microlens array 25 is not limited to one in which the lens array halves 251 and 252 are integrated as a pair.
- For example, it may be composed of one sheet as shown in FIG. 5(a), or of three or more sheets as shown in FIG. 5(b).
- the image display device 1 can suitably display the floating image 13 as an erect image, for example.
- FIG. 6 is a block diagram conceptually showing the basic structure of the image display apparatus in the first example.
- the image display device 1 includes a display unit 11, an image transmission panel 17, an audio output unit 31, an audio drive unit 32, and an attribute detection unit 60. And a control device 100.
- the attribute detection unit 60 corresponds to an example of the “attribute specifying unit” according to the present invention
- the control device 100 corresponds to an example of the “control unit” according to the present invention.
- the display unit 11 is a color liquid crystal display device, for example, and includes an image display surface 111 and a display driving unit 112.
- the display driving unit 112 outputs a display driving signal based on the video signal input from the control device 100 and displays a two-dimensional image of a moving image or a still image on the image display surface 111.
- the image transmission panel 17 is disposed in the optical path of the display light that forms the two-dimensional image displayed on the screen of the display unit 11, and transmits this display light so that a real image of the two-dimensional image (that is, the floating image) is displayed on the imaging surface 21 located in the space on the opposite side of the panel from the screen of the display unit 11.
- in this way, stereoscopic two-dimensional image display by the 3D floating vision method is performed.
- a real image appears to float on the imaging surface 21 on the near side of the image transmission panel 17.
- the audio output unit 31 is, for example, a speaker, and generates an audible sound by converting the music signal input from the audio drive unit 32 into mechanical vibration.
- the attribute detection unit 60 is an image recognition device, an IC tag reader, or the like, and detects an attribute of an object to be detected that exists within a detectable range (for example, several centimeters to several tens of centimeters).
- the detected object here is, for example, a toy gun 120, fork 122, knife 123, lipstick 124, dryer 125, or brush 126; an object having a unique attribute (for example, a distinctive shape, function, or concept) is preferred.
- the attribute detection unit 60 detects attributes unique to these objects to be detected by various methods.
- the attribute detection unit 60 may include an image sensor such as a CCD camera, and the attribute may be detected by collating the captured image of the object to be detected against images of tools and the like stored in advance, together with their attributes, in an image database. In particular, such attributes are easily detected if the number of candidate objects is limited in advance.
- if the attribute detection unit 60 is an IC tag reader, as shown in FIG. 6, unique attribute-identifying IC tags 50 to 56 are attached to the respective detected objects, and the attribute may be detected by reading these IC tags.
- the “IC tag” is a general term for a small information chip measuring, for example, from several microns to several millimeters square, and corresponds to an example of the “tag means” according to the present invention.
- in the circuit of this IC tag, a minute amount of electric power is generated by the radio waves emitted by the IC tag reader; the information is processed with this power and sent back to the reader.
- because of the limited radio wave output that can be used, the IC tag and the IC tag reader need to be brought close to each other.
- the control device 100 includes a control unit 101, an image generation unit 102, and a storage unit 103.
- the storage unit 103 corresponds to an example of “storage means” according to the present invention
- the control device 100 corresponds to an example of “prediction means” according to the present invention.
- the control unit 101 includes, for example, a known central processing unit (CPU), a read-only memory (ROM) storing a control program, a random access memory (RAM) storing various data, and other logic operation circuits.
- the image generation unit 102 generates data such as a display image.
- the storage unit 103 stores the attribute of the detected object detected by the attribute detection unit 60, the image and sound displayed corresponding to the attribute, the history of positions of a moving detected object, and the like.
- the control device 100 receives, as an electrical signal via a bus (not shown), the attribute of the detected object detected by the attribute detection unit 60, and accordingly outputs a video signal to the display drive unit 112 or an audio signal to the audio drive unit 32.
- FIG. 7 is a flowchart showing the basic operation of the image display apparatus according to the first embodiment.
- FIG. 8 is a schematic diagram for explaining the basic operation of the image display apparatus according to the first embodiment (a: situation when the toy gun 120 is absent, b: situation when it is present).
- the control device 100 causes the image generation unit 102 to generate a two-dimensional image (original image) (step S101).
- This original image is an image of a doll with a target as shown in Fig. 8 (a), for example.
- in step S102, it is determined whether or not the attribute of the detected object is detected by the attribute detection unit 60.
- in step S102, when no attribute of the detected object is detected (step S102: NO), for example when no detected object exists within the detectable range of the attribute detection unit 60, there is no need to change the original image. Accordingly, the doll in the original image is displayed with a normal face or a smile, as shown in FIG. 8(a), for example.
- in step S102, when the attribute of the detected object is detected (step S102: YES), the following processing is performed according to the detected attribute.
- the attribute of the detected object is detected when, for example, the user holds the toy gun 120, which is an example of the detected object, within the detectable range of the attribute detection unit 60; in this case, the IC tag 50 in which the attributes of the toy gun 120 are written is read by the attribute detection unit 60 and the attributes are detected.
- the image generation unit 102 generates a mask image corresponding to the detected attribute (step S103).
- the correspondence between detected attributes and mask images is stored in the storage unit 103 in advance.
- the toy gun 120 is a tool for firing, and therefore, a mask image depicting a “fear” state is associated and stored.
- Examples of mask images associated with other objects to be detected are as follows. That is, since the fork 122 is a tool that stabs food, a mask image depicting a state that “the stomach is hungry” is associated and stored. Since the knife 123 is a tool for cutting food, a mask image depicting a “hungry” state is associated and stored.
- the lipstick 124 is a tool for applying lipstick
- a mask image depicting a “joyful” appearance is associated and stored.
- the dryer 125 is a tool that blows warm air
- a mask image depicting the state of “heating” is associated and stored.
- since the brush 126 is a tool for applying paint of various colors, a mask image depicting an “excited” state is associated and stored.
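The attribute-to-mask correspondence held in the storage unit 103 can be pictured as a simple lookup table. The following is a minimal sketch only; the attribute identifiers and mask file names are our own illustrative assumptions, not part of the disclosure.

```python
# Hypothetical model of the correspondence stored in the storage unit 103.
# Keys (attribute ids) and values (mask image names) are invented examples.
MASK_TABLE = {
    "toy_gun":  "mask_fear.png",     # firing tool -> "fear" expression
    "fork":     "mask_hungry.png",   # stabs food  -> "hungry"
    "knife":    "mask_hungry.png",   # cuts food   -> "hungry"
    "lipstick": "mask_joyful.png",   # applies lipstick -> "joyful"
    "dryer":    "mask_heated.png",   # blows warm air -> "heated"
    "brush":    "mask_excited.png",  # applies colorful paint -> "excited"
}

def mask_for(attribute):
    """Return the mask image associated with a detected attribute,
    or None when no attribute was detected (step S102: NO)."""
    return MASK_TABLE.get(attribute)
```

Limiting the table to a known set of candidate objects is also what makes the detection itself easier, as noted above.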
- in step S104, the original image and the mask image are synthesized.
- the control unit 101 then sends a video signal to the display drive unit 112.
- the display unit 11 displays the combined two-dimensional image (step S105).
- the display light constituting the displayed two-dimensional image is transmitted by the image transmission panel 17 arranged in its optical path, and is displayed as a real image on the imaging plane 21 via the image transmission panel 17 (step S106).
- according to the present embodiment, it is possible to display a stereoscopic two-dimensional image relatively easily, and to improve the rendering effect and interactivity.
- since the attribute of the object to be detected can be detected, varied rather than uniform reactions can be realized according to the attribute, and the rendering effect of the stereoscopic image is greatly enhanced.
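The flow of FIG. 7 (steps S101 to S106) can be summarized in a short sketch. The callable arguments and return values below are assumptions made for illustration; the patent does not define a software interface.

```python
# Minimal, self-contained sketch of one pass through the FIG. 7 flowchart.
# All function parameters are assumed callables, not the disclosed API.
def display_cycle(generate_original, detect_attribute, generate_mask,
                  compose, drive_display):
    image = generate_original()          # step S101: generate the original image
    attribute = detect_attribute()       # step S102: attribute detected?
    if attribute is not None:            # step S102: YES
        mask = generate_mask(attribute)  # step S103: mask image for the attribute
        image = compose(image, mask)     # step S104: synthesize original + mask
    drive_display(image)                 # steps S105-S106: display; the panel
                                         # then forms the real (floating) image
    return image
```

When no attribute is detected (step S102: NO), the original image passes through unchanged, matching the normal-face case of FIG. 8(a).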
- FIG. 9 is a block diagram conceptually showing the basic structure of the image display apparatus in the second example.
- in FIG. 9, the same components as those in the first embodiment (i.e., FIG. 6) described above are denoted by the same reference numerals, and detailed description thereof is omitted as appropriate.
- the image display device 1 according to the present embodiment further includes a position detection unit 61 that detects the position of the detected object.
- the position detection unit 61 corresponds to an example of a “position detection unit” according to the present invention.
- the position detection unit 61 detects the planar region where the detected object intersects the imaging plane, and transmits the detection result to the control device 100.
- the position detection unit 61 is, for example, a non-contact type sensor or a camera type sensor.
- the plane area detected by the position detection unit 61 does not necessarily need to coincide with the imaging plane 21; it may be in front of or behind the imaging plane 21.
- the position detection unit 61 detects the spatial position of the toy ball 121 within the detectable range in addition to or instead of the plane area, and transmits the detection result to the control device 100.
- the position detection unit 61 is, for example, an XYZ sensor, a CCD image sensor, an infrared sensor, or an ultrasonic sensor arranged so as to capture the front of the imaging surface 21, and may be substituted by sensors that detect planar areas arranged in an overlapping manner at predetermined intervals.
- the detection results from the position detection unit 61 are accumulated over time in a memory built into or externally attached to the control device 100, so that the toy ball 121 passing through the imaging surface 21 can be detected as a set of planar areas.
- the detection of the planar position and the detection of the spatial position described above may be static or dynamic, and can take a mode suited to the application. In other words, detection may be based on shape and position information of the detected object registered in memory in advance, or may be performed in real time by various sensors such as an XYZ sensor.
- FIG. 10 is a flowchart showing the basic operation of the image display apparatus according to the second embodiment.
- the control device 100 generates a two-dimensional image (original image) by using the image generation unit 102 (step S101).
- This original image is an image of a doll with a target as shown in Fig. 8 (a), for example.
- in step S102, it is determined whether or not the attribute of the detected object is detected by the attribute detection unit 60.
- the position detection unit 61 further determines whether or not the position of the detected object is detected (step S211).
- step S211 when the position of the detected object is detected (step S211: YES), the following processing is performed according to the detected position and attribute.
- the position and attribute of the detected object are detected when, for example, the toy ball 121 containing the IC tag 51 in which the attribute is written is fired toward the image plane 21 by the toy gun 120.
- the toy ball 121 reaches the detectable range of the attribute detection unit 60 and the position detection unit 61.
- the image generation unit 102 generates a mask image corresponding to the position and attribute of the toy ball 121 to be detected (step S203).
- then, as in the first embodiment, the processing of steps S104, S105, and S106 is performed, and the floating image suitably changes corresponding to the position and attribute of the detected toy ball 121.
- FIG. 11 is a perspective view for explaining the state before and after the toy ball passes through the imaging plane in the image display device according to the second embodiment (a: state before passing, b: state after passing in the comparative example, c: state after passing in the second embodiment).
- FIG. 12 is a side view for explaining the state before and after the toy ball passes through the imaging plane in the image display apparatus according to the second embodiment (a: state before passing, b: state after passing in the comparative example, c: state after passing in the second embodiment).
- as shown in FIG. 11(a) and its side view FIG. 12(a), suppose that a toy ball 121 is fired from a toy gun 120. At this time, the toy ball 121 penetrates the target floating image displayed on the imaging plane 21.
- if the floating image did not change at all, a sense of incongruity would arise, as shown in, for example, FIG. 11(b) and its side view FIG. 12(b): although the toy ball 121 penetrates the target floating image, the image does not change, and the user feels no interactivity.
- in the present embodiment, a mask image such as a “bullet hole” is created based on the attribute of the toy ball 121, and the location of the “bullet hole” on the image plane 21 is determined based on the position of the toy ball 121.
- as shown in FIG. 11(c) and its side view FIG. 12(c), the user shoots the toy gun 120 while looking at the target floating image displayed on the image plane 21, and as the toy ball 121 penetrates the imaging surface 21, a bullet hole remains in the target floating image.
- in this way, the target floating image changes with the user's operation, and the change varies depending on the tool used, that is, the detected object; therefore, in addition to the interactivity, the realism is also greatly increased.
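Placing the “bullet hole” mask requires mapping the detected crossing point to a position on the displayed two-dimensional image. The patent specifies no coordinate system, so the sensor coordinates, plane origin, and pixel pitch below are illustrative assumptions.

```python
# Hypothetical sketch: convert the point where the toy ball crosses the
# imaging plane 21 (measured in mm by the position detection unit 61)
# into pixel coordinates on the two-dimensional image to be masked.
def hole_position(crossing_xy_mm, plane_origin_mm, px_per_mm):
    """crossing_xy_mm: detected crossing point in sensor coordinates (mm).
    plane_origin_mm: assumed origin of the image on the plane (mm).
    px_per_mm: assumed display pixel pitch."""
    x_mm, y_mm = crossing_xy_mm
    ox, oy = plane_origin_mm
    return (round((x_mm - ox) * px_per_mm),
            round((y_mm - oy) * px_per_mm))
```

The mask image (the hole) would then be composited onto the original image at the returned pixel position in step S104.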
- in step S102, when no attribute of the detected object is detected (step S102: NO), or when the position of the object to be detected is not detected (step S211: NO), there is no need to change the original image. Alternatively, if either the position or the attribute of the detected object is detected, the floating image may be changed according to that detection result.
- FIG. 13 is a schematic diagram showing how the fork is inserted into the floating image in the image display apparatus according to the second embodiment (a: perspective view, b: front view showing change of floating image).
- the step numbers shown in FIG. 13 (b) correspond to the step numbers shown in the flowchart of FIG.
- FIG. 13(a) depicts a state in which a floating image of an apple is displayed on the image plane 21 and the user stabs the floating image of the apple with a fork 122.
- Figure 13(b) shows the series of changes in the floating image at this time.
- in step S101 of FIG. 13(b), the floating image of the apple is initially displayed without any puncture.
- the plane area where the imaging surface 21 and the fork 122 intersect is detected by the position detection unit 61, and, as shown in step S203 of FIG. 13(b), a mask image is generated at a position corresponding to the intersection.
- the mask image here is a puncture with a relatively low degree of damage, based on the attribute of the fork 122 read from the IC tag 52, unlike the aforementioned “bullet hole” (see FIG. 11).
- as a result, a floating image in which the fork 122 is stuck in the apple is obtained, as shown in step S104 of FIG. 13(b).
- in practice, a mask that is one size larger than the intersecting plane region by a predetermined margin is generated.
- the mask image associated with the attribute of the fork 122 is not necessarily unique; one of multiple mask images may be selected according to the position of the fork 122 or a change in position (that is, movement). For example, if the floating image is spaghetti, when the position of the fork 122 changes only in the depth direction, a mask image in which the spaghetti is “stabbed” is selected; on the other hand, if the fork 122 rotates while crossing the image plane 21, a mask image in which the spaghetti is “twirled or wrapped” is selected, enabling more varied expression.
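The motion-dependent selection described for the spaghetti example can be sketched as a small decision rule. The thresholds and mask names are invented for illustration; the patent only states that selection may depend on position or its change.

```python
# Hedged sketch of selecting among multiple masks for one attribute
# (the fork 122), following the spaghetti example in the text:
# depth-only motion -> "stabbed", rotation while crossing -> "twirled".
def select_fork_mask(depth_delta_mm, rotation_deg):
    """depth_delta_mm: change of fork position along the depth axis.
    rotation_deg: fork rotation measured while it crosses the plane.
    Thresholds are assumptions, not disclosed values."""
    if abs(rotation_deg) > 15:      # fork rotates while crossing the plane
        return "spaghetti_twirled"
    if depth_delta_mm > 0:          # position changes only in depth
        return "spaghetti_stabbed"
    return None                     # no relevant movement detected
```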
- FIG. 14 is a schematic diagram showing how the floating image is cut with a knife in the image display apparatus according to the second embodiment (a: perspective view, b: front view showing change of floating image).
- FIG. 14(a) depicts a state in which a floating image of an apple is displayed on the image plane 21 and the user cuts the floating image of the apple with a knife 123.
- Figure 14(b) shows the series of changes in the floating image at this time.
- the floating image of the apple is initially displayed without any cut.
- the mask image here is a relatively sharp cut, based on the attribute of the knife 123 read from the IC tag 53, unlike the “bullet hole” described above (see FIG. 11). If the inside of the apple can be seen at the cut, the realism increases.
- the generated mask may be processed in real time so as to follow the movement, or a set of planar or spatial regions intersecting the imaging plane 21 may be stored as a trajectory in the storage unit 103 and a mask corresponding to the trajectory may be created.
- FIG. 15 is a schematic diagram showing how the floating image is changed in advance by predicting the movement of the knife when the floating image is cut with the knife in the image display apparatus according to the second embodiment (a: when cutting along path P0–P1, b: when cutting along path Q0–Q1).
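The look-ahead idea of FIG. 15 amounts to extrapolating the knife's trajectory from recent samples so the cut can be drawn slightly ahead of the blade. The constant-velocity assumption below is ours; the patent does not specify the prediction model.

```python
# Sketch of trajectory prediction from two consecutive knife positions
# stored in the storage unit 103 (assumed 2D samples on the image plane).
# Constant velocity is assumed: the next point continues the last step.
def predict_next(p0, p1):
    """p0, p1: consecutive (x, y) samples; returns the extrapolated
    next point, so the mask can be applied there in advance."""
    return (2 * p1[0] - p0[0], 2 * p1[1] - p0[1])
```

A real implementation would refresh the prediction every sensor frame and discard it if the knife changes direction, e.g. from path P0–P1 to Q0–Q1.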
- next, the configuration and operation processing of the image display apparatus according to the third embodiment will be described with reference to FIGS. 16 and 17.
- FIG. 16 is a block diagram conceptually showing the basic structure of the image display apparatus in the third example.
- in FIG. 16, the same components as those in the first embodiment (i.e., FIG. 6) described above are denoted by the same reference numerals, and detailed description thereof is omitted as appropriate.
- the image display device 1 according to the present embodiment further includes a state detection unit 62 and a rewrite unit 55 in addition to the configuration of the image display device 1 according to the first embodiment described above.
- the state detection unit 62 corresponds to an example of the “state detection unit” according to the present invention, and the rewrite unit 55 corresponds to an example of the “rewrite unit” according to the present invention.
- the state detection unit 62 is, for example, an IC tag reader similar to the attribute detection unit 60, and detects the state of the detected object by reading, wirelessly or by wire, the IC tag 50 in which the state is written.
- the “state of the object to be detected” here refers to any state of the object expressed qualitatively or quantitatively: for example, a discontinuous two-stage state such as a switch being ON or OFF, or a continuous multi-stage state such as low, medium, or high volume.
- the rewriting unit 55 is, for example, an IC tag writer, and can rewrite the information recorded in the IC tag 50 by, for example, dynamically reconfiguring the IC tag circuit.
- the detection of the above-described state need not necessarily be performed via an IC tag; as long as the state detection unit 62 and the rewriting unit 55 can transmit and receive via wireless communication using electromagnetic waves in a predetermined frequency band, or via wired communication, the state detection unit 62 can detect the state of the detected object.
- FIG. 17 is a flowchart showing the basic operation of the image display apparatus according to the third embodiment.
- the control device 100 causes the image generation unit 102 to generate a two-dimensional image (original image) (step S101).
- This original image is an image of a doll with a target as shown in Fig. 8 (a), for example.
- in step S102, it is determined whether or not the attribute of the detected object is detected by the attribute detection unit 60.
- in step S102, when the attribute of the detected object is detected (step S102: YES), it is further determined by the state detection unit 62 whether or not the state of the detected object is detected (step S311).
- in step S311, when the state of the detected object is detected (step S311: YES), the following processing is performed according to the detected state and attributes.
- the state and attribute of the detected object are detected when, for example, the user fires the toy gun 120, which contains the IC tag 50 in which the attribute of the detected object is written, from within the detectable range of the attribute detection unit 60 and the state detection unit 62 toward the imaging plane 21. In this case, the rewriting unit 55 rewrites the state written in the IC tag 50 from “fire switch off” to “fire switch on”.
- alternatively, the rewriting unit 55 may electromagnetically notify the state detection unit 62 of the “fire switch on” state upon firing.
- a mask image corresponding to the state and attributes of the toy gun 120 detected by the attribute detection unit 60 and the state detection unit 62 is generated (step S303). Then, as in the first embodiment, the processing of steps S104, S105, and S106 is performed, and the floating image suitably changes corresponding to the detected state and attributes of the toy gun 120. As a result, when the user shoots the toy gun 120 aiming at the target floating image displayed on the image plane 21, a “bullet hole” remains in the target floating image at, or shortly after, the moment of “fire switch on”.
- in step S105, the timing for displaying the combined two-dimensional image may be a predetermined interval after the state of the detected object is switched. This predetermined interval is obtained, in the above example, from the position of the toy gun 120 at the time of firing.
- the mask image applied to the original image may be determined in consideration of the firing angle in addition to the position of the toy gun 120 at the time of firing.
- the firing angle may be obtained by attaching multiple IC tags at multiple locations on the toy gun 120, preferably on a straight line along the firing direction, and detecting the position of each IC tag.
- alternatively, the firing angle may be recognized directly by an image sensor.
- alternatively, a 6-axis sensor (for example, detecting acceleration in the XYZ directions, forward/backward tilt, left/right tilt, and left/right swing) may be used to detect movement and thereby obtain the firing angle and firing direction.
- in step S102, when no attribute of the detected object is detected (step S102: NO), or when no state of the detected object is detected (step S311: NO), there is no need to change the original image. Alternatively, if either the state or the attribute of the detected object is detected, the floating image may be changed according to that detection result.
- the floating image is not particularly changed when the state of the dryer 125 is OFF.
- when the switch of the dryer 125 is turned on, the image changes to a floating image of a female face with fluttering hair.
- when the air flow strength switch, which is one of the states of the dryer 125, is switched, the degree of hair fluttering may change.
- the direction, angle, position, and movement of the dryer 125 may also be detected, and the position and appearance of the hair may change locally. In addition, wet hair in the image may gradually dry as time passes.
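The dryer example ties a continuous state (air flow strength) to a continuous change in the image. The following toy sketch makes that coupling concrete; the flutter values, drying rate, and the linear drying model are all invented assumptions.

```python
# Illustrative state-driven sketch for the dryer example: fluttering
# follows the air-flow switch, and an assumed wetness value decays over
# time while warm air is applied. Numbers are arbitrary, not disclosed.
def flutter_amount(airflow):
    """Map the air-flow switch state to a fluttering intensity (0..1)."""
    return {"off": 0.0, "low": 0.3, "medium": 0.6, "high": 1.0}[airflow]

def dry(wetness, airflow, dt_s, rate=0.05):
    """Reduce hair wetness (0..1) over dt_s seconds of blowing;
    clamp at fully dry. rate is an assumed drying coefficient."""
    return max(0.0, wetness - rate * flutter_amount(airflow) * dt_s)
```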
- as described above, the floating image changes with the user's operation, and the change varies depending on the tool used, that is, the detected object; in addition to interactivity, realism increases. Moreover, even if the exact position is not detected, the image can be changed according to the user's operation, and interactivity is improved.
- the above-described various configurations, methods, and means may be combined arbitrarily to detect the attribute, position, state, and the like of the detected object. This makes it possible to detect the necessary information appropriately and accurately in accordance with the specifications of the image display device. For example, all information such as attribute, position, and state may be exchanged in a batch by wireless communication with a detected object incorporating a memory and a 6-axis sensor.
- the image display device can be used in the technical field of an image display device that stereoscopically displays a two-dimensional image based on, for example, a 3D floating vision system.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2006/319707 WO2008041315A1 (fr) | 2006-10-02 | 2006-10-02 | Dispositif d'affichage d'images |
JP2008537372A JP4939543B2 (ja) | 2006-10-02 | 2006-10-02 | 画像表示装置 |
US12/443,594 US20100134410A1 (en) | 2006-10-02 | 2006-10-02 | Image display device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2006/319707 WO2008041315A1 (fr) | 2006-10-02 | 2006-10-02 | Dispositif d'affichage d'images |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008041315A1 true WO2008041315A1 (fr) | 2008-04-10 |
Family
ID=39268181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/319707 WO2008041315A1 (fr) | 2006-10-02 | 2006-10-02 | Dispositif d'affichage d'images |
Country Status (3)
Country | Link |
---|---|
US (1) | US20100134410A1 (ja) |
JP (1) | JP4939543B2 (ja) |
WO (1) | WO2008041315A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014126682A (ja) * | 2012-12-26 | 2014-07-07 | Nitto Denko Corp | 表示装置 |
JP2014171121A (ja) * | 2013-03-04 | 2014-09-18 | Ricoh Co Ltd | 投影システム、投影装置、投影方法、及び投影プログラム |
JP2018000941A (ja) * | 2016-07-07 | 2018-01-11 | ディズニー エンタープライゼス インコーポレイテッド | インタラクティブな製品を用いたロケーションベースの体験 |
JP2022050365A (ja) * | 2020-09-17 | 2022-03-30 | 神田工業株式会社 | 展示装置及び展示方法 |
JP7351561B1 (ja) | 2022-06-29 | 2023-09-27 | 株式会社Imuzak | マイクロレンズアレイを用いた画像伝達パネル、及びこれを用いた立体的2次元画像表示装置 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5304529B2 (ja) * | 2009-08-17 | 2013-10-02 | 富士ゼロックス株式会社 | 画像処理装置及び画像処理プログラム |
TWI507969B (zh) * | 2012-09-07 | 2015-11-11 | Benq Corp | 遙控裝置、顯示系統與方法 |
FR3016048B1 (fr) * | 2013-12-27 | 2016-01-15 | Patrick Plat | Dispositif interactif equipe d'une interface homme-machine |
US10359640B2 (en) | 2016-03-08 | 2019-07-23 | Microsoft Technology Licensing, Llc | Floating image display |
JP6992342B2 (ja) * | 2017-09-13 | 2022-01-13 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及びプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002149581A (ja) * | 2000-11-09 | 2002-05-24 | Nippon Telegr & Teleph Corp <Ntt> | 複数のユーザによる仮想空間共有システム |
JP2003085590A (ja) * | 2001-09-13 | 2003-03-20 | Nippon Telegr & Teleph Corp <Ntt> | 3次元情報操作方法およびその装置,3次元情報操作プログラムならびにそのプログラムの記録媒体 |
WO2006035816A1 (ja) * | 2004-09-30 | 2006-04-06 | Pioneer Corporation | 立体的二次元画像表示装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4952922A (en) * | 1985-07-18 | 1990-08-28 | Hughes Aircraft Company | Predictive look ahead memory management for computer image generation in simulators |
JP2002099475A (ja) * | 2000-09-25 | 2002-04-05 | Toshiba Corp | 無線装置、データ管理システム及びデータ管理方法 |
US6961055B2 (en) * | 2001-05-09 | 2005-11-01 | Free Radical Design Limited | Methods and apparatus for constructing virtual environments |
JP2003053025A (ja) * | 2001-08-10 | 2003-02-25 | Namco Ltd | ゲームシステム及びプログラム |
JP2005141102A (ja) * | 2003-11-07 | 2005-06-02 | Pioneer Electronic Corp | 立体的二次元画像表示装置及び方法 |
JP4179162B2 (ja) * | 2003-12-26 | 2008-11-12 | 株式会社セガ | 情報処理装置、ゲーム装置、画像生成方法、ゲーム画像生成方法 |
JP2006085499A (ja) * | 2004-09-16 | 2006-03-30 | Fuji Xerox Co Ltd | Icタグおよびicタグ付きシート |
2006
- 2006-10-02 WO PCT/JP2006/319707 patent/WO2008041315A1/ja active Application Filing
- 2006-10-02 US US12/443,594 patent/US20100134410A1/en not_active Abandoned
- 2006-10-02 JP JP2008537372A patent/JP4939543B2/ja not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002149581A (ja) * | 2000-11-09 | 2002-05-24 | Nippon Telegr & Teleph Corp <Ntt> | 複数のユーザによる仮想空間共有システム |
JP2003085590A (ja) * | 2001-09-13 | 2003-03-20 | Nippon Telegr & Teleph Corp <Ntt> | 3次元情報操作方法およびその装置,3次元情報操作プログラムならびにそのプログラムの記録媒体 |
WO2006035816A1 (ja) * | 2004-09-30 | 2006-04-06 | Pioneer Corporation | 立体的二次元画像表示装置 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014126682A (ja) * | 2012-12-26 | 2014-07-07 | Nitto Denko Corp | 表示装置 |
JP2014171121A (ja) * | 2013-03-04 | 2014-09-18 | Ricoh Co Ltd | 投影システム、投影装置、投影方法、及び投影プログラム |
JP2018000941A (ja) * | 2016-07-07 | 2018-01-11 | ディズニー エンタープライゼス インコーポレイテッド | インタラクティブな製品を用いたロケーションベースの体験 |
JP7037891B2 (ja) | 2016-07-07 | 2022-03-17 | ディズニー エンタープライゼス インコーポレイテッド | インタラクティブな製品を用いたロケーションベースの体験 |
JP2022050365A (ja) * | 2020-09-17 | 2022-03-30 | 神田工業株式会社 | 展示装置及び展示方法 |
JP7251828B2 (ja) | 2020-09-17 | 2023-04-04 | 神田工業株式会社 | 展示装置及び展示方法 |
JP7351561B1 (ja) | 2022-06-29 | 2023-09-27 | 株式会社Imuzak | マイクロレンズアレイを用いた画像伝達パネル、及びこれを用いた立体的2次元画像表示装置 |
WO2024005110A1 (ja) * | 2022-06-29 | 2024-01-04 | 株式会社Imuzak | マイクロレンズアレイを用いた画像伝達パネル、及びこれを用いた立体的2次元画像表示装置 |
JP2024004567A (ja) * | 2022-06-29 | 2024-01-17 | 株式会社Imuzak | マイクロレンズアレイを用いた画像伝達パネル、及びこれを用いた立体的2次元画像表示装置 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2008041315A1 (ja) | 2010-02-04 |
JP4939543B2 (ja) | 2012-05-30 |
US20100134410A1 (en) | 2010-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4939543B2 (ja) | 画像表示装置 | |
KR100812624B1 (ko) | 입체영상 기반 가상현실장치 | |
US7371163B1 (en) | 3D portable game system | |
KR101298848B1 (ko) | 표시 장치, 영상 표시 시스템, 및 영상 표시 방법 | |
US20200368616A1 (en) | Mixed reality gaming system | |
KR102296122B1 (ko) | 다단계 가상 객체 선택 | |
US8581966B2 (en) | Tracking-enhanced three-dimensional display method and system | |
KR100913173B1 (ko) | 3d 그래픽 처리장치 및 이를 이용한 입체영상 표시장치 | |
KR102275778B1 (ko) | 헤드 마운티드 디스플레이 장치 | |
US20170272735A1 (en) | Pulsed projection system for 3d video | |
JP5036875B2 (ja) | 画像表示装置及び画像表示システム | |
WO2015200406A1 (en) | Digital action in response to object interaction | |
CN103155006A (zh) | 图像显示装置、游戏程序、游戏控制方法 | |
TW201104494A (en) | Stereoscopic image interactive system | |
CN104380347A (zh) | 视频处理设备、视频处理方法和视频处理系统 | |
EP2902998A1 (en) | Display device, control system, and control programme | |
US20060214874A1 (en) | System and method for an interactive volumentric display | |
JP4624587B2 (ja) | 画像生成装置、プログラム及び情報記憶媒体 | |
WO2007114225A1 (ja) | 立体的二次元画像表示装置 | |
US10664103B2 (en) | Curved display apparatus providing air touch input function | |
KR101986687B1 (ko) | 3차원 스캐닝 방식을 포함한 홀로그램 박스를 이용하는 홀로그램 장치 및 그 방법 | |
US11767022B2 (en) | Apparatus for controlling augmented reality, method of implementing augmented reality by using the apparatus, and system of implementing augmented reality by including the apparatus | |
CN110928472B (zh) | 物品处理方法、装置及电子设备 | |
JP2010253264A (ja) | ゲーム装置、立体視画像生成方法、プログラム及び情報記憶媒体 | |
KR102294919B1 (ko) | 사용자의 시선과 손의 움직임 이벤트에 대응하는 홀로그램 영상을 출력하는 터미널 장치 및 그 제어 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 06811055 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008537372 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12443594 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 06811055 Country of ref document: EP Kind code of ref document: A1 |