WO2022202700A1 - Method, program, and system for displaying image three-dimensionally - Google Patents


Info

Publication number
WO2022202700A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, display, dimensional, pseudo, displaying
Prior art date
Application number
PCT/JP2022/012787
Other languages
French (fr)
Japanese (ja)
Inventor
ホースーン カン
Original Assignee
株式会社オルツ
ホースーン カン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2021092377A external-priority patent/JP2022146839A/en
Application filed by 株式会社オルツ, ホースーン カン filed Critical 株式会社オルツ
Publication of WO2022202700A1 publication Critical patent/WO2022202700A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 Details of the operation on graphic patterns
    • G09G5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/388 Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume

Definitions

  • The present invention relates to a method, a program, and a system for displaying images three-dimensionally.
  • When an image is displayed on a typical display device, the image is displayed two-dimensionally, because the display surface of the device is flat.
  • A special device has been developed for displaying images in three dimensions (for example, Patent Document 1).
  • An object of the present invention is to provide a method and the like capable of creating a pseudo-three-dimensional image in order to display an image three-dimensionally.
  • the present invention provides the following items.
  • (Item 1) A method for displaying an image three-dimensionally, comprising: receiving an image containing an image of an object; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and displaying the pseudo-three-dimensional image.
  • (Item 2) The method of item 1, wherein creating the pseudo-three-dimensional image includes creating a pseudo-three-dimensional animation as the pseudo-three-dimensional image by rotating the three-dimensional representation of the element around the image of the object.
  • (Item 3) The method of item 1 or 2, wherein a portion of the three-dimensional representation of the element is superimposed over the image of the object such that a portion of the image of the object is hidden by the three-dimensional representation of the element, and another portion of the three-dimensional representation of the element is superimposed under the image of the object such that it is hidden by the image of the object.
  • (Item 4) The method of any one of items 1 to 3, wherein the element includes a plurality of horizontal scan lines, and adding a three-dimensional representation of the element within the image includes adding a three-dimensional representation of the plurality of horizontal scan lines onto the image of the object.
  • (Item 5) The method of any one of items 1 to 4, wherein creating the pseudo-three-dimensional image includes generating a plurality of images with different viewpoints from the image and combining the plurality of images with different viewpoints in temporal succession to create a pseudo-three-dimensional animation as the pseudo-three-dimensional image.
  • (Item 6) The method of any one of items 1 to 5, wherein the pseudo-three-dimensional image is a pseudo-three-dimensional video, the method further comprising: synchronizing sound with the pseudo-three-dimensional image; and playing the synchronized sound while displaying the pseudo-three-dimensional image.
  • (Item 7) The method of item 6, wherein the sound changes based on movement in the image.
  • (Item 8)
  • (Item 12) The method of any one of items 1 to 11, wherein displaying the pseudo-three-dimensional image includes displaying the pseudo-three-dimensional image on a rotating display in which at least one member rotates about a first axis to form a planar display surface.
  • (Item 13) The method of item 12, wherein the rotating display is configured such that the orientation of the display surface can be changed, the method further comprising: detecting a user's position relative to the rotating display; and reorienting the display surface based on the detected position.
  • (Item 14) The method of item 13, wherein displaying the pseudo-three-dimensional image includes changing the orientation of the object in the pseudo-three-dimensional image based on the orientation of the display surface and displaying the pseudo-three-dimensional image on the display surface.
  • (Item 15) The method of any one of items 1 to 11, wherein displaying the pseudo-three-dimensional image includes displaying it on a rotating display in which at least one member rotates about a first axis and about a second axis substantially perpendicular to the first axis to form a substantially spherical display surface.
  • A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor and a display unit, the program causing the processor to perform processing comprising: receiving an image containing an image of an object; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and displaying the pseudo-three-dimensional image on the display unit.
  • A system for displaying an image three-dimensionally, comprising: receiving means for receiving an image containing an image of an object; creating means for creating, by processing the image, a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and display means for displaying the pseudo-three-dimensional image.
  • A storage medium storing a program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor and a display unit, the program causing the processor to perform processing comprising: receiving an image containing an image of an object; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and displaying the pseudo-three-dimensional image on the display unit.
  • (Item 18B) A storage medium according to item 18, comprising features according to one or more of the above items.
  • (Item 19) A method for displaying an image three-dimensionally, comprising: receiving an image; synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image; and playing said synchronized sound while displaying said image.
  • (Item 20) A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor, a display unit, and a sound output unit, the program causing the processor to perform processing comprising: receiving an image; synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image on the display unit; and reproducing the synchronized sound from the sound output unit while the image is displayed.
  • (Item 20A) The program according to item 20, including features according to one or more of the above items.
  • (Item 21) A system for displaying an image three-dimensionally, comprising: receiving means for receiving an image; synchronizing means for synchronizing sound with the image, the sound varying in response to movement in the image; display means for displaying the image; and reproducing means for reproducing the synchronized sound while the image is displayed.
  • (Item 22) A method of displaying an image on a display, comprising: detecting the position of a user's viewpoint with respect to the display; determining the portion of the image to be displayed on the display by processing the image, including: setting a virtual sphere centered at the user's viewpoint and having a radius equal to the distance between the user's viewpoint and the display; pasting the image onto the inner surface of the virtual sphere; and identifying the portion of the image pasted on the part of the inner surface of the virtual sphere corresponding to the display surface of the display; and displaying the determined portion of the image on the display surface of the display.
  • (Item 23) The method of item 22, wherein the image is represented in an equirectangular projection.
  • (Item 24)
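The virtual-sphere procedure of items 22 and 23 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name and parameters are hypothetical, and a centered display plane facing the viewer is assumed. Each display pixel defines a ray from the user's viewpoint; the ray's longitude and latitude on the virtual sphere index directly into an equirectangular image pasted on the sphere's inner surface.

```python
import math

def display_pixel_to_equirect(px, py, display_w, display_h,
                              distance, img_w, img_h):
    """Map a pixel on a flat display to a sample point in an
    equirectangular image pasted on a virtual sphere centered at the
    user's viewpoint (radius = viewpoint-to-display distance)."""
    # Display-plane coordinates relative to the display center, in the
    # same units as `distance`; the display faces the viewer along +z.
    x = px - display_w / 2.0
    y = py - display_h / 2.0
    z = distance
    # Direction of the ray from the viewpoint through this pixel.
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(x, z)          # longitude, in -pi..pi
    pitch = math.asin(y / r)        # latitude, in -pi/2..pi/2
    # Equirectangular lookup: longitude -> u, latitude -> v.
    u = (yaw / (2.0 * math.pi) + 0.5) * img_w
    v = (pitch / math.pi + 0.5) * img_h
    return u, v
```

Sampling every display pixel this way yields exactly the portion of the sphere's inner surface subtended by the display, so the displayed portion changes naturally as the detected viewpoint distance changes.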
  • FIG. 1A is a diagram showing how an image 10 is displayed;
  • FIG. 1B shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of one embodiment of the present invention.
  • FIG. 1C is a diagram showing an example in which the target image 11 is expressed three-dimensionally by further enhancing its perspective in the pseudo three-dimensional image 20 shown in FIG. 1B;
  • FIG. 1D is a diagram showing another example in which the target image 11 is expressed three-dimensionally by further enhancing its perspective in the pseudo three-dimensional image 20 shown in FIG. 1B;
  • FIG. 2A shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the invention;
  • FIG. 2B is a diagram showing an example in which the target image 11 is expressed three-dimensionally by further enhancing its perspective in the pseudo three-dimensional image 30 shown in FIG. 2A;
  • FIG. 1 shows an example of an image display device for displaying a pseudo three-dimensional image
  • FIG. 4B is a diagram showing an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation of the display surface shown in FIG. 4A;
  • FIG. 4D is a diagram showing an example of an image displayed on the display surface 23 facing the user position shown in FIG. 4C.
  • FIG. 4 is a diagram showing how an image is displayed on the display surface 23 of the rotary display 20;
  • FIG. 2 shows a rotating display 25 in one embodiment of the present invention;
  • FIG. 3 shows a rotating display 27 in another embodiment of the invention;
  • FIG. 2 schematically illustrates an example flow of a technique in one embodiment of the present invention;
  • FIG. 4 is a diagram when the distance between the viewpoint of the user U and the display 20 is small;
  • FIG. 4 is a diagram when the distance between the viewpoint of the user U and the display 20 is small;
  • FIG. 4 is a diagram showing a case where the distance between the viewpoint of the user U and the display 20 is large;
  • a diagram showing an example of the configuration of the user device 200.
  • FIG. 7 shows an example of a process 700 by the system 100 for displaying an image three-dimensionally.
  • FIG. 7 illustrates an example process 710 by system 100' for displaying an image in three dimensions;
  • FIG. 8 shows an example process 800 for displaying an image on a display.
  • "Image" refers to an image that can be displayed on a two-dimensional plane.
  • The image includes not only a "two-dimensional image" containing two-dimensional information (length × width) but also a "three-dimensional image" containing three-dimensional information (length × width × depth).
  • A "three-dimensional image" can be acquired, for example, using an RGB-D camera.
  • A "three-dimensional image" can also be obtained, for example, by estimating depth information for a two-dimensional image and adding the depth information to the two-dimensional image.
  • Images include still images and moving images.
  • A moving image is considered to be a plurality of temporally consecutive still images.
  • "Pseudo-three-dimensional" refers to a state that is not three-dimensional but appears to be three-dimensional.
  • "Displaying three-dimensionally" means displaying something that is not three-dimensional (for example, something in a two-dimensional plane) as if it were three-dimensional.
  • "Three-dimensional representation" means representing something that is not three-dimensional (for example, something in a two-dimensional plane) as if it were three-dimensional.
  • the three-dimensional representation includes a three-dimensional representation by adding shading, a three-dimensional representation by adding light and shade, and a three-dimensional representation by adding parallax.
  • the "pseudo three-dimensional effect” refers to the effect of appearing three-dimensionally due to visual illusion (optical illusion). Depending on the viewer's perception, there can be varying degrees of pseudo three-dimensional effect.
  • a “pseudo three-dimensional image” refers to an image that produces a “pseudo three-dimensional effect”.
  • "Object" refers to any object appearing in an image.
  • An object may be, for example, animate or inanimate.
  • An object may be, for example, a human, an animal, or a plant.
  • The inventors of the present invention have developed a method for three-dimensionally representing an image (a two-dimensional image) displayed on a flat display.
  • When this method is used, the image displayed on the flat display is rendered three-dimensionally, and a person viewing the flat display can be led to perceive the image as if it were displayed in three-dimensional space.
  • A pseudo-three-dimensional image is an image that produces such an illusory effect. With this method, an image can be represented three-dimensionally even if no three-dimensional model of it exists.
  • FIG. 1A shows how the image 10 is displayed.
  • the image 10 includes an image 11 of an object (a person in this example).
  • The target image 11 appears flat, that is, two-dimensional.
  • FIG. 1B shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of one embodiment of the present invention.
  • In the pseudo-three-dimensional image 20 shown in FIG. 1B, compared with the original image 10, an element different from the target image 11 (the cube 12 in this example) is added.
  • a cube 12 has been added to the image in a three-dimensional representation.
  • the cube 12 is represented as thicker on the side closer to the viewer and thinner on the side farther from the viewer.
  • the cube 12 has a sense of perspective.
  • the perspective of the cube 12 also causes the object image 11 to have a perspective, so that the object image 11 can appear three-dimensional.
  • The cube 12 is positioned around the target image 11 and overlaps it so that one portion of the cube 12 hides a portion of the target image 11, while another portion of the cube 12 is hidden by a portion of the target image 11. This may enhance the perspective of the cube 12 and, in turn, the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the cube 12 makes it easy to perceive the target image 11 as being represented three-dimensionally.
  • FIGS. 1C and 1D show examples in which the perspective of the target image 11 is further enhanced in the pseudo three-dimensional image 20 shown in FIG. 1B to express the target image 11 three-dimensionally.
  • the pseudo three-dimensional image 20 is a moving image, and the images shown in FIGS. 1C and 1D can be considered one frame of the moving image.
  • the cube 12 is rotated around its axis.
  • the axis is, for example, the central axis passing through the top and bottom surfaces of the cube 12 .
  • As the cube 12 rotates, the edges near the viewer and the edges far from the viewer change places, as shown in FIGS. 1C and 1D; the edges drawn thick and the edges drawn thin likewise alternate. This further enhances the perspective of the rotating cube 12.
  • the enhanced perspective of cube 12 also enhances the perspective of image of object 11, which may make image of object 11 appear more three-dimensional.
  • The cube 12 is rotated around the image 11 of the object. As the cube 12 rotates, the portion of the cube 12 that hides part of the target image 11 and the portion of the cube 12 that is hidden by part of the target image 11 alternate. This may further enhance the perspective of the cube 12 and, in turn, the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the rotating cube 12 makes it easy to perceive the target image 11 as three-dimensional.
  • the axis can be any axis.
  • the axis is preferably the axis whose rotation enhances the perspective of the cube 12 .
  • The axis may be, for example, a central axis passing through a side surface of the cube 12, an axis (a central axis or an off-center axis) passing through at least one surface of the cube 12, or an axis outside the cube 12.
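The rotating-cube overlay described above can be sketched as follows. This is an illustrative sketch under stated assumptions (the function name, orthographic projection, and midpoint-based depth test are hypothetical; the patent does not specify an implementation): a wireframe cube is rotated about its vertical center axis, and its edges are split into those to be drawn over the subject image and those to be drawn under it, producing the alternating occlusion that creates the perspective cue.

```python
import math

def rotating_cube_overlay(angle, size=2.0):
    """Wireframe of a cube rotating about its vertical center axis.

    Returns (front_edges, back_edges): 2D line segments whose midpoint
    lies in front of (z > 0) or behind (z <= 0) the subject plane, so a
    renderer can draw front edges over the subject image and back edges
    under it (the subject then hides them)."""
    h = size / 2.0
    c, s = math.cos(angle), math.sin(angle)
    verts = []
    for i in range(8):                  # corner index bits -> signs
        sx = h if i & 1 else -h
        sy = h if i & 2 else -h
        sz = h if i & 4 else -h
        # Rotate (sx, sz) about the vertical (y) axis.
        verts.append((sx * c - sz * s, sy, sx * s + sz * c))
    # The 12 cube edges connect vertices whose indices differ in one bit.
    edges = [(a, b) for a in range(8) for b in range(8)
             if a < b and bin(a ^ b).count("1") == 1]
    front, back = [], []
    for a, b in edges:
        (x0, y0, z0), (x1, y1, z1) = verts[a], verts[b]
        seg = ((x0, y0), (x1, y1))      # orthographic projection
        (front if z0 + z1 > 0 else back).append(seg)
    return front, back
```

A renderer could additionally draw the front edges thicker than the back ones, matching the thick-near / thin-far depiction described for FIG. 1B.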
  • In the examples above, the image is represented three-dimensionally by a visual effect produced by elements other than the target image 11. In one embodiment of the present invention, however, auditory effects can be used, in addition to or instead of visual effects, to represent images three-dimensionally. It is assumed that the flat display in this embodiment has a speaker or is connected to a speaker.
  • Image 10 is a moving image that shows how the object moves.
  • Image 10 thus includes an image 11 of a moving object.
  • the target image 11 moves from the state shown in FIG. 1E to the state shown in FIG. 1F in the pseudo three-dimensional image 20 generated from such an image 10 .
  • This movement causes the image of the subject's arm to appear to be outside the cube 12 .
  • Sound may be played from the speaker, and the sound can change as the arm extends farther outside the cube 12.
  • The sound played at the moment the subject's arm image touches the cube 12 creates the illusion that the cube 12 exists in three-dimensional space, and the subsequent change in sound conveys the distance between the cube 12 and the arm image.
  • The change in sound can be, for example, a change in at least one of loudness, pitch, and timbre.
  • For example, the sound may grow louder, or softer, as the subject's arm image extends farther from the cube 12; its pitch may rise or fall with distance; or its timbre may shift toward a different tone as the arm extends farther from the cube 12.
  • Such an auditory effect emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the image 11 of interest.
  • the sound may vary in response to other motions of the subject's image outside the cube 12 in addition to or alternatively to movement of the subject's arm image away from the cube 12 .
  • the sound can be varied to match the movement.
  • the sound can be louder (or softer) as the subject's arm image moves down in an arc outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves up.
  • the presence of the cube 12 can be emphasized by playing and varying the sound according to the relationship between the cube 12 and at least a portion of the image of interest 11, and the image of interest 11 is: If it is represented three-dimensionally, it becomes easier to be illusioned.
  • The relationship between the cube 12 and at least a portion of the target image 11 may be, for example, the distance between the cube 12 and a portion of the target image 11 (a static relationship), the movement of a portion of the target image 11 relative to the cube 12 (a dynamic relationship), as described above, or some other relationship.
  • In the examples above, the relationship with the cube 12 changes as the target image 11 moves, but the present invention is not limited to this.
  • When the cube 12 moves, the relationship between the cube 12 and at least a portion of the target image 11 also changes, and the sound may accordingly be reproduced and varied according to that relationship.
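The static (distance) and dynamic (movement) relationships above can be mapped to sound parameters. The following is a minimal sketch, assuming a distance normalized to [0, 1] and a linear mapping; both the function name and the specific mapping are assumptions, since the patent leaves the mapping open.

```python
def sound_params(distance, inside):
    """Map the cube-to-arm relationship to playback parameters.

    `distance`: normalized distance of the arm image outside the cube
    boundary, in [0, 1].  `inside`: True while the subject image stays
    entirely inside the boundary, in which case no sound is played.
    Volume rises and pitch falls as the arm extends farther outside
    (one of the variations described in the text)."""
    if inside:
        return {"volume": 0.0, "pitch": 1.0}   # silent inside the cube
    volume = min(1.0, 0.2 + 0.8 * distance)    # louder with distance
    pitch = max(0.5, 1.0 - 0.25 * distance)    # lower with distance
    return {"volume": volume, "pitch": pitch}
```

The opposite mappings (softer / higher with distance), or a timbre change, would fit the text equally well; the point is only that the parameters track the boundary relationship.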
  • In FIGS. 1H-1J, the cube 12 is rotated about its axis, as in FIGS. 1C and 1D.
  • In FIG. 1H, the image of the object 11 is contained within the cube 12, so no sound is reproduced from the speaker.
  • the subject's arm image appears to be outside the cube 12, as shown in FIG. 1I.
  • Sound may be played from the speaker.
  • The sound played at the moment the cube 12 touches the subject's arm image creates the illusion that the cube 12 exists in three-dimensional space, and the subsequent change in sound conveys the distance between the cube 12 and the arm image.
  • the change in sound can be, for example, at least one of loudness, pitch of sound, and timbre of sound.
  • For example, the sound may grow louder, or softer, as the cube 12 moves farther from the subject's arm image; its pitch may rise or fall; or its timbre may shift toward a different tone with increasing distance.
  • Such an auditory effect emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the image 11 of interest.
  • the sound may be changed according to the movement of the target image outside the cube 12 .
  • the sound can be changed according to the movement.
  • the sound can be louder (or softer) as the subject's arm image moves down in an arc outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves up.
  • For example, the sound may grow louder or softer, rise or fall in pitch, or shift toward a different timbre as the subject's arm image extends farther from the cube 12.
  • Such an auditory effect also emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the image 11 of interest.
  • In the examples above, the boundary is represented by the cube 12, but the boundary is not limited to this.
  • the boundary can be any boundary as long as it is defined near, eg, around the image of interest.
  • the boundary may be visible, such as cube 12, or invisible. If the boundaries are not visible, auditory effects will render the image three-dimensionally without relying on visual effects.
  • the boundary can have any shape. For example, it may be a shape that surrounds the target image (e.g., spherical, elliptical, cylindrical, prismatic, etc.), or a shape that does not surround the target image (e.g., planar, curved, hemispherical). shape, etc.).
  • the boundary may change over time or may not change over time. For example, as in the example above where the boundary is represented by cube 12, the boundary may rotate about an axis over time.
  • the sound may be, for example, a sound that is directly related to the image, a sound that is somewhat related to the image, or a sound that is unrelated to the image.
  • The sound is preferably related to the image, and more preferably directly related to the image.
  • Sound that is directly related to the image may, for example, have been synchronized to the original image 10 (eg, if the original image 10 was a still image with sound or a moving image with sound).
  • Sounds that are somewhat related to the image are, for example, sounds associated with what the image depicts (e.g., wing sounds or chirping for an image of a bird, engine sounds or a horn for an image of a car). Even if sound was synchronized with the original image 10, a sound other than that synchronized sound can be used.
  • In the examples described above, visual effects and/or auditory effects are used to represent images in three dimensions; these effects can also be combined to represent an image three-dimensionally.
  • FIG. 2A shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the invention.
  • In the pseudo-three-dimensional image 30 shown in FIG. 2A, compared with the original image 10, an element different from the target image 11 is added and displayed.
  • Horizontal scan lines 13 have been added over the image 11 of the object.
  • Horizontal scan lines 13 are then added to represent the contour shape of the object, based on the three-dimensional information contained in or derived from the image 10 .
  • the horizontal scan line 13 is represented as curving along the curved surface of the subject's face and curving along the contours of the subject's nose.
  • Because the horizontal scan lines 13 express the contour shape of the object, the target image 11 gains a sense of perspective and can appear three-dimensional.
  • Although the target image 11 itself is a two-dimensional image, the presence of the horizontal scan lines 13 makes it easy to perceive the target image 11 as being represented three-dimensionally.
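The contour-following scan lines can be sketched as a vertical displacement proportional to per-pixel depth. This is a minimal sketch, assuming a depth map normalized to [0, 1]; the function names and the `amplitude` tuning parameter are assumptions, since the patent only requires three-dimensional information contained in, or derived from, the image.

```python
def scanline_points(depth_row, y, amplitude=8.0):
    """Points of one horizontal scan line drawn over the subject.

    Each point is lifted in proportion to the depth at that pixel, so
    the line bulges along the subject's contours (cheeks, nose, ...).
    `depth_row`: per-pixel depth in [0, 1], where 1 = closest to viewer.
    `amplitude`: maximum vertical displacement in pixels (hypothetical
    tuning parameter, not from the patent text)."""
    return [(x, y - amplitude * d) for x, d in enumerate(depth_row)]

def scanlines(depth_map, spacing=10, amplitude=8.0):
    """Build one scan line every `spacing` rows of a 2D depth map."""
    return [scanline_points(depth_map[y], y, amplitude)
            for y in range(0, len(depth_map), spacing)]
```

Drawing the resulting polylines over the target image reproduces the curving of the lines along the face and nose described for FIG. 2A.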
  • FIG. 2B shows an example in which the target image 11 is three-dimensionally expressed by further enhancing the perspective of the target image 11 in the pseudo three-dimensional image 30 shown in FIG. 2A.
  • the cube 12 described above with reference to FIG. 1B has been added around the image 11 of interest.
  • the perspective of the cube 12 also causes the image of the object 11 to have a perspective, so that the image of the object 11 can appear three-dimensional.
  • the cube 12 can be rotated about its axis as described above with reference to Figures 1C and 1D. As a result, the pseudo three-dimensional image 30 becomes a moving image. By rotating the cube 12 about its axis, the perspective of the image 11 of the object can be further enhanced.
  • Furthermore, the presence of the cube 12 can be emphasized by playing and varying sound according to the relationship between the cube 12 and at least a portion of the target image 11, further enhancing the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the horizontal scan lines 13, together with the presence of the cube 12 (or the rotating cube 12) and of sounds reproduced and varied according to the relationship between the cube 12 and a portion of the target image 11, makes it all the more easy to perceive the target image 11 as being represented three-dimensionally.
  • In the examples above, the element added to the image overlaps the target image 11, but the added element does not necessarily have to overlap the target image 11.
  • Elements can be added anywhere in the image as long as it produces a pseudo three-dimensional effect.
  • the added elements can be placed adjacent to the image of interest 11, as shown in the pseudo-three-dimensional image 20' of FIG. 2C.
  • the three-dimensional representation of the added elements introduces some perspective in the image and may also introduce some perspective in the image 11 of interest. This allows the image 11 of the object to appear three-dimensional.
  • In the examples above, a single element is added, but the number of added elements is not limited to one.
  • multiple elements 12 can be added to the image 10, as shown in the pseudo-three-dimensional image 20'' of FIG. 2D.
  • The added elements may, for example, each be rotated about their respective axes, or at least some of the elements may be rotated about a common axis, as shown in the pseudo-three-dimensional image 20'' of FIG. 2D.
  • the presence of the added element or the rotation of the added element creates perspective in the image and may also create perspective in the image 11 of interest. This allows the image 11 of the object to appear three-dimensional.
  • the pseudo-three-dimensional image gives a strong impression that it is a virtual image.
  • 3A-3B show an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the present invention.
  • the pseudo three-dimensional image is a moving image
  • the still images 41 and 42 shown in FIGS. 3A and 3B can each be considered one frame of the moving image.
  • In FIG. 3A, a still image 41, created from the image 10 shown in FIG. 1A and viewing the object from a first line-of-sight direction, is displayed.
  • a technique for creating images with different line-of-sight directions from a given image may be a technique known in the art. For example, based on the three-dimensional information contained in the image or the three-dimensional information derived from the image, images with different line-of-sight directions can be created from a given image. For example, machine learning techniques can be used to create images with different viewing directions from an image. For example, if image 10 is a moving image, a still image can be generated for each frame of the moving image.
  • the first line-of-sight direction is the line-of-sight direction when the object is viewed from a more left direction than the line-of-sight direction of the image 10 shown in FIG. 1A.
  • in FIG. 3B, a still image 42, created from the image 10 shown in FIG. 1A and viewing the object from the second line-of-sight direction, is displayed.
  • the method of creating images with different line-of-sight directions from a certain image may be a method known in the art.
  • the second line-of-sight direction is the line-of-sight direction when the object is viewed from a more right direction than the line-of-sight direction of the image 10 shown in FIG. 1A.
  • the created still images 41 and 42 are temporally continuously combined to generate and display a pseudo three-dimensional moving image.
  • a pseudo three-dimensional moving image having two frames of still images from different viewpoints gives the target image 11 a sense of perspective due to the parallax generated from the different viewpoints. This allows the image 11 of the object to appear three-dimensional.
  • still images 41 and 42 may appear alternately and repeatedly. This can generate animations of arbitrary length.
  • the still images 41 and 42 created from each frame may appear continuously in frame order in the generated moving image.
  • the frame rate can be set to any value.
  • by maintaining the frame rate of the image 10, a pseudo three-dimensional moving image twice the length of the image 10 may be generated.
  • alternatively, by doubling the frame rate, a pseudo three-dimensional moving image of the same length as the image 10 can be generated.
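The sequencing and frame-rate arithmetic above can be sketched as follows (a minimal illustration; the function names and the use of strings as stand-ins for frame images are assumptions):

```python
def build_pseudo3d_frames(view_a, view_b, cycles):
    """Alternate two viewpoint stills to form the frames of a
    pseudo three-dimensional moving image; `cycles` repetitions of
    the pair give an animation of arbitrary length."""
    return [view_a, view_b] * cycles

def duration_seconds(num_frames, fps):
    """Playback length of a clip at a given frame rate."""
    return num_frames / fps

# A source with N frames yields 2N pseudo-3D frames (one pair per
# source frame): at the source frame rate the clip is twice as long,
# while at double the frame rate it keeps the original length.
frames = build_pseudo3d_frames("still_41", "still_42", cycles=3)
```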
  • the above-described elements may be added to the generated pseudo three-dimensional video, and sound may be played along with the image. Thereby, the sense of perspective of the target image 11 can be enhanced. If the element 12 is added, the element 12 can be rotated around its axis, which can further enhance the sense of perspective of the target image 11.
  • in this way, the target image can be represented in 3D.
  • with the technique described above, even if there is no 3D model of the target and only a 2D image of the target exists, the target image can be represented in 3D.
  • FIG. 4A shows an example of an image display device for displaying a pseudo three-dimensional image.
  • the image display device is a rotary display 20 (also called a "hologram display") in which at least one member 21 rotates to form a display surface. At least one member 21 is rotatable around a rotation axis C1. By rotating at least one linear member 21, it is possible to form a planar display surface.
  • a light source (for example, an LED) is arranged on at least one member 21. Light emission from the light source on at least one member 21 is controlled according to the rotation angle of at least one member 21, so that an image can be projected onto the display surface by the afterimage effect.
  • because the background remains visible while at least one member 21 rotates, the image appears as if it is floating in the air.
  • the frame rate of the image displayed on the rotary display 20 depends on the rotation speed of at least one member 21.
  • the frame rate of images displayed on rotating display 20 is significantly lower than the frame rate of images displayed on typical display devices.
  • the frame rate of rotating display 20 can be, for example, from about 20 fps to about 40 fps, such as about 30 fps.
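The dependence of frame rate on rotation speed, and the angle-dependent light-emission control described above, can be sketched as follows. This is a simplified model that assumes one full image is painted per revolution; the function names and the angle-to-column mapping are illustrative, not taken from the patent.

```python
def frame_rate_from_rpm(rpm):
    """A rotary display paints one full image per revolution, so its
    frame rate (fps) equals revolutions per second."""
    return rpm / 60.0

def column_for_angle(angle_deg, num_columns):
    """Which column of the source image the light sources on the
    rotating member should emit at a given rotation angle
    (afterimage / persistence-of-vision rendering)."""
    return int((angle_deg % 360) / 360 * num_columns)
```

For example, under this model a member spinning at 1800 rpm yields 30 fps, consistent with the "about 20 fps to about 40 fps" range stated above.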
  • the image displayed on the rotary display 20 can be rougher than the image displayed on a typical display device. By displaying a rough image, the impression that the image displayed on the rotary display 20 is a virtual image is enhanced.
  • the rotary display 20 has a main body 22.
  • the main body 22 is configured to be rotatable around the rotation axis C2.
  • the orientation of the display surface formed by the at least one member 21 can be changed.
  • the rotatable display 20 can detect the position of the user viewing the rotatable display 20 by detection means (not shown), and can rotate the main body 22 around the rotation axis C2 so that the display surface faces the detected position of the user U.
  • FIG. 4B shows an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation of the display surface shown in FIG. 4A.
  • the target image 11 is displayed on the display surface 23, similar to the example shown in FIG. 1A.
  • the object faces the front.
  • the user U viewing the display surface sees the front side of the object.
  • the display surface 23 can display the pseudo three-dimensional image described above with reference to FIGS. 1B to 3B.
  • by displaying the pseudo three-dimensional image on the display surface 23, through which the background of the rotary display 20 is visible, the pseudo three-dimensional image appears as if it is floating in the air, and its three-dimensional feel can be emphasized.
  • the impression that the pseudo three-dimensional image is a virtual image is enhanced.
  • the rotatable display 20 uses the detection means (not shown) to detect the position of the user U and rotates the main body 22 clockwise around the rotation axis C2 so as to change the orientation of the display surface.
  • the display surface of the rotary display 20 faces the user U.
  • the user U can see the display surface of the rotary display 20 even after moving.
  • FIG. 4D shows an example of an image displayed on the display surface 23 facing the user position shown in FIG. 4C.
  • since the user position shown in FIG. 4C has moved to the left relative to the user position shown in FIG. 4A, the user U can view the forward-facing object in the image displayed on the display surface shown in FIG. 4A from the left side. Therefore, on the display surface 23 facing the user position shown in FIG. 4C, an image 11' of the front-facing object viewed from the left side can be displayed. This may give the user U the illusion that the object in the object images 11, 11' is a three-dimensional object. Such an illusion can be further enhanced by having the display surface of the rotary display 20 directed toward the user U at both the user position shown in FIG. 4A and the user position shown in FIG. 4C. This is because the display surface always faces the user U, so the user U does not easily perceive that the display surface is planar.
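The two behaviours described here, turning the main body toward the detected user and choosing a viewpoint image that matches the user's angular position, can be sketched minimally as below. The coordinate convention, angle units, and the small set of pre-rendered views are all assumptions made for illustration.

```python
import math

def yaw_to_face(display_xy, user_xy):
    """Yaw angle (degrees) that points the display surface normal at
    the detected user position."""
    dx = user_xy[0] - display_xy[0]
    dy = user_xy[1] - display_xy[1]
    return math.degrees(math.atan2(dy, dx))

def pick_view(user_angle_deg, views):
    """Choose the pre-rendered image whose line-of-sight direction is
    closest to the user's angular position, so a user who moves to the
    left sees the object from the left (as in FIGS. 4C-4D)."""
    return min(views, key=lambda v: abs(v[0] - user_angle_deg))[1]

# hypothetical pre-rendered views, keyed by line-of-sight angle
views = [(-45, "image 11' (seen from the left)"),
         (0, "image 11 (front)"),
         (45, "image seen from the right")]
```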
  • the user U can face the display surface 23 from any angular position with respect to the rotatable display 20, and can visually recognize an image without distortion. For example, whatever angular position the user U1 views the rotary display 20 from, the image of the horse displayed on the display surface 23 is presented to the user U1 without distortion, as shown in FIG. 4E(a).
  • if the display surface 23 of the rotary display 20 does not face the user U2, however, the user U2 sees a distorted image.
  • for example, the horse image displayed on the display surface 23 is distorted and presented to the user U2 as shown in FIG. 4E(b).
  • the rotating display 25 rotates at least one member 26 about a first rotation axis C1 and rotates at least one member 26 about a second rotation axis C2.
  • a three-dimensional display surface can be formed.
  • the second axis of rotation C2 may be substantially perpendicular to the first axis of rotation C1.
  • the direction of rotation about the first rotation axis C1 is indicated by RC1
  • the direction of rotation about the second rotation axis C2 is indicated by RC2.
  • a light source (e.g., an LED) is arranged on at least one member 26.
  • Light emission from the light source on at least one member 26 is controlled according to the rotation angle of the at least one member 26, so that an image can be projected onto the substantially spherical display surface by the afterimage effect.
  • the rotary display 25 can form a substantially spherical display surface, as shown in FIG. 4F(b). Except for the configuration described above, the rotary display 25 may have the same configuration as the rotary display 20 described above.
  • the pseudo-three-dimensional image described above with reference to FIGS. 1B to 3B can be displayed on the substantially spherical display surface of the rotary display 25.
  • the pseudo-three-dimensional image appears as if it is floating in the air.
  • the three-dimensional feel of the pseudo three-dimensional image can thereby be emphasized.
  • the impression that the pseudo three-dimensional image is a virtual image is enhanced.
  • the undistorted pseudo three-dimensional image can be viewed from any angular position with respect to the rotary display 25, the three-dimensional feel of the pseudo three-dimensional image can be emphasized.
  • the rotary displays 20 and 25 described above rotate at least one member 21 or 26 to form a single planar display surface or a single substantially spherical display surface, respectively.
  • the invention is not limited to this.
  • FIG. 4G shows an example of the rotating display 27 in one embodiment.
  • the rotary display 27 is configured to form a first display surface 28 by rotating at least one first member and a second display surface 29 by rotating at least one second member. The first member and the second member can each be rotated about two axes to form a substantially spherical display surface. In the example shown in FIG. 4G, the first member and the second member are rotated about a common axis (the body axis). Except for the configuration described above, the rotary display 27 may have a configuration similar to the rotary display 20 or 25 described above.
  • the display area can be expanded. For example, a separate image may be displayed on each display surface, or one image may be displayed over a plurality of display surfaces. Multiple display surfaces expand the range of video expression.
  • the pseudo three-dimensional image described above with reference to FIGS. 1B to 3B can be displayed on the substantially spherical display surface of the rotary display 27.
  • by displaying the pseudo three-dimensional image on the display surface of the rotary display 27 through which the background is visible, the pseudo three-dimensional image appears as if it is floating in the air, and its three-dimensional feel can be emphasized. Further, by displaying the pseudo three-dimensional image on the display surface of the rotary display 27 having a significantly low frame rate, the impression that the pseudo three-dimensional image is a virtual image is enhanced. Furthermore, since the undistorted pseudo three-dimensional image can be visually recognized from any angular position with respect to the rotary display 27, its three-dimensional feel can be emphasized. Furthermore, a variety of pseudo three-dimensional images can be expressed using the plurality of display surfaces.
  • in the examples above, the pseudo three-dimensional image is displayed on the special rotary displays 20, 25, and 27, but the image display device that displays the pseudo three-dimensional image is not limited to these.
  • the pseudo three-dimensional image can be displayed on any other image display device.
  • the image display device can be a transparent display through which the background can be seen; this is preferable because it enhances the pseudo-three-dimensional effect of the displayed pseudo-three-dimensional image.
  • the image display device may be a display with a significantly low frame rate. When the pseudo three-dimensional image is displayed at a low frame rate, the impression that the pseudo three-dimensional image is a virtual image is enhanced, and the pseudo three-dimensional effect can be enhanced.
  • FIG. 5A schematically illustrates an example flow of a technique in one embodiment of the invention.
  • an image 51 that is the basis of an image displayed as a virtual reality image is acquired.
  • the image 51 is preferably represented in a specific projection.
  • the specific projection may be, for example, the equirectangular projection (also called the equidistant cylindrical projection).
  • in the equirectangular projection, the lines of latitude and longitude are straight lines that intersect at right angles and at regular intervals. As a result, distances along the meridians are represented correctly.
  • the equirectangular projection is a projection that is often used when displaying virtual reality images. As shown in FIG. 5A, an image 51 represented in the equirectangular projection appears to contain distortion.
  • the image 51 is pasted on the inner surface of the virtual sphere 52 .
  • by pasting the image 51 on the inner surface of the virtual sphere 52, a natural image without distortion can be generated.
  • the virtual sphere 52 is a virtual sphere whose center is the viewpoint of the user U and whose radius is the distance between the viewpoint of the user U and the display 20 that displays the virtual reality image. For example, when the distance between the user U's viewpoint and the display 20 is small, the diameter of the virtual sphere 52 is small; when the distance between the user U's viewpoint and the display 20 is large, the diameter of the virtual sphere 52 is large.
  • the distance between the viewpoint of the user U and the display 20 can be measured, for example, by sensing means (not shown) that the display 20 may have.
  • the detection means can detect the position of the user's U eyes and measure the distance between the user's U viewpoint and the display 20 using techniques known in the field of distance measurement.
  • the portion of the image 51 pasted on the part of the inner surface of the virtual sphere 52 corresponding to the display surface of the display 20 is specified, and the image 53 of the specified portion is displayed on the display surface of the display 20.
  • User U can see image 53.
  • when the distance between the user U's viewpoint and the display 20 is small, the diameter of the virtual sphere 52 is small, so the portion of the image pasted on the part of the inner surface of the virtual sphere 52 corresponding to the display surface of the display 20 becomes relatively large.
  • conversely, when the distance is large, the diameter of the virtual sphere 52 is large, so the portion of the image pasted on the corresponding part of the inner surface becomes relatively small.
  • consequently, an object in the foreground of the image is perceived as close to the user U, and an object in the background of the image is perceived as far from the user U, regardless of whether the user U approaches or moves away from the display. Since the perspective of the image displayed on the display 20 is maintained in this manner, the user U can view the image through the display 20 with a real-world feeling. In other words, the user U can experience virtual reality images through the display 20.
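The relationship between viewing distance, the virtual sphere 52, and how much of the image 51 lands on the display surface can be sketched as below. This is a simplified flat-display approximation with illustrative parameter names, not a formula from the patent.

```python
import math

def visible_columns(display_width, distance, image_width):
    """Number of pixel columns of the equirectangular image 51 that
    fall on the part of the virtual sphere 52 covered by the display.

    The sphere is centred on the user's viewpoint with radius equal to
    the viewpoint-to-display distance, so a smaller distance (smaller
    sphere) maps a larger portion of the image onto the display.
    """
    # angle the display subtends at the viewpoint, in degrees
    half_angle = math.degrees(math.atan2(display_width / 2, distance))
    return int(image_width * (2 * half_angle) / 360)
```

Under this model, moving closer to the display enlarges the displayed portion of the image, which is what keeps the perceived perspective of foreground and background objects consistent as the user moves.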
  • the image is displayed on the rotary display 20, but the present invention is not limited to this.
  • the image can be displayed on any display as long as the distance between the user U and the display can be measured.
  • the technique of three-dimensionally representing an image and the technique of providing a virtual reality image described above can be implemented, for example, by the system 100 for three-dimensionally displaying an image, which will be described later.
  • FIG. 6A shows an example of the configuration of a system 100 for three-dimensional display of images.
  • the system 100 comprises receiving means 110 , creating means 120 and displaying means 130 .
  • the receiving means 110 are arranged to receive images.
  • the receiving means 110 can receive images in any manner.
  • the received image contains the image of the object.
  • the receiving means 110 may receive an image from outside the system 100, or may receive an image from inside the system 100 (for example, from a storage means that the system may have).
  • the receiving means 110 may, for example, receive the image from a storage medium connected to the system 100, or may receive the image via a network connected to the system 100.
  • the type of network does not matter, and any network such as the Internet or LAN can be used.
  • the received image can be in any data format.
  • the received image may be a two-dimensional image containing two-dimensional information (length x width) or a three-dimensional image containing three-dimensional information (length x width x depth).
  • the received image is passed to the creation means 120.
  • the creating means 120 is configured to create a pseudo three-dimensional image by processing the image.
  • the creating means 120 can create a pseudo three-dimensional image by, for example, processing the image to add to it a three-dimensional representation of an element separate from the target image (see, for example, FIGS. 1B-2D).
  • the three-dimensional representation of the elements includes, for example, at least one of shading the elements, lighting the elements, giving the elements varying sizes, or giving the elements perspective.
  • the processing by the creating means 120 may be image processing known in the art.
  • the creating means 120 can create a pseudo-three-dimensional animation, for example, by rotating the three-dimensional representation of the elements added in the image around the target image.
  • a pseudo three-dimensional moving image is preferable in that it enhances the pseudo three dimensional effect of the target image.
  • the creating means 120 can add the three-dimensional representation of the element such that one part of it is superimposed over the image of the object, hiding part of the image of the object, and another part of it is superimposed under the image of the object, so that said other part of the three-dimensional representation of the element is hidden by the image of the object. This can further enhance the pseudo three-dimensional effect of the image of the object.
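This over/under occlusion is essentially painter's-algorithm compositing: draw the rear part of the element, then the target image, then the front part of the element. A toy sketch, with layers represented as sparse pixel dictionaries and layer contents made up purely for illustration:

```python
def composite(background, element_back, target, element_front):
    """Draw layers back to front: the rear part of the element's 3-D
    representation first (so the target image hides it), then the
    target image, then the front part (so it hides part of the
    target).  Each layer maps (x, y) -> pixel value; later layers
    overwrite earlier ones."""
    frame = dict(background)
    for layer in (element_back, target, element_front):
        frame.update(layer)
    return frame

frame = composite(
    background={(0, 0): "bg", (1, 0): "bg", (2, 0): "bg"},
    element_back={(0, 0): "ring"},    # behind the target image
    target={(0, 0): "obj", (1, 0): "obj"},
    element_front={(1, 0): "ring"},   # in front of the target image
)
```

Rotating the element around the target then amounts to recomputing, per frame, which pixels of the element belong to the back layer and which to the front layer.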
  • the element can be any object, and can have any shape, size, color, etc.
  • the creation means 120 can add, for example, a three-dimensional representation of a plurality of horizontal scanning lines onto the target image.
  • the three-dimensional representation of the plurality of horizontal scanlines can be scanlines drawn along the three-dimensional outline of the object.
  • the three-dimensional contour shape of the object can be determined, for example, based on three-dimensional information contained in the image or derived from the image.
  • the process of deriving three-dimensional information from images can be performed, for example, by techniques known in the art.
  • the process of deriving 3D information from images can be performed using an AI model capable of estimating depth information from images.
  • the creating means 120 can also create a pseudo three-dimensional moving image, for example, by creating a plurality of images with different viewpoints from the image (see, for example, FIGS. 3A and 3B) and combining the plurality of images with different viewpoints temporally continuously.
  • a plurality of images from different viewpoints can be created, for example, based on three-dimensional information contained in the images or three-dimensional information derived from the images.
  • images with different viewpoints can be created by setting a virtual viewpoint and estimating how it looks from the virtual viewpoint based on three-dimensional information.
  • a plurality of images from different viewpoints can be created using techniques known in the art. For example, images from multiple viewpoints can be created using an AI model capable of producing image pairs with parallax.
  • the pseudo three-dimensional image created by creating means 120 is passed to display means 130.
  • the display means 130 is configured to display a pseudo three-dimensional image.
  • the display means 130 can be any display means as long as it can display an image.
  • the display means 130 is, for example, a liquid crystal display, an LED display, or the like, but is not limited to these.
  • display means 130 may be a rotating display in which at least one member rotates to form a display surface.
  • the rotating display can be, for example, rotating display 20, 25, 27, etc., described above.
  • the display means 130 can be configured so that the orientation of the display surface can be changed.
  • the display means 130 may be able to change the orientation of the display surface using any mechanism.
  • the system 100 may further comprise detection means configured to detect the position of the user positioned in front of the display means 130 .
  • the detection means can be any sensor.
  • the detection means can be, for example, a camera.
  • the system 100 can change the orientation of the display surface of the display means 130 so that the display surface of the display means 130 faces the position of the user detected by the detection means. Thereby, the user can always see the display surface of the display means 130 from the front.
  • FIG. 6B shows an example configuration of a system 100' for three-dimensionally displaying an image in another embodiment.
  • the system 100' has the same configuration as the system 100, except that it includes means for synchronously reproducing sound for enhancing the pseudo three-dimensional effect of the pseudo three dimensional image.
  • the same reference numerals are given to the same configurations as those described above with reference to FIG. 6A, and detailed description thereof will be omitted.
  • the system 100' comprises receiving means 110, creating means 120, displaying means 130, synchronizing means 140 and reproducing means 150.
  • the receiving means 110 is configured to receive an image.
  • the received image is passed to the creating means 120.
  • the creating means 120 is configured to create a pseudo three-dimensional image by processing the image.
  • the pseudo three-dimensional image created by creating means 120 is passed to display means 130 and synchronization means 140.
  • the display means 130 is configured to display a pseudo three-dimensional image.
  • the synchronizing means 140 is configured to synchronize the sound with the image.
  • the image may be the pseudo three-dimensional image created by creating means 120 or the image received by receiving means 110.
  • Synchronizer 140 can synchronize the sound with the image using any technique known in the art of motion picture creation.
  • the sound can be any sound.
  • the sound may be, for example, a sound that is directly related to the image, a sound that is somewhat related to the image, or a sound that is unrelated to the image.
  • preferably, the sound is a sound that is at least somewhat related to the image, and more preferably a sound directly related to the image.
  • sound directly related to the image may be, for example, sound that was already synchronized to the image received by the receiving means 110 (for example, if the image was a still image with sound or a moving image with sound, the sound that was synchronized to that image).
  • sounds that are somewhat related to an image are, for example, sounds that are associated with the image (e.g., wing sounds or chirping for a bird image, running sounds or horn sounds for a car image). Even if a sound was already synchronized to the image, a sound other than the synchronized sound can be used.
  • Synchronization means 140 can synchronize sounds such that when the synchronized sounds are played, they appear to change based on motion in the image.
  • the synchronization means 140 can synchronize sounds so that the sound is heard to change according to the relationship between the three-dimensional representation of the elements in the pseudo three-dimensional image and the target image, as described above with reference to FIGS. 1E-1J.
  • the change in sound can be, for example, at least one of loudness, pitch of sound, and timbre of sound.
  • the sound can be synchronized such that the sound is played when at least a portion of the image of interest touches the three-dimensional representation of the element.
  • for example, the sounds can be synchronized such that the sound becomes louder or quieter as at least a portion of the image of the object extends farther from the three-dimensional representation of the element, becomes higher or lower as it extends farther, or approaches a different timbre as it extends farther.
  • the sound can be synchronized such that when at least a portion of the image of interest moves outside the three-dimensional representation of the element, the sound appears to change with the movement.
  • the synchronizing means 140 can synchronize the sounds so that the sounds change according to the relationship between the boundary set in the pseudo three-dimensional image and the target image. This corresponds to the example above where the 3D representation of the elements in the pseudo 3D image is invisible.
  • the sounds can be synchronized such that the sound is played in response to at least a portion of the image of the object crossing the boundary. For example, the sounds can be synchronized such that the sound becomes louder or quieter as at least a portion of the image of the object extends farther past the boundary, becomes higher or lower as it extends farther, or approaches a different timbre as it extends farther.
  • the sound can be synchronized such that when at least a portion of the image of interest moves outside the boundary, the sound appears to change with the movement.
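One way such distance-driven sound changes might be parameterized is sketched below. The mapping constants (volume step, pitch scaling) are arbitrary assumptions for illustration, not values from the patent.

```python
def sound_params(distance_outside, base_volume=0.5, base_pitch=440.0):
    """Map how far part of the target image has crossed the boundary
    (or the element's 3-D representation) to sound parameters: the
    sound starts playing on crossing, and grows louder and higher the
    farther the image extends outside."""
    if distance_outside <= 0:   # still inside the boundary: no sound
        return {"playing": False, "volume": base_volume, "pitch": base_pitch}
    return {
        "playing": True,
        "volume": min(1.0, base_volume + 0.1 * distance_outside),
        "pitch": base_pitch * (1.0 + 0.05 * distance_outside),
    }
```

A real implementation would feed these parameters into whatever audio engine the reproduction means 150 uses, updating them frame by frame as the pseudo three-dimensional image moves.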
  • the boundary can have any shape. For example, it may be a shape that surrounds the target image (e.g., spherical, elliptical, cylindrical, prismatic, etc.) or a shape that does not surround the target image (e.g., planar, curved, hemispherical, etc.).
  • the boundary may change over time or may not change over time.
  • the sound synchronized with the image by the synchronization means 140 is passed to the reproduction means 150.
  • the reproduction means 150 is configured to reproduce sound synchronized with the image while the display means 130 is displaying the image.
  • the reproduction means 150 can be any reproduction means as long as it can reproduce sound in time with the image being displayed.
  • the reproduction means 150 is, for example, a speaker.
  • the speaker may be built into the display means 130 or may be externally attached to the display means 130.
  • the systems 100, 100' described above may be implemented in the user equipment 200, for example.
  • FIG. 6C shows an example of the configuration of the user device 200.
  • the user device 200 can be any terminal device such as smart phones, tablet computers, smart glasses, smart watches, laptop computers, desktop computers, and the like.
  • the user device 200 includes a communication interface section 210 , an input section 220 , a display section 230 , a memory section 240 and a processor section 250 .
  • the communication interface unit 210 controls communication of the user device 200 with the outside.
  • the processor unit 250 of the user device 200 can receive information from outside the user device 200 via the communication interface unit 210 and can transmit information to the outside of the user device 200.
  • for example, the processor unit 250 of the user device 200 can receive images via the communication interface unit 210 and can transmit the pseudo three-dimensional image to the outside of the user device 200 via the communication interface unit 210.
  • Communication interface unit 210 may control communications in any manner.
  • the receiving means 110 of the system 100 can be implemented by the communication interface section 210.
  • the input unit 220 allows the user to input information into the user device 200 . It does not matter in what manner the input unit 220 allows the user to input information into the user device 200 . For example, if the input unit 220 is a touch panel, the user may input information by touching the touch panel. Alternatively, if the input unit 220 is a mouse, the user may input information by operating the mouse. Alternatively, if the input unit 220 is a keyboard, the user may input information by pressing keys on the keyboard. Alternatively, if the input unit 220 is a microphone, the user may input information by voice.
  • the display unit 230 can be any display for displaying information.
  • the display means 130 of the system 100 may be implemented by the display unit 230.
  • the memory unit 240 stores programs for executing processes in the user device 200 and data required for executing the programs.
  • the memory unit 240 stores, for example, part or all of a program for three-dimensionally displaying an image (for example, a program for realizing processing shown in FIGS. 7A and 7B described later).
  • the memory unit 240 may store, for example, part or all of a program for displaying an image on the display (for example, a program for realizing processing shown in FIG. 8, which will be described later).
  • the memory unit 240 may store applications that implement arbitrary functions. Here, it does not matter how the program is stored in the memory unit 240.
  • the program may be pre-installed in the memory unit 240.
  • the program may be installed in the memory unit 240 by being downloaded via the network 500.
  • the program may be stored on a computer-readable tangible storage medium.
  • Memory unit 240 may be implemented by any storage means.
  • the processor unit 250 controls the operation of the user device 200 as a whole.
  • the processor unit 250 reads a program stored in the memory unit 240 and executes the program. This allows the user device 200 to function as a device that executes desired steps.
  • the processor unit 250 may be implemented by a single processor or may be implemented by multiple processors.
  • the creating means 120 of the system 100 may be implemented by the processor unit 250.
  • the synchronization means 140 of system 100 may be implemented by processor portion 250 .
  • the user device 200 can include, for example, a detector configured to detect the position of the user positioned in front of the display 230 .
  • the detector can be any sensor.
  • the detector can be, for example, a camera.
  • the detection means of the system 100 may be implemented by the detection unit.
  • the user device 200 may include, for example, a reproduction unit (not shown) for reproducing sound.
  • the reproduction unit can be any speaker for reproducing sound.
  • the reproducing means 150 of the system 100 may be implemented by a reproducing section.
  • although each component of the user device 200 is provided within the user device 200 in the example shown in FIG. 6C, the present invention is not limited to this. Any of the components of the user device 200 may be provided external to the user device 200.
  • the display unit 230 can be provided outside the user device 200 (that is, the display unit 230 is an external display).
  • each hardware component may be connected via an arbitrary network. At this time, the type of network does not matter.
  • Each hardware component may be connected via a LAN, wirelessly, or wired, for example.
  • User device 200 is not limited to a particular hardware configuration. For example, it is within the scope of the present invention to configure the processor section 250 with analog circuits instead of digital circuits. The configuration of user device 200 is not limited to that described above as long as its functions can be realized.
  • the components of the system 100 may be provided on the user device 200 side as described above, or may be distributed between the user device 200 and a server device. When the components of the system 100 are distributed between the user device 200 and the server device, for example, the user device 200 comprises the display means 130 (and the reproducing means 150), and the server device comprises the receiving means 110 and the creating means 120 (and the synchronization means 140).
  • FIG. 7A shows an example of processing 700 by system 100 for three-dimensional display of images.
  • the case where the system 100 is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200 will be described as an example.
  • in this case, the processor unit 250 may implement the creating means 120.
  • in step S701, the processor unit 250 receives an image.
  • the image contains the image of the object.
  • the processor unit 250 can receive, for example, an image transmitted from outside the system 100 via the communication interface unit 210.
  • in step S702, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S701.
  • the processor unit 250 can create a pseudo-three-dimensional image by processing the image so as to add, within the image, a three-dimensional representation of an element separate from the target image (see, for example, FIGS. 1B-2D).
  • the three-dimensional representation of the element includes, for example, at least one of shading the element, adding light and shade to the element, giving the element a difference in size, or giving the element perspective.
  • the processing by the processor unit 250 may be image processing known in the art.
  • the processor unit 250 can create a pseudo-three-dimensional moving image, for example, by rotating the three-dimensional representation of the element added in the image around the target image.
  • a pseudo-three-dimensional moving image is preferable in that it enhances the pseudo-three-dimensional effect of the target image.
  • the processor unit 250 may add the three-dimensional representation of the element such that, for example, a part of the three-dimensional representation is superimposed on the target image so that that part of the target image is hidden by the three-dimensional representation of the element, and another part of the three-dimensional representation is superimposed under the target image so that that other part of the three-dimensional representation is hidden by the target image. This can further enhance the pseudo-three-dimensional effect of the target image.
  • the processor unit 250 can add, for example, a three-dimensional representation of multiple horizontal scan lines onto the image of interest.
  • the three-dimensional representation of the plurality of horizontal scanlines can be scanlines drawn along the three-dimensional outline of the object.
  • the processor unit 250 can determine the three-dimensional contour shape of the object based on the three-dimensional information contained in the image or derived from the image.
  • the process of deriving three-dimensional information from an image can be performed, for example, by techniques known in the art.
  • for example, the process of deriving three-dimensional information from an image can be performed using an AI model capable of estimating depth information from an image.
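A minimal sketch of scan lines that follow the object's three-dimensional contour, assuming a depth map in [0, 1] has already been estimated by some model (the function and parameter names are hypothetical):

```python
def contour_scanlines(depth, num_lines=4, amplitude=4.0):
    """Compute horizontal scan lines whose vertical position is displaced by
    a row-major depth grid, so each line appears drawn along the object's
    3-D contour rather than as a straight row."""
    h, w = len(depth), len(depth[0])
    lines = []
    for k in range(num_lines):
        y0 = int((k + 0.5) * h / num_lines)  # nominal scan-line row
        # displace each point upward in proportion to local depth
        lines.append([(x, y0 - amplitude * depth[y0][x]) for x in range(w)])
    return lines

lines = contour_scanlines([[0.0, 0.5, 0.5, 0.0]] * 4, num_lines=2, amplitude=4.0)
```

Rendering these polylines over the target image gives the contour-following scan-line representation; where the depth is zero the line stays flat.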
  • the processor unit 250 may, for example, generate a plurality of images with different viewpoints from the image (see, for example, FIGS. 3A and 3B) and create a pseudo-three-dimensional moving image by combining the plurality of images with different viewpoints continuously in time.
  • the processor unit 250 can create a plurality of images with different viewpoints based on, for example, three-dimensional information contained in the images or three-dimensional information derived from the images.
  • a plurality of images from different viewpoints can be created using techniques known in the art. For example, the creation of a plurality of images from different viewpoints can be performed using an AI model capable of creating image pairs with parallax.
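A toy sketch of the viewpoint-shifting idea on a single row of pixels. The disparity values and the linear shift rule are illustrative assumptions: pixels with larger disparity (nearer to the viewer) move farther between views, and cycling through the views in time yields the pseudo-three-dimensional motion.

```python
def parallax_frames(disparity, shifts):
    """For each virtual viewpoint shift s, move every pixel of one image row
    horizontally by s * disparity[x]; nearer pixels (larger disparity) move
    more, which is the parallax cue between the generated views."""
    return [[round(x + s * disparity[x]) for x in range(len(disparity))]
            for s in shifts]

# Three viewpoints; played in sequence they form a short pseudo-3-D loop.
views = parallax_frames([0.0, 1.0, 2.0], shifts=[-1, 0, 1])
```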
  • in step S703, the processor unit 250 displays the pseudo-three-dimensional image created in step S702 on the display unit 230.
  • the processor unit 250 can display the pseudo-three-dimensional image created in step S702 on the display unit 230 as it is, for example.
  • alternatively, the processor unit 250 may display the pseudo-three-dimensional image on the display unit 230 by changing the orientation of the object in the pseudo-three-dimensional image according to the orientation of the display surface of the display unit 230.
  • thereby, an image corresponding to the orientation of the user with respect to the display unit 230 is displayed on the display unit 230.
  • the image displayed on the display unit 230 by changing the orientation of the object according to the orientation of the display surface is not limited to the pseudo-three-dimensional image created in step S702.
  • for example, when the image received in step S701 is a three-dimensional image,
  • the orientation of the object in the image received in step S701 can be changed and the image can be displayed on the display unit 230.
  • in this case, the object in the image appears to be a three-dimensional object, so the displayed image can result in a pseudo-three-dimensional image.
  • the process 700 may be distributed between the user device 200 and a server device.
  • for example, steps S701 and S702 can be performed by the server device,
  • and step S703 can be performed by the user device 200.
  • FIG. 7B shows an example of processing 710 by system 100' for displaying an image three-dimensionally.
  • hereinafter, the case where the system 100' is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200 will be described as an example.
  • in this case, the processor unit 250 may implement the creating means 120 and the synchronization means 140.
  • in step S711, the processor unit 250 receives an image.
  • Step S711 is the same as step S701.
  • in step S712, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S711.
  • Step S712 is similar to step S702.
  • in step S713, the processor unit 250 synchronizes a sound with the pseudo-three-dimensional image created in step S712.
  • Processor unit 250 can synchronize sound with images using any technique known in the art of motion picture production.
  • the processor unit 250 can synchronize the sounds such that when the synchronized sounds are played, they appear to change based on motion in the image.
  • for example, the processor unit 250 can synchronize the sound so that the sound changes according to the relationship between a boundary set in the pseudo-three-dimensional image and the target image.
  • the boundary may or may not be defined by the three-dimensional representation of the element in the pseudo-three-dimensional image.
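A hedged sketch of the boundary-dependent sound change. The specific volume and pitch mapping below is an illustrative assumption; the method only requires that the sound change according to the relationship between the target and the boundary.

```python
def sound_params(target_x, boundary_radius, base_volume=0.5):
    """Return playback parameters for the synchronized sound: unchanged while
    the target stays inside the boundary, and varying with the distance past
    the boundary once the target crosses it (assumed mapping)."""
    overshoot = abs(target_x) - boundary_radius
    if overshoot <= 0:
        return {"volume": base_volume, "pitch": 1.0}
    return {"volume": min(1.0, base_volume + 0.1 * overshoot),
            "pitch": 1.0 + 0.05 * overshoot}

inside = sound_params(0.0, boundary_radius=10.0)    # no change inside
outside = sound_params(15.0, boundary_radius=10.0)  # louder, higher outside
```

Evaluating this per frame of the pseudo-three-dimensional moving image ties the auditory change to the target's motion relative to the boundary.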
  • in step S714, the processor unit 250 displays the pseudo-three-dimensional image created in step S712 on the display unit 230.
  • Step S714 is similar to step S703.
  • in step S715, the processor unit 250 reproduces the sound synchronized in step S713 from the reproduction unit while the pseudo-three-dimensional image is being displayed in step S714.
  • in the process 710, the auditory pseudo-three-dimensional effect of the sound reproduced in time with the motion in the pseudo-three-dimensional image is added, thereby further emphasizing the pseudo-three-dimensional effect of the image.
  • the process 710 may be distributed between the user device 200 and a server device.
  • for example, steps S711 to S713 can be performed by the server device,
  • and steps S714 and S715 can be performed by the user device 200.
  • alternatively, step S712 may be omitted; in this case, a sound is synchronized in step S713 with the image received in step S711, and the image received in step S711 is displayed in step S714.
  • FIG. 8 shows an example of a process 800 for displaying an image on a display.
  • Process 800 can provide a virtual reality image to a user without using a dedicated display device (eg, VR goggles, head-mounted display, etc.).
  • Process 800 is performed by processor unit 250 of user device 200, for example.
  • in step S801, the detection unit of the user device 200 detects the position of the user's viewpoint with respect to the display.
  • the detection unit can detect the position of the user's viewpoint by any detection means.
  • the detection unit can detect the position of the user's viewpoint based on the image captured by the camera.
  • the position of the user's viewpoint may be, for example, the position of the user's eyes (more specifically, for example, the midpoint between the user's eyes).
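Following the midpoint definition above, a minimal sketch of the viewpoint computation; the eye coordinates themselves would come from a face-detection step on the camera image, which is assumed here rather than shown:

```python
def viewpoint_from_eyes(left_eye, right_eye):
    """Take the user's viewpoint as the midpoint between the two detected
    eye positions (coordinates in any consistent camera-space units)."""
    return tuple((a + b) / 2 for a, b in zip(left_eye, right_eye))

# e.g. eyes detected at x = 100 and x = 160, both 600 units from the display
viewpoint = viewpoint_from_eyes((100, 40, 600), (160, 40, 600))
```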
  • in step S802, the processor unit 250 of the user device 200 receives an image to be displayed as a virtual reality image and processes the image to determine the portion of the image to be displayed on the display. Determining the portion of the image to be displayed on the display can be performed, for example, by steps S8021 to S8023 below.
  • in step S8021, the processor unit 250 sets a virtual sphere.
  • the virtual sphere is a sphere whose center is the user's viewpoint and whose radius is the distance between the user's viewpoint and the display. For example, when the distance between the user's viewpoint and the display is small, as shown in FIG. 5B, the diameter of the virtual sphere is small, while when the distance between the user's viewpoint and the display is large, as shown in FIG. 5C, the diameter of the virtual sphere is large.
  • in step S8022, the processor unit 250 pastes the image to be displayed as a virtual reality image onto the inner surface of the virtual sphere set in step S8021.
  • the processor unit 250 can apply an image to the inner surface of the sphere by any processing known in the field of image processing.
  • the image is preferably represented by the equirectangular projection method. This is because an image represented by the equirectangular projection can be pasted on the inner surface of the sphere without distortion.
  • in step S8023, the processor unit 250 identifies the portion of the image pasted on the portion of the inner surface of the virtual sphere that corresponds to the display surface of the display.
  • the portion of the inner surface of the virtual sphere that corresponds to the display surface of the display is the portion that overlaps the display surface of the display when the virtual sphere is virtually arranged around the user's viewpoint.
  • the processor unit 250 can, for example, derive the inner surface portion of the virtual sphere corresponding to the display surface of the display from the relative positional relationship between the user and the display. Then, the processor unit 250 can specify the portion of the image from the relationship between the derived portion and the image pasted on the virtual sphere.
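A sketch of the geometry behind this mapping, under the usual equirectangular convention (the axis choice and coordinate names are illustrative assumptions). A display pixel is mapped through the virtual sphere to equirectangular image coordinates: the ray from the viewpoint to that pixel hits the sphere in the same direction, so only the direction matters, not the sphere's radius.

```python
import math

def equirect_uv(view, display_point):
    """Map a 3-D point on the display surface, seen from `view`, to (u, v)
    coordinates in [0, 1] of an equirectangular image pasted on the sphere."""
    dx, dy, dz = (d - v for d, v in zip(display_point, view))
    lon = math.atan2(dx, dz)                  # yaw around the vertical axis
    lat = math.atan2(dy, math.hypot(dx, dz))  # pitch above the horizon
    u = 0.5 + lon / (2 * math.pi)             # lon in [-pi, pi]   -> [0, 1]
    v = 0.5 - lat / math.pi                   # lat in [-pi/2, pi/2] -> [0, 1]
    return u, v
```

Sampling `equirect_uv` at the corners (or every pixel) of the display surface yields the portion of the pasted image that overlaps the display; looking straight ahead maps to the image center.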
  • when the portion of the image to be displayed on the display is thus determined, the process proceeds to step S803.
  • in step S803, the portion of the image determined in step S802 is displayed on the display surface of the display.
  • the image displayed by the process 800 maintains the perspective perceived by the user. That is, objects that are far away in the virtual reality image still appear far away whether the user moves closer to or farther from the display, and objects that are close in the virtual reality image still appear close whether the user moves closer to or farther from the display. In this way, the user U can view the image through the display with a sense of the real world.
  • the display may be a dedicated display device (e.g., VR goggles, a head-mounted display, etc.), but it may also be a general stationary display or the above-described rotary display 20, 25, or 27. That is, the user can view a natural virtual reality image without wearing a dedicated display device.
  • although it has been described that the processing of each step shown in FIGS. 7A and 7B and part of the processing shown in FIG. 8 are realized by the processor unit 250 and a program stored in the memory unit 240, the present invention is not limited to this. At least one of the processing of each step shown in FIGS. 7A and 7B and part of the processing shown in FIG. 8 may be realized by a hardware configuration such as a control circuit.
  • the present invention is useful in that it can provide a method and the like capable of creating a pseudo-three-dimensional image in order to three-dimensionally display an image.

Abstract

The purpose of the present invention is to provide a method or the like capable of creating a pseudo three-dimensional image in order to display an image three-dimensionally. The present invention provides a method for displaying an image three-dimensionally, the method comprising: receiving an image including a target image; creating a pseudo three-dimensional image with a pseudo three-dimensional effect by adding, in the image, three-dimensional representation of elements other than the target image, by processing the image; and displaying the pseudo three-dimensional image.

Description

Method, program and system for displaying images three-dimensionally
 The present invention relates to a method, program and system for displaying images three-dimensionally.
 When an image is displayed on a general display device, the image is displayed two-dimensionally. This is because the display surface of the display device is flat.
 Special devices have been developed for displaying images three-dimensionally (for example, Patent Document 1).
Japanese translation of PCT publication No. 2010-513970
 An object of the present invention is to provide a method or the like capable of creating a pseudo-three-dimensional image in order to display an image three-dimensionally.
The present invention provides the following items.
(Item 1)
A method for displaying an image three-dimensionally, comprising:
receiving an image containing an image of interest;
processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding a three-dimensional representation of elements in the image that are separate from the image of interest;
and displaying the simulated three-dimensional image.
(Item 2)
Item 1, wherein creating the pseudo-three-dimensional image includes creating a pseudo-three-dimensional animation as the pseudo-three-dimensional image by rotating the three-dimensional representation of the element around the image of the object. described method.
(Item 3)
a portion of the three-dimensional representation of the element is superimposed on the image of the object such that the portion of the image of the object is hidden by the three-dimensional representation of the element; is superimposed under the image of interest and the other portion of the three-dimensional representation of the element is obscured by the image of interest. 2. The method of item 1 or item 2.
(Item 4)
the element includes a plurality of horizontal scan lines;
4. Any one of items 1-3, wherein adding a three-dimensional representation of the element within the image includes adding a three-dimensional representation of the plurality of horizontal scan lines onto the image of the object. The method described in section.
(Item 5)
Creating the pseudo three-dimensional image includes generating a plurality of images with different viewpoints from the image, and temporally successively combining the plurality of images with different viewpoints to form the pseudo three-dimensional image. 5. The method according to any one of items 1 to 4, comprising creating a simulated three-dimensional animation.
(Item 6)
The pseudo three-dimensional image is a pseudo three-dimensional video,
The method includes:
synchronizing sound with the simulated three-dimensional image;
6. The method of any one of items 1-5, further comprising: playing said synchronized sound while displaying said simulated three-dimensional image.
(Item 7)
7. Method according to item 6, wherein the sound changes based on movement in the image.
(Item 8)
8. The method of item 6 or item 7, wherein the sound changes in response to the image of the object exceeding a boundary defined around the image of the object.
(Item 9)
9. The method of item 8, wherein the sound changes in response to movement of the object outside the boundary.
(Item 10)
10. Method according to item 8 or item 9, wherein the boundary is defined by a three-dimensional representation of the elements arranged around the image of the object.
(Item 11)
11. The method of any one of items 7-10, wherein the change in sound includes a change in at least one of loudness, pitch and timbre of the sound.
(Item 12)
displaying the simulated three-dimensional image includes displaying the simulated three-dimensional image on a rotating display in which at least one member rotates about a first axis to form a planar display surface; The method according to any one of items 1-11.
(Item 13)
The rotary display is configured such that the orientation of the display surface can be changed,
The method includes:
detecting a user's position relative to the rotating display;
13. The method of item 12, comprising: reorienting the display surface based on the detected position.
(Item 14)
Displaying the pseudo three-dimensional image includes changing the orientation of the object in the pseudo three-dimensional image based on the orientation of the display surface and displaying the pseudo three-dimensional image on the display surface. 8. The method of item 7, comprising
(Item 15)
Displaying the pseudo-three-dimensional image includes rotating at least one member about a first axis and about a second axis substantially perpendicular to the first axis to form a substantially spherical display surface. 12. The method of any one of items 1-11, comprising displaying the simulated three-dimensional image on a rotating display.
(Item 16)
16. A method according to any one of items 12 to 15, wherein the rotatable display comprises a plurality of members each rotating to form a plurality of display surfaces.
(Item 17)
A program for displaying an image three-dimensionally, said program being executed in a computer system comprising a processor and a display unit, said program comprising:
receiving an image containing an image of interest;
processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding a three-dimensional representation of elements in the image that are separate from the image of interest;
A program causing the processor to perform processing including: displaying the pseudo three-dimensional image on the display unit.
(Item 17A)
18. Program according to item 17, including features according to one or more of the above items.
(Item 18)
A system for three-dimensionally displaying an image, comprising:
receiving means for receiving an image including an image of interest;
creating means for creating, by processing the image, a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding a three-dimensional representation of elements in the image that are separate from the image of interest;
and display means for displaying the pseudo three-dimensional image.
(Item 18A)
19. The system of item 18, including the features of one or more of the above items.
(Item 18B)
A storage medium storing a program for three-dimensionally displaying an image, the program being executed in a computer system comprising a processor and a display unit, the program comprising:
receiving an image containing an image of interest;
processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding a three-dimensional representation of elements in the image that are separate from the image of interest;
A storage medium causing the processor to perform processing including: displaying the pseudo three-dimensional image on the display unit.
(Item 18C)
A storage medium according to item 18B, comprising features according to one or more of the above items.
(Item 19)
A method for displaying an image three-dimensionally, comprising:
receiving an image;
synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image;
and playing said synchronized sound while displaying said image.
(Item 19A)
20. Method according to item 19, including features according to one or more of the above items.
(Item 20)
A program for three-dimensionally displaying an image, said program being executed in a computer system comprising a processor, a display unit, and a sound output unit, said program comprising:
receiving an image;
synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image on the display;
and reproducing the synchronized sound from the sound output unit when the image is displayed.
(Item 20A)
21. Program according to item 20, including features according to one or more of the above items.
(Item 21)
A system for three-dimensionally displaying an image, comprising:
a receiving means for receiving an image;
synchronization means for synchronizing sound with said image, said sound varying in response to movement in said image; display means for displaying said image;
reproduction means for reproducing said synchronized sound when displaying said image.
(Item 21A)
22. The system of item 21, including the features of one or more of the above items.
(Item 22)
A method of displaying an image on a display, comprising:
detecting the position of a user's viewpoint with respect to the display;
Determining the portion of the image to be displayed on the display by processing the image, comprising:
setting a virtual sphere centered at the user's viewpoint and having a radius equal to the distance between the user's viewpoint and the display;
pasting the image onto the inner surface of the virtual sphere;
identifying a portion of the image pasted on a portion of the inner surface of the phantom sphere corresponding to the display surface of the display;
including
and displaying the determined portion of the image on the display surface of the display.
(Item 23)
23. The method of item 22, wherein the image is represented in an equirectangular projection.
(Item 24)
24. The method of item 22 or item 23, wherein the display is a stationary display.
(Item 25)
25. The method of item 24, wherein the display is a rotating display in which at least one member rotates to form a viewing surface.
 According to the present invention, a pseudo-three-dimensional image can be created from the image to be displayed, so that, for example, even an image for which no three-dimensional model exists can be represented three-dimensionally.
A diagram showing how the image 10 is displayed.
A diagram showing an example in which the target image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to the technique of one embodiment of the present invention.
Diagrams showing examples in which, in the pseudo-three-dimensional image 20 shown in FIG. 1B, the perspective of the target image 11 is further enhanced to represent the target image 11 three-dimensionally.
Diagrams showing the process in which the target image 11 moves, and how the target image 11 moves.
Diagrams showing the process in which the cube 12 moves, and how the cube 12 moves.
A diagram showing an example in which the target image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to the technique of another embodiment of the present invention.
A diagram showing an example in which, in the pseudo-three-dimensional image 30 shown in FIG. 2A, the perspective of the target image 11 is further enhanced to represent the target image 11 three-dimensionally.
Diagrams showing the pseudo-three-dimensional images 20' and 20''.
Diagrams showing further examples in which the target image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to the techniques of other embodiments of the present invention.
A diagram showing an example of an image display device for displaying a pseudo-three-dimensional image.
A diagram showing an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation of the display surface shown in FIG. 4A.
A diagram showing a state in which the user U has moved around the rotary display 20 in the direction of the arrow.
A diagram showing an example of an image displayed on the display surface 23 facing the user position shown in FIG. 4C.
A diagram showing how an image is displayed on the display surface 23 of the rotary display 20.
A diagram showing the rotary display 25 in one embodiment of the present invention.
A diagram showing the rotary display 27 in another embodiment of the present invention.
A diagram schematically showing an example of the flow of the technique in one embodiment of the present invention.
Diagrams showing the cases where the distance between the viewpoint of the user U and the display 20 is small and where it is large.
A diagram showing an example of the configuration of a system 100 for displaying an image three-dimensionally.
A diagram showing an example of the configuration of a system 100' for displaying an image three-dimensionally in another embodiment.
A diagram showing an example of the configuration of the user device 200.
A diagram showing an example of a process 700 by the system 100 for displaying an image three-dimensionally.
A diagram showing an example of a process 710 by the system 100' for displaying an image three-dimensionally.
A diagram showing an example of a process 800 for displaying an image on a display.
 以下、本発明を説明する。本明細書において使用される用語は、特に言及しない限り、当該分野で通常用いられる意味で用いられることが理解されるべきである。したがって、他に定義されない限り、本明細書中で使用される全ての専門用語および科学技術用語は、本発明の属する分野の当業者によって一般的に理解されるのと同じ意味を有する。矛盾する場合、本明細書(定義を含めて)が優先する。 The present invention will be described below. It should be understood that the terms used herein have the meanings commonly used in the art unless otherwise specified. Thus, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. In case of conflict, the present specification (including definitions) will control.
In this specification, an "image" refers to an image that can be displayed on a two-dimensional plane. An image includes not only a "two-dimensional image" containing two-dimensional information (height x width) but also a "three-dimensional image" containing three-dimensional information (height x width x depth). A "three-dimensional image" can be acquired, for example, using an RGB-D camera. A "three-dimensional image" can also be obtained, for example, by performing a process of estimating depth information for a two-dimensional image and adding the depth information to the two-dimensional image. Images include still images and moving images. A moving image is regarded as a plurality of temporally consecutive still images.
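The "three-dimensional image" described above (a two-dimensional image augmented with per-pixel depth) can be sketched minimally as an RGB array stacked with a depth channel. This is an illustrative sketch, not part of the disclosure; the depth map might come from an RGB-D camera or, as an assumption, from any monocular depth-estimation model.

```python
import numpy as np

def attach_depth(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an H x W x 3 RGB image with an H x W depth map into an
    H x W x 4 RGB-D array -- the kind of "three-dimensional image"
    described above. Where the depth values come from (RGB-D camera or
    depth estimation) is left open; this only combines them."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("RGB and depth dimensions must match")
    return np.dstack([rgb, depth])
```

For example, a 2x3 image with a constant depth of 1 yields a 2x3x4 array whose fourth channel is the depth.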
In this specification, "three-dimensional-like" refers to something that, while not three-dimensional, appears as if it were three-dimensional.
In this specification, "displaying three-dimensionally" means displaying something that is not three-dimensional (for example, something in a two-dimensional plane) as if it were three-dimensional.
In this specification, a "three-dimensional representation" means a representation of something that is not three-dimensional (for example, something in a two-dimensional plane) as if it were three-dimensional. For example, a three-dimensional representation includes at least one of: a representation made to appear three-dimensional by adding shading, by adding light and dark, by adding parallax, by adding differences in size, or by adding perspective.
In this specification, the "pseudo three-dimensional effect" refers to the effect of appearing three-dimensional due to a visual (optical) illusion. The degree of the pseudo three-dimensional effect may vary depending on the viewer's perception. A "pseudo three-dimensional image" refers to an image that produces a "pseudo three-dimensional effect".
In this specification, an "object" refers to any object appearing in an image. An object may be, for example, animate or inanimate. An object may be, for example, a human, an animal, or a plant.
As used herein, "about" means ±10% of the numerical value that follows.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
1.1 Technique for Representing an Image Three-Dimensionally
The inventor of the present invention has developed a technique for three-dimensionally representing an image (a two-dimensional image) displayed on a flat display. With this technique, the image displayed on the flat display is displayed three-dimensionally, and a person viewing the flat display may have the illusion that the image displayed on the flat display is displayed in three-dimensional space. An image that produces such an illusory effect is a pseudo three-dimensional image. With this technique, an image can be represented three-dimensionally even when no three-dimensional model of the image exists.
FIG. 1A shows the image 10 being displayed. The image 10 includes an image 11 of an object (in this example, a person).
Since the image 10 is displayed on a flat display, the object image 11 appears two-dimensional, or flat.
FIG. 1B shows an example in which the object image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to a technique of one embodiment of the present invention.
In the pseudo three-dimensional image 20 shown in FIG. 1B, an element distinct from the object image 11 (in this example, a cube 12) has been added relative to the original image 10. The cube 12 is added to the image in a three-dimensional representation. For example, as shown in FIG. 1B, the cube 12 is rendered so that its edges closer to the viewer are thick and its edges farther from the viewer are thin. This gives the cube 12 a sense of perspective. In the pseudo three-dimensional image 20, the perspective of the cube 12 also lends perspective to the object image 11, so that the object image 11 can appear three-dimensional.
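The near-thick/far-thin rendering of the cube's edges can be sketched as a stroke width that falls off with depth. This is a minimal illustrative sketch; the specific z_near and w_near values are assumptions, not values given in the disclosure.

```python
def stroke_width(z: float, z_near: float = 1.0, w_near: float = 6.0) -> float:
    """Width (in pixels) for drawing a cube edge whose midpoint lies at
    depth z: edges near the viewer (small z) are drawn thick, distant
    edges thin, producing the perspective cue described above.
    z_near and w_near are illustrative assumptions."""
    if z <= 0:
        raise ValueError("depth must be positive")
    return w_near * z_near / z
```

An edge twice as far from the viewer is drawn half as thick, which is what makes the cube read as receding in depth.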
Furthermore, the cube 12 is arranged around the object image 11 and overlaps it in two ways: part of the cube 12 hides part of the object image 11, and part of the cube 12 is hidden by part of the object image 11. This enhances the perspective of the cube 12 and, in turn, can enhance the perspective of the object image 11.
Thus, in the pseudo three-dimensional image 20, although the object image 11 itself is a two-dimensional image, the presence of the cube 12 makes it easy to perceive the object image 11 as being represented three-dimensionally.
FIGS. 1C and 1D show examples in which the perspective of the object image 11 in the pseudo three-dimensional image 20 shown in FIG. 1B is further enhanced to represent the object image 11 three-dimensionally. In the examples shown in FIGS. 1C and 1D, the pseudo three-dimensional image 20 is a moving image, and the images shown in FIGS. 1C and 1D can be regarded as frames of the moving image.
In FIGS. 1C and 1D, the cube 12 is rotated about an axis. The axis is, for example, the central axis passing through the top and bottom faces of the cube 12. As shown in FIGS. 1C and 1D, as the cube 12 rotates, which edges are near the viewer and which are far from the viewer changes, and accordingly which edges are rendered thick and which thin changes. This further enhances the perspective of the rotating cube 12. In the image, the enhanced perspective of the cube 12 also enhances the perspective of the object image 11, which can make the object image 11 appear even more three-dimensional.
Furthermore, the cube 12 is rotated around the object image 11. As the cube 12 rotates, the part of the cube 12 that hides part of the object image 11 and the part of the cube 12 that is hidden by part of the object image 11 change. This further enhances the perspective of the cube 12 and, in turn, can further enhance the perspective of the object image 11.
Thus, in the pseudo three-dimensional image 20, although the object image 11 itself is a two-dimensional image, the presence of the rotating cube 12 makes it even easier to perceive the object image 11 as being represented three-dimensionally.
The axis may be any axis. Preferably, the axis is one whose rotation enhances the perspective of the cube 12. The axis may be, for example, a central axis passing through the side faces of the cube 12, an axis passing through at least one face of the cube 12 (a central axis or an off-center axis), or an axis outside the cube 12.
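The rotation of the cube about a vertical central axis, which swaps the near and far edges frame by frame, can be sketched as a standard rotation of each cube vertex about the y axis. This is an illustrative sketch of the geometry only, under the assumption of a y-up coordinate frame.

```python
import math

def rotate_y(vertex, angle):
    """Rotate a 3-D cube vertex about the vertical (y) axis. As the cube
    spins, each vertex's depth z changes, so which edges count as "near"
    (drawn thick) and "far" (drawn thin) trades places over time."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)
```

A quarter turn moves a vertex at (1, 0, 0) to roughly (0, 0, -1): a point that was beside the viewer is now the farthest point, so its edges would be drawn thinner.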
In the example described above, the image is represented three-dimensionally through a visual effect produced by an element distinct from the object image 11. In one embodiment of the present invention, however, the image can be represented three-dimensionally through an auditory effect, in addition to or instead of the visual effect. The flat display in this embodiment is assumed to include or be connected to a speaker.
For example, suppose the image 10 is a moving image showing the object in motion; the image 10 then includes a moving object image 11. In the pseudo three-dimensional image 20 generated from such an image 10, suppose the object image 11 moves from the state shown in FIG. 1E to the state shown in FIG. 1F. This movement makes the image of the object's arm appear to extend outside the cube 12. In the course of the object image changing from the state of FIG. 1E to the state of FIG. 1F, a sound can be played from the moment the arm image touches the cube 12, and the sound can be varied as the arm image extends farther from the cube 12. The sound may be played from the speaker. The sound played at the moment the arm image touches the cube 12 creates the illusion that the cube 12 exists in three-dimensional space, and varying the sound according to the distance between the cube 12 and the arm image can create the illusion of a three-dimensional expanse of space. The variation in the sound may be, for example, at least one of loudness, pitch, and timbre. For example, the sound may grow louder as the arm image extends farther from the cube 12. Alternatively, for example, the sound may grow softer as the arm image extends farther from the cube 12, the sound may grow higher or lower as the arm image extends farther from the cube 12, or the sound may approach a different timbre as the arm image extends farther from the cube 12. Such an auditory effect emphasizes the presence of the cube 12 and, in turn, further emphasizes the perspective of the object image 11.
Furthermore, in addition to or instead of the movement of the arm image away from the cube 12, the sound may be varied in response to other movements of the object image outside the cube 12. For example, as shown in FIG. 1G, when the arm image moves up and down in an arc outside the cube 12 in the direction of the arrow, the sound can be varied to match that movement. For example, the sound can grow louder (or softer) as the arm image arcs downward outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves upward. Alternatively, for example, the sound can grow higher (or lower) as the arm image arcs downward and lower (or higher) as it moves upward, or the sound can approach a first timbre as the arm image arcs downward and a second timbre as it moves upward. Such an auditory effect likewise emphasizes the presence of the cube 12 and, in turn, further emphasizes the perspective of the object image 11.
In this way, by playing and varying a sound according to the relationship between the cube 12 and at least part of the object image 11, the presence of the cube 12 can be emphasized, making it easier to perceive the object image 11 as being represented three-dimensionally. The relationship between the cube 12 and at least part of the object image 11 may be, for example, the distance between the cube 12 and part of the object image 11 (that is, a static relationship) as described above, the movement of part of the object image 11 relative to the cube 12 (that is, a dynamic relationship), or another relationship.
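One way to realize the distance-dependent sound described above is a simple mapping from "how far part of the object image extends beyond the boundary" to playback parameters. This is a minimal sketch under assumed constants; the disclosure permits the opposite mappings (softer/lower with distance) and timbre changes as well.

```python
def boundary_sound(distance_outside: float,
                   base_gain: float = 0.2, base_pitch_hz: float = 440.0):
    """Map the distance by which part of the object image extends beyond
    the boundary (e.g. the cube 12) to playback parameters: silence while
    inside, then louder and higher-pitched the farther outside it reaches.
    The specific mapping and constants are illustrative assumptions."""
    if distance_outside <= 0.0:        # still inside the boundary: no sound
        return None
    gain = min(1.0, base_gain + 0.1 * distance_outside)
    pitch_hz = base_pitch_hz * (1.0 + 0.05 * distance_outside)
    return {"gain": gain, "pitch_hz": pitch_hz}
```

The sound starts exactly at the moment of contact (distance zero going positive) and then changes continuously with distance, which is what produces the illusion of a spatial extent around the cube.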
In the example above, the relationship with the cube 12 changes because the object image 11 moves, but the present invention is not limited to this. For example, as described above with reference to FIGS. 1C and 1D, the cube 12 may move, the relationship between the cube 12 and at least part of the object image 11 changing accordingly, and a sound may be played and varied according to that relationship.
For example, in FIGS. 1H to 1J, the cube 12 is rotated about an axis, as in FIGS. 1C and 1D. In FIG. 1H, the object image 11 is contained within the cube 12. Therefore, no sound is played from the speaker.
When the cube 12 is rotated about the axis, the image of the object's arm comes to appear outside the cube 12, as shown in FIG. 1I. In the course of the cube 12 changing from the state of FIG. 1H to the state of FIG. 1I, a sound can be played from the moment the cube 12 touches the arm image, and the sound can be varied as the cube 12 moves farther from the arm image. The sound may be played from the speaker. The sound played at the moment the cube 12 touches the arm image creates the illusion that the cube 12 exists in three-dimensional space, and varying the sound according to the distance between the cube 12 and the arm image can create the illusion of a three-dimensional expanse of space. The variation in the sound may be, for example, at least one of loudness, pitch, and timbre. For example, the sound may grow louder as the cube 12 moves farther from the arm image. Alternatively, for example, the sound may grow softer as the cube 12 moves farther from the arm image, the sound may grow higher or lower as the cube 12 moves farther from the arm image, or the sound may approach a different timbre as the cube 12 moves farther from the arm image. Such an auditory effect emphasizes the presence of the cube 12 and, in turn, further emphasizes the perspective of the object image 11.
Furthermore, in addition to the movement of the cube 12 away from the arm image, the sound may be varied in response to movements of the object image outside the cube 12. For example, as shown in FIG. 1J, when the arm image moves up and down in an arc outside the cube 12 in the direction of the arrow, and/or when the arm image moves left and right outside the cube 12 in the direction of the arrow, the sound can be varied to match that movement. For example, the sound can grow louder (or softer) as the arm image arcs downward outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves upward. Alternatively, for example, the sound can grow higher (or lower) as the arm image arcs downward and lower (or higher) as it moves upward, or the sound can approach a first timbre as the arm image arcs downward and a second timbre as it moves upward. For example, the sound may grow louder or softer as the arm image extends farther from the cube 12, the sound may grow higher or lower as the arm image extends farther from the cube 12, or the sound may approach a different timbre as the arm image extends farther from the cube 12. Such an auditory effect likewise emphasizes the presence of the cube 12 and, in turn, further emphasizes the perspective of the object image 11.
In the examples above, with the cube 12 as a boundary, the sound is varied as part of the object image extends farther beyond the boundary, or in response to part of the object image moving outside the boundary; however, the boundary is not limited to this. The boundary may be any boundary as long as it is defined near the object image, for example around it. For example, the boundary may be visible, like the cube 12, or it may be invisible. If the boundary is invisible, the image is represented three-dimensionally by the auditory effect alone, without relying on a visual effect.
The boundary may have any shape. For example, it may be a shape that encloses the object image (for example, a sphere, an ellipsoid, a cylinder, or a prism) or a shape that does not enclose it (for example, a plane, a curved surface, or a hemisphere). The boundary may or may not change over time. For example, as in the example above in which the boundary is represented by the cube 12, the boundary may rotate about an axis over time.
The sound that is played and varied may be any sound. The sound may be, for example, a sound directly related to the image, a sound somewhat related to the image, or a sound unrelated to the image. Preferably, the sound may be a sound unrelated to the image, and more preferably a sound directly related to the image. A sound directly related to the image may be, for example, the sound that was synchronized with the original image 10 (for example, when the original image 10 was a still image with sound or a moving image with sound). A sound somewhat related to the image may be, for example, a sound associated with the image (for example, the flapping or call of a bird for an image of a bird, or the running sound or horn of a car for an image of a car). Even when a sound was synchronized with the original image 10, the sound may be a sound other than that synchronized sound.
In the examples described above, the image is represented three-dimensionally by visual and/or auditory effects; however, it is also possible to represent the image three-dimensionally by an effect on at least one of the five human senses (for example, an olfactory effect).
FIG. 2A shows an example in which the object image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to a technique of another embodiment of the present invention.
In the pseudo three-dimensional image 30 shown in FIG. 2A, an element distinct from the object image 11 (in this example, horizontal scanning lines 13) has been added relative to the original image 10 and displayed. The horizontal scanning lines 13 are superimposed on the object image 11. The horizontal scanning lines 13 are added so as to express the contour shape of the object, based on three-dimensional information contained in the image 10 or derived from the image. For example, as shown in FIG. 2A, the horizontal scanning lines 13 are rendered so as to curve along the curved surface of the object's face and along the contours of the object's nose. In the pseudo three-dimensional image 30, the horizontal scanning lines 13 express the contour shape of the object, giving the object image 11 a sense of perspective, so that the object image 11 can appear three-dimensional.
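The contour-hugging scan lines can be sketched by displacing each nominal horizontal line by an amount proportional to the local depth, so the line bends where the face or nose protrudes. This is an illustrative sketch; the spacing and gain values are assumptions, and the disclosure does not prescribe a specific displacement rule.

```python
import numpy as np

def scan_line_rows(depth: np.ndarray, spacing: int = 8, gain: float = 4.0):
    """For each nominal horizontal scan line at row y0, shift the row
    index at each column by an amount proportional to the local depth,
    so the drawn line curves along the object's contours (face, nose).
    Returns one array of per-column row indices per scan line.
    spacing and gain are illustrative assumptions."""
    h, _w = depth.shape
    lines = []
    for y0 in range(0, h, spacing):
        offsets = np.round(gain * depth[y0]).astype(int)  # per-column shift
        lines.append(np.clip(y0 + offsets, 0, h - 1))
    return lines
```

On a flat (zero-depth) region the lines stay straight; wherever the depth map bulges, the corresponding columns of the scan line are displaced, producing the curved lines seen in FIG. 2A.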
Thus, in the pseudo three-dimensional image 30, although the object image 11 itself is a two-dimensional image, the presence of the horizontal scanning lines 13 makes it easy to perceive the object image 11 as being represented three-dimensionally.
FIG. 2B shows an example in which the perspective of the object image 11 in the pseudo three-dimensional image 30 shown in FIG. 2A is further enhanced to represent the object image 11 three-dimensionally.
In FIG. 2B, the cube 12 described above with reference to FIG. 1B has been added around the object image 11. As described above, the perspective of the cube 12 also lends perspective to the object image 11, so that the object image 11 can appear three-dimensional.
The cube 12 can be rotated about an axis, as described above with reference to FIGS. 1C and 1D. The pseudo three-dimensional image 30 then becomes a moving image. Rotating the cube 12 about the axis can further enhance the perspective of the object image 11.
Furthermore, as described above with reference to FIGS. 1E to 1J, the presence of the cube 12 can also be emphasized by playing and varying a sound according to the relationship between the cube 12 and at least part of the object image 11, which can further enhance the perspective of the object image 11.
Thus, in the pseudo three-dimensional image 30, although the object image 11 itself is a two-dimensional image, the presence of the horizontal scanning lines 13, the presence of the cube 12 or of the rotating cube 12, and further the presence of a sound played and varied according to the relationship between the cube 12 and part of the object image 11 make it even easier to perceive the object image 11 as being represented three-dimensionally.
In the example described above, the element added to the image (in the example above, the cube 12) overlaps the object image 11; however, the added element does not necessarily have to overlap the object image 11. An element can be added at any position in the image as long as it produces a pseudo three-dimensional effect. For example, as shown in the pseudo three-dimensional image 20' of FIG. 2C, the added element can be arranged adjacent to the object image 11. Because the added element is represented three-dimensionally, some perspective arises in the image, and some perspective can also arise in the object image 11. This allows the object image 11 to appear three-dimensional.
In the example described above, one element (in the example above, the cube 12) is added to the image 10; however, the number of added elements is not limited to this. For example, as shown in the pseudo three-dimensional image 20'' of FIG. 2D, a plurality of elements 12 can be added to the image 10. The added elements may, for example, each be rotated about its own axis, or, as shown in the pseudo three-dimensional image 20'' of FIG. 2D, at least some of the elements may be rotated about a common axis. The presence of the added elements, or their rotation, creates perspective in the image, and perspective can also arise in the object image 11. This allows the object image 11 to appear three-dimensional.
By adding, in the pseudo three-dimensional image, elements that do not originally exist in the image 10 using the technique described above, the pseudo three-dimensional image gives a stronger impression of being a virtual image.
FIGS. 3A and 3B show an example in which the object image 11 in the image 10 shown in FIG. 1A is represented three-dimensionally according to a technique of another embodiment of the present invention. In the example shown in FIGS. 3A and 3B, the pseudo three-dimensional image is a moving image, and the still images 41 and 42 shown in FIGS. 3A and 3B can be regarded as frames of the moving image.
FIG. 3A shows a still image 41, created from the image 10 shown in FIG. 1A, in which the object is viewed from a first line-of-sight direction. Techniques for creating, from a given image, an image with a different line-of-sight direction may be techniques known in the art. For example, an image with a different line-of-sight direction can be created from a given image based on three-dimensional information contained in the image or derived from the image. For example, machine learning techniques can be used to create, from a given image, an image with a different line-of-sight direction. When the image 10 is a moving image, for example, a still image can be generated for each frame of the moving image.
The first line-of-sight direction is the direction of viewing the object from further to the left than the line-of-sight direction of the image 10 shown in FIG. 1A.
FIG. 3B shows a still image 42, created from the image 10 shown in FIG. 1A, in which the object is viewed from a second line-of-sight direction. As described above, techniques for creating, from a given image, an image with a different line-of-sight direction may be techniques known in the art.
The second line-of-sight direction is the direction of viewing the object from further to the right than the line-of-sight direction of the image 10 shown in FIG. 1A.
In the example shown in FIGS. 3A and 3B, a pseudo three-dimensional moving image is generated and displayed by joining the created still images 41 and 42 in temporal succession. In a pseudo three-dimensional moving image having two still images with different viewpoints as frames, the parallax arising from the different viewpoints makes the object image 11 appear to have perspective. This allows the object image 11 to appear three-dimensional.
 生成された疑似3次元動画では、例えば、静止画41および静止画42が交互に繰り返し出現するようにしてもよい。これにより、任意の長さの動画が生成され得る。あるいは、画像10が動画である場合には、生成された動画では、各フレームから作成された静止画41および静止画42が、フレームの順に連続して出現するようにしてもよい。このとき、フレームレートは任意の値に設定され得る。例えば、画像10のフレームレートを維持すると、画像10の長さの2倍の長さの疑似3次元動画が生成され得る。例えば、フレームレートを2倍にすることで、画像10の長さと同じ長さの疑似3次元動画が生成され得る。 In the generated pseudo-three-dimensional video, for example, still images 41 and 42 may appear alternately and repeatedly. This can generate animations of arbitrary length. Alternatively, when the image 10 is a moving image, the still images 41 and 42 created from each frame may appear continuously in frame order in the generated moving image. At this time, the frame rate can be set to any value. For example, maintaining the frame rate of the image 10 may generate a pseudo-3D animation that is twice the length of the image 10 . For example, by doubling the frame rate, a pseudo-three-dimensional video of the same length as the image 10 can be generated.
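The interleaving and frame-rate arithmetic described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure; frames are represented by placeholder identifiers rather than actual image data, and all function names are assumptions:

```python
def assemble_pseudo_3d_frames(frames):
    """Interleave two synthesized views of each source frame in frame
    order, like still images 41 and 42: left view, then right view."""
    out = []
    for f in frames:
        out.append(("L", f))  # view from the first (more leftward) direction
        out.append(("R", f))  # view from the second (more rightward) direction
    return out

def output_duration_sec(n_source_frames, output_fps):
    """The interleaved video has twice as many frames as the source, so
    keeping the source frame rate doubles the duration, while doubling
    the frame rate keeps the duration unchanged."""
    return 2 * n_source_frames / output_fps
```

For a one-second, 30 fps source, keeping 30 fps for the output gives a two-second pseudo-3D video, while raising the output to 60 fps gives a one-second one, matching the two cases described above.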
 The elements described above (for example, the element 12 or the horizontal scanning lines 13) may be added to the generated pseudo three-dimensional moving image, and sound may be played back together with the image; this enhances the sense of depth of the image 11 of the subject. When the element 12 is added, it can be rotated about an axis, further enhancing the sense of depth of the image 11.
 The technique described above can represent an image of a subject three-dimensionally even when, for example, no three-dimensional model of the subject exists and only a two-dimensional image of it is available. Of course, the technique can also be applied when a three-dimensional model of the subject does exist.
 FIG. 4A shows an example of an image display device for displaying a pseudo three-dimensional image.
 In this example, the image display device is a rotary display 20 (also called a "hologram display") in which at least one member 21 rotates to form a display surface. The at least one member 21 can rotate about a rotation axis C1, and rotating the at least one linear member 21 forms a planar display surface. A light source (for example, an LED) is arranged on the at least one member 21, and its light emission is controlled according to the rotation angle of the member, so that an image is projected onto the display surface through the afterimage (persistence-of-vision) effect. Because the background can be seen through the rotating member 21, the rotary display 20 makes the image appear as if it were floating in mid-air.
 The frame rate of the image displayed on the rotary display 20 depends on the rotational speed of the at least one member 21 and is generally significantly lower than that of an ordinary display device, for example about 20 fps to about 40 fps, such as about 30 fps. As a result, the image displayed on the rotary display 20 can be coarser than an image on an ordinary display device, which strengthens the impression that the displayed image is a virtual one.
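The persistence-of-vision principle described above, in which LED emission is controlled according to the rotation angle of the member, can be sketched as follows. The geometry (a single blade of LEDs sweeping a square image in a plane) and all names are illustrative assumptions, not details taken from the disclosure:

```python
import math

def pixel_for_led(angle_rad, led_index, num_leds, image_size):
    """For a linear blade of LEDs rotating in a plane, return the (x, y)
    pixel of a square image that the LED at radial position led_index
    should emit when the blade is at angle_rad, so that the afterimage
    effect reconstructs the image on the swept display surface."""
    r = (led_index + 0.5) / num_leds              # normalized radius, 0..1
    cx = cy = image_size / 2.0                    # image center
    x = int(cx + r * cx * math.cos(angle_rad))
    y = int(cy + r * cy * math.sin(angle_rad))
    # Clamp to the image bounds.
    x = min(max(x, 0), image_size - 1)
    y = min(max(y, 0), image_size - 1)
    return x, y
```

The controller would call this once per LED per angular step; the displayed frame rate is then bounded by how fast the blade completes a revolution, which is why such displays run far below ordinary panel frame rates.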
 The rotary display 20 has a main body 22 configured to be rotatable about a rotation axis C2. Rotating the main body 22 about the rotation axis C2 changes the orientation of the display surface formed by the at least one member 21. For example, the rotary display 20 can detect, by detection means (not shown), the position of a user viewing it, and rotate the main body 22 about the rotation axis C2 so that the display surface faces the detected position of the user U.
 FIG. 4B shows an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation shown in FIG. 4A.
 In the example shown in FIG. 4B, as in the example of FIG. 1A, the image 11 of the subject is displayed on the display surface 23. In the image 11, the subject faces forward, so the user U viewing the display surface sees the front side of the subject.
 The pseudo three-dimensional image described above with reference to FIGS. 1B to 3B can be displayed on the display surface 23. Because the background of the rotary display 20 is visible through the display surface 23, a pseudo three-dimensional image displayed there appears to float in mid-air, which emphasizes its three-dimensional feel. Moreover, displaying the pseudo three-dimensional image on the display surface 23 of the rotary display 20, whose frame rate is significantly low, strengthens the impression that it is a virtual image.
 For example, as shown in FIG. 4C, when the user U moves around the rotary display 20 in the direction of the arrow to its left side, the rotary display 20 can detect the position of the user U by the detection means (not shown) and rotate the main body 22 clockwise about the rotation axis C2 to change the orientation of the display surface. The display surface of the rotary display 20 then faces the user U, who can continue to see it even after moving.
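The reorientation step described above, rotating the main body about axis C2 toward the detected user position, reduces to computing a yaw angle from that position. A minimal sketch, in which the planar coordinates and function name are assumptions for illustration:

```python
import math

def yaw_to_face_user(display_xy, user_xy):
    """Angle (radians, counterclockwise from the +x axis) about the
    vertical axis C2 at which the display surface faces the user
    detected at user_xy, with the display body located at display_xy."""
    dx = user_xy[0] - display_xy[0]
    dy = user_xy[1] - display_xy[1]
    return math.atan2(dy, dx)
```

As the user walks around the display, repeatedly re-evaluating this angle and driving the body toward it keeps the display surface facing the user, as in FIGS. 4A and 4C.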
 FIG. 4D shows an example of an image displayed on the display surface 23 when it faces the user position shown in FIG. 4C.
 Because the user position shown in FIG. 4C is to the left of the user position shown in FIG. 4A, the user U can be regarded as now viewing, from the left side, the subject that faces forward in the image displayed in FIG. 4A. Accordingly, the display surface 23 facing the user position of FIG. 4C can display an image 11' of the forward-facing subject as seen from the left. This can give the user U the illusion that the subject in the images 11 and 11' is a three-dimensional object. The illusion can be further strengthened because the display surface of the rotary display 20 is directed toward the user U at both the user position of FIG. 4A and that of FIG. 4C: since the display surface always faces the user U, the user U is less likely to perceive that it is flat.
 Displaying the pseudo three-dimensional image described above with reference to FIGS. 1B to 3B on such a display surface 23 can further emphasize the pseudo three-dimensional effect.
 For example, with the rotary display 20 described above, the user U can face the display surface 23 squarely from any angular position around the display and can thus view an undistorted image. For example, from whichever angular position the user U1 views the rotary display 20, the image of a horse displayed on the display surface 23 is presented to the user U1 without distortion, as shown in FIG. 4E(a). On the other hand, while the user U1 is viewing the display, if another user U2 tries to view the rotary display 20 from an angular position different from that of the user U1, the display surface 23 cannot face the user U2, and as a result the user U2 sees a distorted image; for example, the horse image on the display surface 23 is presented to the user U2 distorted, as shown in FIG. 4E(b).
 In one embodiment of the present invention, a rotary display 25 can form a three-dimensional display surface by rotating at least one member 26 about a first rotation axis C1 and also rotating the at least one member 26 about a second rotation axis C2. The second rotation axis C2 may be substantially perpendicular to the first rotation axis C1. In FIG. 4F(a), the direction of rotation about the first rotation axis C1 is denoted RC1, and that about the second rotation axis C2 is denoted RC2. Rotating the at least one linear member 26 about both the first rotation axis C1 and the second rotation axis C2 can form a substantially spherical display surface. A light source (for example, an LED) is arranged on the at least one member 26, and its light emission is controlled according to the rotation angle of the member, so that an image is projected onto the substantially spherical display surface through the afterimage effect. For example, the rotary display 25 can form a substantially spherical display surface as shown in FIG. 4F(b). Apart from the configuration described above, the rotary display 25 may have the same configuration as the rotary display 20 described above.
 With such a substantially spherical display surface, even if another user U2 views the rotary display 25 from an angular position different from that of the user U1 while the user U1 is viewing it, the user U2 can see an undistorted image just as the user U1 does. For example, even when a plurality of other users view the rotary display 25 from angular positions different from that of the user U1, each of them can likewise see an undistorted image.
 The pseudo three-dimensional image described above with reference to FIGS. 1B to 3B can be displayed on the substantially spherical display surface of the rotary display 25. Because the background of the rotary display 25 is visible through the display surface, a pseudo three-dimensional image displayed there appears to float in mid-air, emphasizing its three-dimensional feel. Displaying the image at the display's significantly low frame rate strengthens the impression that it is a virtual image. Furthermore, because the undistorted pseudo three-dimensional image can be viewed from any angular position around the rotary display 25, its three-dimensional feel can stand out.
 The rotary displays 20 and 25 described above rotate at least one member 21 or 26 about a common axis to form a single planar or substantially spherical display surface, but the present invention is not limited to this. For example, it is also within the scope of the present invention to rotate a plurality of members about respective axes to form a plurality of display surfaces.
 FIG. 4G shows an example of a rotary display 27 in one embodiment. The rotary display 27 is configured to form a first display surface 28 by rotating at least one first member and a second display surface 29 by rotating at least one second member. The first and second members are each rotated about two axes and can each form a substantially spherical display surface. In the example shown in FIG. 4G, the first and second members are rotated about a common axis (the body axis). Apart from the configuration described above, the rotary display 27 may have the same configuration as the rotary display 20 or 25 described above.
 Forming a plurality of display surfaces in this way expands the display area. For example, a separate image may be displayed on each display surface, or a single image may be displayed across the plurality of display surfaces. Multiple display surfaces broaden the range of visual expression.
 The pseudo three-dimensional image described above with reference to FIGS. 1B to 3B can be displayed on the substantially spherical display surfaces of the rotary display 27. Because the background of the rotary display 27 is visible through the display surfaces, a pseudo three-dimensional image displayed there appears to float in mid-air, emphasizing its three-dimensional feel. The display's significantly low frame rate strengthens the impression that the image is virtual, and the ability to view the undistorted pseudo three-dimensional image from any angular position around the display makes its three-dimensional feel stand out. Furthermore, the plurality of display surfaces makes varied pseudo three-dimensional imagery possible.
 The examples above describe displaying a pseudo three-dimensional image on the special rotary displays 20, 25, and 27, but the image display device that displays the pseudo three-dimensional image is not limited to these; the pseudo three-dimensional image can be displayed on any other image display device. Preferably, the image display device is a transparent display through which the background can be seen, since this enhances the pseudo three-dimensional effect of the displayed image. Additionally or alternatively, the image display device may be a display with a significantly low frame rate; displaying the pseudo three-dimensional image at a low frame rate strengthens the impression that it is a virtual image and thereby enhances the pseudo three-dimensional effect.
1.2 Method for Providing Virtual Reality Images Without Relying on a Dedicated Display Device
 The inventors of the present invention have developed a technique for providing virtual reality images to a user without using a dedicated display device (for example, VR goggles or a head-mounted display). With this technique, the user can enjoy natural virtual reality images without discomfort and without wearing a dedicated display device. For example, this technique makes it possible to display natural virtual reality images on a stationary display or on the rotary displays 20, 25, 27, etc. described above.
 FIG. 5A schematically illustrates an example of the flow of the technique in one embodiment of the present invention.
 First, an image 51 on which the image to be displayed as a virtual reality image is based is acquired. The image 51 is preferably represented in a specific projection, for example the equirectangular projection. In the equirectangular projection, lines of latitude and longitude intersect at right angles and at equal intervals, so the distance between two points is represented correctly. The equirectangular projection is commonly used when displaying virtual reality images. As shown in FIG. 5A, an image 51 represented in the equirectangular projection appears to contain distortion.
 Next, the image 51 is pasted onto the inner surface of a virtual sphere 52. Pasting an image represented in the equirectangular projection onto the inner surface of a sphere produces a natural, undistorted image.
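The pasting step can be made concrete as a mapping from equirectangular pixel coordinates to directions on the inner surface of the sphere: longitude is linear in the horizontal pixel coordinate and latitude is linear in the vertical one, which is exactly the equal spacing of the projection's grid lines. A sketch under those standard conventions (the function name is an assumption):

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to the unit direction from
    the sphere center toward the point of the inner surface where that
    pixel is pasted: longitude linear in u, latitude linear in v."""
    lon = (u / width) * 2 * math.pi - math.pi      # -pi .. +pi
    lat = math.pi / 2 - (v / height) * math.pi     # +pi/2 .. -pi/2
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return x, y, z
```

The center pixel of the panorama maps to the forward direction on the sphere, and the visible distortion of the flat image 51 disappears once every pixel is placed along its own direction.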
 The virtual sphere 52 is a virtual sphere centered on the viewpoint of the user U whose radius is the distance between the viewpoint of the user U and the display 20 displaying the virtual reality image. For example, as shown in FIG. 5B, when the distance between the viewpoint of the user U and the display 20 is small, the diameter of the virtual sphere 52 is small, whereas, as shown in FIG. 5C, when that distance is large, the diameter of the virtual sphere 52 is large.
 The distance between the viewpoint of the user U and the display 20 can be measured, for example, by detection means (not shown) that the display 20 may include. The detection means can detect the position of the eyes of the user U and measure the distance between the viewpoint of the user U and the display 20 using techniques known in the field of distance measurement.
 Next, the portion of the image 51 pasted on the part of the inner surface of the virtual sphere 52 that corresponds to the display surface of the display 20 is identified, and the image 53 of the identified portion is displayed on the display surface of the display 20, where the user U can see it.
 For example, as shown in FIG. 5B, when the distance between the viewpoint of the user U and the display 20 is small, the diameter of the virtual sphere 52 is small, so the portion of the image pasted on the part of the inner surface corresponding to the display surface is relatively large. Conversely, as shown in FIG. 5C, when that distance is large, the diameter of the virtual sphere 52 is large, so the corresponding portion of the image is relatively small. That is, when the user U approaches the display, a larger region of the image 51 is displayed so as to express the perspective between the user U and the display, and when the user U moves away from the display, a smaller region of the image 51 is displayed.
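The relationship described above, in which a closer viewer sees a larger region of the image 51, follows from the geometry of the virtual sphere: a display of fixed physical width subtends a larger angle on the smaller sphere. A sketch of that relationship, in which the flat-display approximation and the function name are assumptions:

```python
import math

def visible_longitude_fraction(display_width_m, viewer_distance_m):
    """Approximate fraction of the 360-degree panorama that falls on a
    flat display of the given width when the virtual sphere's radius
    equals the viewer's distance. The closer the viewer, the larger the
    fraction of the image 51 that must be shown."""
    half_angle = math.atan2(display_width_m / 2.0, viewer_distance_m)
    return (2 * half_angle) / (2 * math.pi)
```

For a 1 m display viewed from 0.5 m, the display spans a quarter of the full horizontal panorama; from 2 m away, it spans a much smaller fraction, matching FIGS. 5B and 5C.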
 For example, consider the car 54 in the image 53 in FIGS. 5B and 5C. In the image 53 shown in FIG. 5B, the car 54 is displayed small on the display 20, but because the user U is close to the display 20, the user U perceives the display 20 itself as large and therefore perceives the car 54 as being of a corresponding size. When the user U moves away from the display 20, the car 54 is displayed large on the display 20 as in the image 53 shown in FIG. 5C, but the user U perceives the display 20 itself as small and therefore again perceives the car 54 as being of a corresponding size. As a result, the user U perceives the car 54 as substantially the same size whether approaching or moving away from the display 20.
 Through the image displayed on the display 20, an object in the foreground of the image is perceived as close to the user U, and an object in the background is perceived as far from the user U, whether the user U approaches or moves away from the display. Because the perspective of the displayed image is maintained in this way, the user U can view the image through the display 20 with a real-world sense of space; in other words, the user U can experience a virtual reality image through the display 20.
 The example above describes displaying the image on the rotary display 20, but the present invention is not limited to this; the image can be displayed on any display as long as the distance between the user U and the display can be measured.
 The technique for representing an image three-dimensionally and the technique for providing virtual reality images described above can be implemented, for example, by the system 100 for displaying an image three-dimensionally described below.
2. Configuration of a System for Displaying an Image Three-Dimensionally
 FIG. 6A shows an example of the configuration of a system 100 for displaying an image three-dimensionally.
 The system 100 comprises receiving means 110, creating means 120, and display means 130.
 The receiving means 110 is configured to receive an image and can do so in any manner. The received image contains an image of a subject. The receiving means 110 may receive the image from outside the system 100 or from inside the system 100 (for example, from storage means that the system may include). When receiving an image from outside the system 100, the receiving means 110 may, for example, receive the image from a storage medium connected to the system 100, or may receive the image via a network connected to the system 100. The type of network does not matter; any network, such as the Internet or a LAN, may be used.
 The received image can be in any data format. It may be a two-dimensional image containing two-dimensional information (height x width) or a three-dimensional image containing three-dimensional information (height x width x depth).
 The received image is passed to the creating means 120.
 The creating means 120 is configured to create a pseudo three-dimensional image by processing the image. For example, the creating means 120 can create a pseudo three-dimensional image by processing the image so as to add, within the image, a three-dimensional representation of an element separate from the image of the subject (see, for example, FIGS. 1B to 2D). Here, the three-dimensional representation of the element includes at least one of shading the element, giving the element light and dark areas, varying the size of the element, or giving the element perspective. The processing by the creating means 120 may be image processing known in the art.
 The creating means 120 can create a pseudo three-dimensional moving image, for example, by rotating the three-dimensional representation of the element added in the image around the image of the subject. Such a pseudo three-dimensional moving image is preferable in that it enhances the pseudo three-dimensional effect of the image of the subject.
 The creating means 120 can, for example, add the three-dimensional representation of the element so that one part of it is superimposed on top of the image of the subject, hiding part of that image, while another part of it is superimposed beneath the image of the subject and is hidden by it. This further enhances the pseudo three-dimensional effect of the image of the subject.
 Here, the element can be any object and can have any shape, size, color, and so on.
 The creating means 120 can, for example, add a three-dimensional representation of a plurality of horizontal scanning lines onto the image of the subject. The three-dimensional representation of the horizontal scanning lines can be scanning lines drawn so as to follow the three-dimensional contour of the subject. The three-dimensional contour of the subject can be determined, for example, based on three-dimensional information contained in the image or derived from it. The process of deriving three-dimensional information from an image can be performed, for example, by techniques known in the art, such as an AI model capable of estimating depth information from an image.
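The contour-following scanning lines can be sketched as vertical offsets computed from an estimated depth map. The depth convention (0 meaning far, 1 meaning near), the amplitude parameter, and the function name are illustrative assumptions, not details from the disclosure:

```python
def scanline_offsets(depth_map, num_lines, amplitude):
    """For num_lines evenly spaced horizontal scanning lines over an
    image with per-pixel estimated depth (rows of values, 0 = far,
    1 = near), compute per-column vertical offsets proportional to
    depth, so each line appears to follow the subject's 3-D contour."""
    h = len(depth_map)
    rows = [int((i + 0.5) * h / num_lines) for i in range(num_lines)]
    return {y: [amplitude * d for d in depth_map[y]] for y in rows}
```

A renderer would then draw each scanning line displaced by these offsets, so the lines bulge over near parts of the subject and lie flat over the background.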
 The creating means 120 can, for example, generate a plurality of images with different viewpoints from the image (see, for example, FIGS. 3A and 3B) and create a pseudo three-dimensional moving image by joining the images with different viewpoints in temporal succession. The images with different viewpoints can be created, for example, based on three-dimensional information contained in the image or derived from it: a virtual viewpoint is set, and the appearance from that viewpoint is estimated from the three-dimensional information. The images with different viewpoints can be created using techniques known in the art, for example an AI model capable of producing image pairs with parallax.
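The viewpoint-synthesis step can be sketched, for one image row, as a depth-dependent horizontal shift (a simple form of depth-image-based rendering). The pixel values, the disparity scale, and the hole handling here are illustrative assumptions:

```python
def shift_view(row, depth_row, baseline):
    """Approximate a new viewpoint for one image row by shifting each
    pixel horizontally by a disparity proportional to its estimated
    depth (0 = far, 1 = near). Positions left uncovered (holes) stay
    None; a real renderer would inpaint them."""
    w = len(row)
    out = [None] * w
    for x, (pix, d) in enumerate(zip(row, depth_row)):
        nx = x + int(round(baseline * d))
        if 0 <= nx < w:
            out[nx] = pix
    return out
```

Running this with positive and negative baselines yields the left and right views whose parallax produces the pseudo three-dimensional effect described above.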
 The pseudo-three-dimensional image created by the creation means 120 is passed to the display means 130.
 The display means 130 is configured to display the pseudo-three-dimensional image. The display means 130 may be any display means as long as it can display an image. The display means 130 is, for example, a liquid crystal display, an LED display, or the like, but is not limited to these. In one embodiment, the display means 130 may be a rotating display in which at least one member rotates to form a display surface. The rotating display may be, for example, the rotating display 20, 25, 27, etc. described above.
 In one embodiment, the display means 130 may be configured such that the orientation of its display surface can be changed. The display means 130 may be able to change the orientation of the display surface using any mechanism.
 The system 100 may further comprise detection means configured to detect the position of a user located in front of the display means 130. The detection means may be any sensor. The detection means may be, for example, a camera. In this example, the system 100 can change the orientation of the display surface of the display means 130 so that the display surface of the display means 130 faces the position of the user detected by the detection means. This allows the user to always view the display surface of the display means 130 from the front.
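The geometry of pointing the display surface at the detected user can be sketched in a few lines; this is an assumption-laden illustration (a display at the origin facing the +z direction, a pan mechanism that accepts a yaw angle), not an implementation disclosed in the publication.

```python
import math

def pan_angle_deg(user_x, user_z):
    """Yaw angle (degrees) that turns the display surface toward a user
    detected at lateral offset user_x and frontal distance user_z, with
    the display at the origin and an angle of 0 meaning straight ahead.
    A pan mechanism driven with this angle keeps the surface facing the
    user so the user always sees it from the front."""
    return math.degrees(math.atan2(user_x, user_z))
```

For example, a user detected 1 m to the right at 1 m distance would call for a 45-degree pan; a centered user requires no rotation.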
 FIG. 6B shows an example of the configuration of a system 100' for displaying an image three-dimensionally in another embodiment. The system 100' has the same configuration as the system 100, except that it comprises means for reproducing, in synchronization with the image, a sound for enhancing the pseudo-three-dimensional effect of the pseudo-three-dimensional image. Here, configurations that are the same as those described above with reference to FIG. 6A are given the same reference numerals, and detailed description thereof is omitted.
 The system 100' comprises receiving means 110, creation means 120, display means 130, synchronization means 140, and reproduction means 150.
 The receiving means 110 is configured to receive an image. The received image is passed to the creation means 120.
 The creation means 120 is configured to create a pseudo-three-dimensional image by processing the image. The pseudo-three-dimensional image created by the creation means 120 is passed to the display means 130 and the synchronization means 140.
 The display means 130 is configured to display the pseudo-three-dimensional image.
 The synchronization means 140 is configured to synchronize a sound with the image. The image may be the pseudo-three-dimensional image created by the creation means 120 or the image received by the receiving means 110. The synchronization means 140 can synchronize the sound with the image using any technique known in the field of moving-image creation.
 The sound may be any sound. The sound may be, for example, a sound directly related to the image, a sound somewhat related to the image, or a sound unrelated to the image. Preferably, the sound is a sound somewhat related to the image, and more preferably a sound directly related to the image. A sound directly related to the image may be, for example, the sound that was already synchronized with the image received by the receiving means 110 (for example, when the image was a still image with sound or a moving image with sound). A sound somewhat related to the image may be, for example, a sound associated with the image (for example, the sound of wings or a birdcall for an image of a bird, or the sound of an engine or a horn for an image of a car). Even when a sound was synchronized with the image, the sound may be a sound other than the one that was synchronized with the image.
 The synchronization means 140 can synchronize the sound such that, when the synchronized sound is reproduced, it sounds as if it is changing based on the motion in the image. For example, as described above with reference to FIGS. 1E to 1J, the synchronization means 140 can synchronize the sound such that the sound appears to change according to the relationship between the three-dimensional representation of the element in the pseudo-three-dimensional image and the image of the target. The change in the sound may be, for example, at least one of its loudness, its pitch, and its timbre. For example, the sound can be synchronized such that a sound is emitted when at least a part of the image of the target touches the three-dimensional representation of the element. For example, the sound can be synchronized such that it becomes louder or quieter as at least a part of the image of the target extends farther from the three-dimensional representation of the element, such that it becomes higher or lower as that part extends farther, or such that it approaches a different timbre as that part extends farther. For example, the sound can be synchronized such that, when at least a part of the image of the target moves outside the three-dimensional representation of the element, the sound appears to change in accordance with that motion.
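One of the mappings described above, distance-dependent loudness and pitch, can be sketched as a pure function; the specific gain and pitch coefficients here are arbitrary assumptions for illustration, not values taken from the disclosure.

```python
def modulated_sound(distance, base_gain=1.0, base_pitch_hz=440.0):
    """Map how far the target extends beyond the element's 3-D
    representation (distance <= 0 means at or inside it) to a playback
    gain and pitch: the farther the target reaches past the element,
    the louder and higher the sound becomes."""
    if distance <= 0.0:
        return base_gain, base_pitch_hz
    return (base_gain * (1.0 + 0.5 * distance),
            base_pitch_hz * (1.0 + 0.1 * distance))
```

A playback engine would evaluate this per frame from the target's current position and apply the result to the synchronized sound track, so the listener hears the sound swell as the target pushes outward.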
 For example, the synchronization means 140 can synchronize the sound such that the sound appears to change according to the relationship between a boundary set in the pseudo-three-dimensional image and the image of the target. This corresponds to the case, in the example described above, in which the three-dimensional representation of the element in the pseudo-three-dimensional image is invisible. For example, the sound can be synchronized such that a sound is emitted in response to at least a part of the image of the target crossing the boundary. For example, the sound can be synchronized such that it becomes louder or quieter as at least a part of the image of the target extends farther from the boundary, such that it becomes higher or lower as that part extends farther, or such that it approaches a different timbre as that part extends farther. For example, the sound can be synchronized such that, when at least a part of the image of the target moves outside the boundary, the sound appears to change in accordance with that motion.
 The boundary may have any shape. For example, it may be a shape that surrounds the image of the target (for example, a sphere, an ellipsoid, a cylinder, a prism, etc.) or a shape that does not surround the image of the target (for example, a plane, a curved surface, a hemisphere, etc.). The boundary may or may not change over time.
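For a spherical boundary, the two quantities the synchronization step needs, how far the target is from the boundary and whether it has just crossed it, reduce to a signed distance. This is a minimal sketch for the spherical case only; other boundary shapes would use other signed-distance functions.

```python
import math

def signed_distance_sphere(point, center, radius):
    """Signed distance from `point` to a spherical boundary: negative
    inside, zero on the surface, positive outside. The positive part is
    the 'distance from the boundary' the sound mapping can consume."""
    return math.dist(point, center) - radius

def crossed(prev_point, point, center, radius):
    """True when the target's motion from prev_point to point crosses the
    boundary (the signed distance changes sign), i.e. the event on which
    a sound can be emitted."""
    a = signed_distance_sphere(prev_point, center, radius)
    b = signed_distance_sphere(point, center, radius)
    return (a <= 0.0) != (b <= 0.0)
```

Evaluating `crossed` once per frame against the target's tracked position yields the trigger events, and the signed distance feeds the continuous loudness or pitch changes described above.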
 The sound synchronized with the image by the synchronization means 140 is passed to the reproduction means 150.
 The reproduction means 150 is configured to reproduce the sound synchronized with the image while the display means 130 is displaying the image. The reproduction means 150 may be any reproduction means as long as it can reproduce the sound in time with the image being displayed. The reproduction means 150 is, for example, a speaker. The speaker may, for example, be built into the display means 130 or externally attached to the display means 130.
 In the example described above, reproducing a sound in accordance with the pseudo-three-dimensional image in order to emphasize the three-dimensional impression through both visual and auditory effects was explained, but the present invention is not limited to this. For example, a configuration that emphasizes the three-dimensional impression through auditory effects alone is also within the scope of the present invention. In this case, for example, the creation means 120 and the display means 130 may be omitted from the configuration shown in FIG. 6B.
 The systems 100 and 100' described above may be implemented, for example, in a user device 200.
 FIG. 6C shows an example of the configuration of the user device 200.
 The user device 200 may be any terminal device, such as a smartphone, a tablet computer, smart glasses, a smartwatch, a laptop computer, or a desktop computer.
 The user device 200 comprises a communication interface unit 210, an input unit 220, a display unit 230, a memory unit 240, and a processor unit 250.
 The communication interface unit 210 controls communication between the user device 200 and the outside. The processor unit 250 of the user device 200 can receive information from outside the user device 200 and transmit information to the outside of the user device 200 via the communication interface unit 210. For example, the processor unit 250 of the user device 200 can receive an image via the communication interface unit 210, and can transmit the pseudo-three-dimensional image to the outside of the user device 200. The communication interface unit 210 may control communication in any manner.
 For example, the receiving means 110 of the system 100 may be implemented by the communication interface unit 210.
 The input unit 220 allows the user to input information into the user device 200. The manner in which the input unit 220 allows the user to input information does not matter. For example, if the input unit 220 is a touch panel, the user may input information by touching the touch panel. Alternatively, if the input unit 220 is a mouse, the user may input information by operating the mouse. Alternatively, if the input unit 220 is a keyboard, the user may input information by pressing keys on the keyboard. Alternatively, if the input unit 220 is a microphone, the user may input information by voice.
 The display unit 230 may be any display for displaying information.
 For example, the display means 130 of the system 100 may be implemented by the display unit 230.
 The memory unit 240 stores programs for executing the processing in the user device 200, data required for executing those programs, and the like. The memory unit 240 stores, for example, part or all of a program for displaying an image three-dimensionally (for example, a program that realizes the processing shown in FIG. 7A and FIG. 7B described later). The memory unit 240 may also store, for example, part or all of a program for displaying an image on a display (for example, a program that realizes the processing shown in FIG. 8 described later). The memory unit 240 may store applications that implement any functions. It does not matter how a program is stored in the memory unit 240. For example, a program may be pre-installed in the memory unit 240. Alternatively, a program may be installed in the memory unit 240 by being downloaded via the network 500. A program may be stored on a computer-readable tangible storage medium. The memory unit 240 may be implemented by any storage means.
 The processor unit 250 controls the operation of the user device 200 as a whole. The processor unit 250 reads a program stored in the memory unit 240 and executes it. This allows the user device 200 to function as a device that executes the desired steps. The processor unit 250 may be implemented by a single processor or by a plurality of processors.
 For example, the creation means 120 of the system 100 may be implemented by the processor unit 250. For example, the synchronization means 140 of the system 100 may be implemented by the processor unit 250.
 The user device 200 may comprise, for example, a detection unit configured to detect the position of a user located in front of the display unit 230. The detection unit may be any sensor. The detection unit may be, for example, a camera. The detection means of the system 100 may be implemented by the detection unit.
 The user device 200 may comprise, for example, a reproduction unit (not shown) for reproducing sound. The reproduction unit may be any speaker for reproducing sound. For example, the reproduction means 150 of the system 100 may be implemented by the reproduction unit.
 In the example shown in FIG. 6C, each component of the user device 200 is provided within the user device 200, but the present invention is not limited to this. Any of the components of the user device 200 may be provided outside the user device 200. For example, the display unit 230 may be provided outside the user device 200 (that is, the display unit 230 may be an external display). For example, when the input unit 220, the display unit 230, the memory unit 240, and the processor unit 250 are each configured as separate hardware components, those hardware components may be connected via any type of network. The type of network does not matter; the hardware components may be connected via a LAN, wirelessly, or by wire, for example. The user device 200 is not limited to a particular hardware configuration. For example, configuring the processor unit 250 with analog circuits instead of digital circuits is also within the scope of the present invention. The configuration of the user device 200 is not limited to that described above as long as its functions can be realized.
 The components of the system 100 may, for example, be provided on the user device 200 side as described above, or may be distributed between the user device 200 and a server device. When the components of the system 100 are distributed between the user device 200 and the server device, the user device 200 may comprise the display means 130 (and the reproduction means 150), and the server device may comprise the receiving means 110 and the creation means 120 (and the synchronization means 140).
 3. Processing by the System for Displaying an Image Three-Dimensionally
 FIG. 7A shows an example of processing 700 by the system 100 for displaying an image three-dimensionally. The example shown in FIG. 7A is described for the case in which the system 100 is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200. As described above, the processor unit 250 may implement the creation means 120.
 In step S701, the processor unit 250 receives an image. The image includes the image of a target. The processor unit 250 can receive, for example, an image received from outside the system 100 via the communication interface unit 210.
 In step S702, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S701. The processor unit 250 can create the pseudo-three-dimensional image by processing the image so as to add, within the image, a three-dimensional representation of an element separate from the image of the target (see, for example, FIGS. 1B to 2D). Here, the three-dimensional representation of the element includes at least one of, for example, shading the element, adding light and dark to the element, varying the size of the element, or adding perspective to the element. The processing by the processor unit 250 may be image processing known in the art.
 The processor unit 250 can create a pseudo-three-dimensional moving image, for example, by rotating the three-dimensional representation of the element added within the image around the image of the target. Such a pseudo-three-dimensional moving image is preferable in that it enhances the pseudo-three-dimensional effect of the image of the target.
 In addition to or instead of the above, the processor unit 250 can, for example, add the three-dimensional representation of the element such that one part of it is superimposed over the image of the target, so that a part of the image of the target is hidden by the three-dimensional representation of the element, and another part of it is superimposed under the image of the target, so that that other part of the three-dimensional representation of the element is hidden by the image of the target. This can further enhance the pseudo-three-dimensional effect of the image of the target.
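The front/behind sandwiching just described is, at bottom, back-to-front compositing: the far half of the element is painted first, then the target, then the near half. The following toy sketch illustrates this with sparse layers on a tiny canvas; the data structures are assumptions chosen for brevity, not the disclosed implementation.

```python
def composite(layers, height, width):
    """Paint sparse layers back-to-front onto a canvas; each layer maps
    (row, col) -> pixel value. A later (nearer) layer overwrites earlier
    ones, so splitting an element into a far half painted before the
    target and a near half painted after it sandwiches the target."""
    canvas = [[0] * width for _ in range(height)]
    for layer in layers:
        for (y, x), value in layer.items():
            canvas[y][x] = value
    return canvas

# far half of a ring (7), then the target (9), then the near half (7)
far_ring = {(1, 0): 7, (1, 2): 7}
target = {(1, 1): 9, (1, 2): 9}    # overlaps the far half at (1, 2)
near_ring = {(1, 1): 7}            # overlaps the target at (1, 1)
img = composite([far_ring, target, near_ring], 3, 3)
```

Where the target overlaps the far half, the target wins; where the near half overlaps the target, the element wins, producing the occlusion cue that makes the element appear to encircle the target in depth.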
 In addition to or instead of the above, the processor unit 250 can, for example, add a three-dimensional representation of a plurality of horizontal scanning lines onto the image of the target. The three-dimensional representation of the plurality of horizontal scanning lines may be scanning lines drawn so as to follow the three-dimensional contour shape of the target. The processor unit 250 can determine the three-dimensional contour shape of the target based on three-dimensional information contained in the image or three-dimensional information derived from the image. The processing for deriving three-dimensional information from the image may be performed, for example, by a technique known in the art. The processing for deriving three-dimensional information from the image may be performed using an AI model capable of estimating depth information from an image.
 In addition to or instead of the above, the processor unit 250 can, for example, generate a plurality of images with different viewpoints from the image (see, for example, FIG. 3A and FIG. 3B) and create a pseudo-three-dimensional moving image by combining the plurality of images with different viewpoints in temporal succession. The processor unit 250 can create the plurality of images with different viewpoints based on, for example, three-dimensional information contained in the image or three-dimensional information derived from the image. The plurality of images with different viewpoints may be created, for example, using a technique known in the art, such as an AI model capable of creating image pairs having parallax.
 In step S703, the processor unit 250 displays the pseudo-three-dimensional image created in step S702 on the display unit 230.
 The processor unit 250 can, for example, display the pseudo-three-dimensional image created in step S702 on the display unit 230 as it is. Alternatively, the processor unit 250 may display the pseudo-three-dimensional image on the display unit 230 after changing the orientation of the target in the pseudo-three-dimensional image according to the orientation of the display surface of the display unit 230. In this way, for example, an image corresponding to the orientation of the user with respect to the display unit 230 is displayed on the display unit 230.
 Although displaying the pseudo-three-dimensional image on the display unit 230 after changing the orientation of the target in the pseudo-three-dimensional image according to the orientation of the display surface of the display unit 230 has been described, the displayed image is not limited to the pseudo-three-dimensional image created in step S702. For example, when the image received in step S701 is a three-dimensional image, the orientation of the target in the image received in step S701 can be changed and that image can be displayed on the display unit 230. Even in this case, the target in the image appears to be a three-dimensional object, so the displayed image can, as a result, be a pseudo-three-dimensional image.
 For example, the processing 700 may be distributed between the user device 200 and a server device. In this case, for example, steps S701 and S702 may be performed by the server device, and step S703 may be performed by the user device 200.
 FIG. 7B shows an example of processing 710 by the system 100' for displaying an image three-dimensionally. The example shown in FIG. 7B is described for the case in which the system 100' is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200. As described above, the processor unit 250 may implement the creation means 120 and the synchronization means 140.
 In step S711, the processor unit 250 receives an image. Step S711 is similar to step S701.
 In step S712, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S711. Step S712 is similar to step S702.
 In step S713, the processor unit 250 synchronizes a sound with the pseudo-three-dimensional image created in step S712. The processor unit 250 can synchronize the sound with the image using any technique known in the field of moving-image creation.
 The processor unit 250 can synchronize the sound such that, when the synchronized sound is reproduced, it sounds as if it is changing based on the motion in the image. The processor unit 250 can synchronize the sound such that the sound appears to change according to the relationship between a boundary set in the pseudo-three-dimensional image and the image of the target. The boundary may be the three-dimensional representation of the element in the pseudo-three-dimensional image, or it may be something else.
 In step S714, the processor unit 250 displays the pseudo-three-dimensional image created in step S712 on the display unit 230. Step S714 is similar to step S703.
 In step S715, the processor unit 250 reproduces, from the reproduction unit, the sound synchronized in step S713 while the pseudo-three-dimensional image is being displayed in step S714.
 Through the processing 710, in addition to the visual pseudo-three-dimensional effect of the pseudo-three-dimensional image itself, the auditory pseudo-three-dimensional effect of the sound reproduced in time with the motion in the pseudo-three-dimensional image emphasizes the three-dimensional impression of the image.
 For example, the processing 710 may be distributed between the user device 200 and a server device. In this case, for example, steps S711 to S713 may be performed by the server device, and steps S714 and S715 may be performed by the user device 200.
 Although the processing 710 has been described as emphasizing the three-dimensional impression of the image through an auditory pseudo-three-dimensional effect in addition to a visual pseudo-three-dimensional effect, the three-dimensional impression of the image may be emphasized through the auditory pseudo-three-dimensional effect alone. In this case, step S712 is omitted, the sound is synchronized in step S713 with the image received in step S711, and the image received in step S711 is displayed in step S714.
 図8は、画像をディスプレイ上に表示するための処理800の一例を示す。処理800により、専用表示装置(例えば、VRゴーグル、ヘッドマウントディスプレイ等)を用いることなく、仮想現実画像をユーザに提供することができる。処理800は、例えば、ユーザ装置200のプロセッサ部250によって実行される。 FIG. 8 shows an example of a process 800 for displaying an image on a display. Process 800 can provide a virtual reality image to a user without using a dedicated display device (eg, VR goggles, head-mounted display, etc.). Process 800 is performed by processor unit 250 of user device 200, for example.
 ステップS801において、ユーザ装置200の検出部が、ディスプレイに対するユーザの視点の位置を検出する。検出部は、任意の検出手段によってユーザの視点の位置を検出することができる。例えば、検出部は、カメラによって撮影された画像に基づいて、ユーザの視点の位置を検出することができる。ユーザの視点の位置は、例えば、ユーザの眼の位置(より具体的には、例えば、ユーザの両眼の中間点)であり得る。 In step S801, the detection unit of the user device 200 detects the position of the user's viewpoint with respect to the display. The detection unit can detect the position of the user's viewpoint by any detection means. For example, the detection unit can detect the position of the user's viewpoint based on the image captured by the camera. The position of the user's viewpoint may be, for example, the position of the user's eyes (more specifically, for example, the midpoint between the user's eyes).
 ステップS802において、ユーザ装置200のプロセッサ部250が、仮想現実画像として表示されるべき画像を受信し、画像を処理することにより、ディスプレイ上に表示されるべき画像の部分を決定する。ディスプレイ上に表示されるべき画像の部分を決定することは、例えば、以下のステップS8021~ステップS8023によって行われることができる。 At step S802, the processor unit 250 of the user device 200 receives an image to be displayed as a virtual reality image and processes the image to determine the portion of the image to be displayed on the display. Determining the portion of the image to be displayed on the display can be performed, for example, by steps S8021-S8023 below.
 In step S8021, the processor unit 250 sets a virtual sphere. The virtual sphere is a virtual sphere centered on the user's viewpoint whose radius is the distance between the user's viewpoint and the display. For example, as shown in FIG. 5B, when the distance between the user's viewpoint and the display is small, the diameter of the virtual sphere is small, whereas, as shown in FIG. 5C, when that distance is large, the diameter of the virtual sphere is large.
 In step S8022, the processor unit 250 pastes the image to be displayed as a virtual-reality image onto the inner surface of the virtual sphere set in step S8021. The processor unit 250 can paste the image onto the inner surface of the sphere by any processing known in the field of image processing. Here, the image is preferably represented in the equirectangular projection, because an image in the equirectangular projection can be pasted onto the inner surface of a sphere without distortion.
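The equirectangular projection wraps onto a sphere cleanly because its two image axes are simply longitude and latitude. A minimal sketch of the direction-to-image lookup (the function name is ours, not from the application):

```python
import math

def direction_to_equirect_uv(dx, dy, dz):
    """Map a unit-length viewing direction to equirectangular (u, v) in [0, 1].

    Longitude (atan2 of x/z) maps linearly to u, latitude (asin of y) maps
    linearly to v, which is why an equirectangular image covers the full
    sphere without distortion at the seam.
    """
    lon = math.atan2(dx, dz)                   # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2 .. pi/2 (clamped)
    u = (lon / math.pi + 1.0) / 2.0            # 0 .. 1 across 360 degrees
    v = (lat / (math.pi / 2) + 1.0) / 2.0      # 0 .. 1 from bottom to top
    return u, v
```

Looking straight ahead along +z lands at the center of the image, (0.5, 0.5), and a direction 90 degrees to the right lands at u = 0.75.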
 In step S8023, the processor unit 250 identifies the portion of the image pasted on the part of the inner surface of the virtual sphere that corresponds to the display surface of the display. That part of the inner surface is the part that overlaps the display surface when the virtual sphere is virtually placed around the user's viewpoint. The processor unit 250 can, for example, derive the part of the inner surface of the virtual sphere corresponding to the display surface from the relative positional relationship between the user and the display, and can then identify the portion of the image from the relationship between the derived part and the image pasted on the virtual sphere.
 When the portion of the image to be displayed on the display has been determined in this way, the process proceeds to step S803.
 In step S803, the portion of the image determined in step S802 is displayed on the display surface of the display.
 For example, by repeating steps S801 to S803 each time the user changes position relative to the display, an image corresponding to the user's position can be displayed.
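Steps S801 to S803 can be sketched in plain Python as below (a hypothetical illustration; the names and the rectangle parameterization of the display are our assumptions). One consequence of centering the sphere on the viewpoint is that a ray cast from the eye meets the sphere exactly in its normalized direction, so the image lookup depends only on ray direction; the viewpoint-to-display distance instead controls the angular span, and hence which portion, of the image seen through the display:

```python
import math

def _dir_to_uv(dx, dy, dz):
    # Equirectangular lookup: longitude -> u, latitude -> v, both in [0, 1].
    u = (math.atan2(dx, dz) / math.pi + 1.0) / 2.0
    v = (math.asin(max(-1.0, min(1.0, dy))) / (math.pi / 2) + 1.0) / 2.0
    return u, v

def visible_portion(eye, origin, right, down, grid=(3, 3)):
    """Sample the image region seen through a rectangular display 'window'.

    eye: viewpoint (x, y, z); origin: top-left display corner; right/down:
    edge vectors spanning the display surface. For each sample point on the
    display, a ray is cast from the eye and its normalized direction is used
    as the sphere lookup; the sphere radius cancels out of the lookup.
    """
    samples = []
    for j in range(grid[1]):
        for i in range(grid[0]):
            s = i / (grid[0] - 1)
            t = j / (grid[1] - 1)
            p = [origin[k] + s * right[k] + t * down[k] for k in range(3)]
            d = [p[k] - eye[k] for k in range(3)]
            n = math.sqrt(sum(c * c for c in d))
            samples.append(_dir_to_uv(d[0] / n, d[1] / n, d[2] / n))
    return samples
```

Moving the eye toward the display window widens the spread of the sampled (u, v) coordinates, which is exactly the behavior described above: a closer viewer sees a larger portion of the sphere-mapped image.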
 The image displayed by process 800 preserves the perspective perceived by the user. That is, an object that is far away in the virtual-reality image still appears far away whether the user moves toward or away from the display, and an object that is nearby in the virtual-reality image still appears nearby whether the user moves toward or away from the display. In this way, the user U can view the image through the display with a real-world sense of depth. The display may be a dedicated display device (e.g., VR goggles or a head-mounted display), but it may also be an ordinary stationary display or one of the rotary displays 20, 25, 27, etc. described above. That is, the user can view a natural virtual-reality image without wearing a dedicated display device.
 In the examples described above with reference to FIGS. 7A and 7B, the processing of each step shown in FIGS. 7A and 7B and part of the processing shown in FIG. 8 were described as being realized by the processor unit 250 and a program stored in the memory unit 240, but the present invention is not limited to this. At least one of the processing of each step shown in FIGS. 7A and 7B and part of the processing shown in FIG. 8 may instead be realized by a hardware configuration such as a control circuit.
 In the examples described above with reference to FIGS. 7A, 7B, and 8, the processing of each step was described as being performed in a specific order, but the order of the steps is not limited to the one described. The steps may be processed in any logically possible order; other steps may also be added, and/or at least one of the steps shown may be omitted.
 The present invention is not limited to the embodiments described above. It is understood that the scope of the present invention should be construed only by the claims. It is understood that a person skilled in the art can implement an equivalent scope from the description of specific preferred embodiments of the present invention, based on that description and common technical knowledge.
 The present invention is useful in that it can provide, among other things, a method capable of creating a pseudo-three-dimensional image in order to display an image three-dimensionally.
100 System
110 Receiving means
120 Creating means
130 Display means

Claims (25)

  1.  A method for displaying an image three-dimensionally, the method comprising:
     receiving an image that includes an image of an object;
     processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and
     displaying the pseudo-three-dimensional image.
  2.  The method of claim 1, wherein creating the pseudo-three-dimensional image comprises creating a pseudo-three-dimensional moving image as the pseudo-three-dimensional image by rotating the three-dimensional representation of the element around the image of the object.
  3.  The method of claim 1 or claim 2, wherein the three-dimensional representation of the element is added such that one part of the three-dimensional representation of the element is superimposed over the image of the object so that that part of the image of the object is hidden by the three-dimensional representation of the element, and another part of the three-dimensional representation of the element is superimposed under the image of the object so that that other part of the three-dimensional representation of the element is hidden by the image of the object.
  4.  The method of any one of claims 1 to 3, wherein the element includes a plurality of horizontal scan lines, and adding the three-dimensional representation of the element within the image comprises adding a three-dimensional representation of the plurality of horizontal scan lines onto the image of the object.
  5.  The method of any one of claims 1 to 4, wherein creating the pseudo-three-dimensional image comprises generating, from the image, a plurality of images with different viewpoints, and creating a pseudo-three-dimensional moving image as the pseudo-three-dimensional image by combining the plurality of images with different viewpoints in temporal succession.
  6.  The method of any one of claims 1 to 5, wherein the pseudo-three-dimensional image is a pseudo-three-dimensional moving image, the method further comprising:
     synchronizing sound with the pseudo-three-dimensional image; and
     playing the synchronized sound while the pseudo-three-dimensional image is being displayed.
  7.  The method of claim 6, wherein the sound changes based on movement in the image.
  8.  The method of claim 6 or claim 7, wherein the sound changes in response to the image of the object crossing a boundary defined around the image of the object.
  9.  The method of claim 8, wherein the sound changes in response to movement of the object outside the boundary.
  10.  The method of claim 8 or claim 9, wherein the boundary is defined by the three-dimensional representation of the element arranged around the image of the object.
  11.  The method of any one of claims 7 to 10, wherein the change in the sound includes a change in at least one of the loudness, pitch, and timbre of the sound.
  12.  The method of any one of claims 1 to 11, wherein displaying the pseudo-three-dimensional image comprises displaying the pseudo-three-dimensional image on a rotary display in which at least one member rotates about a first axis to form a planar display surface.
  13.  The method of claim 12, wherein the rotary display is configured such that the orientation of the display surface can be changed, the method comprising:
     detecting a user's position relative to the rotary display; and
     changing the orientation of the display surface based on the detected position.
  14.  The method of claim 7, wherein displaying the pseudo-three-dimensional image comprises changing the orientation of the object in the pseudo-three-dimensional image based on the orientation of the display surface, and displaying the pseudo-three-dimensional image on the display surface.
  15.  The method of any one of claims 1 to 11, wherein displaying the pseudo-three-dimensional image comprises displaying the pseudo-three-dimensional image on a rotary display in which at least one member rotates about a first axis and about a second axis substantially perpendicular to the first axis to form a substantially spherical display surface.
  16.  The method of any one of claims 12 to 15, wherein the rotary display has a plurality of members that each rotate to form a plurality of display surfaces.
  17.  A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor and a display unit, the program causing the processor to perform processing comprising:
     receiving an image that includes an image of an object;
     processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and
     displaying the pseudo-three-dimensional image on the display unit.
  18.  A system for displaying an image three-dimensionally, the system comprising:
     receiving means for receiving an image that includes an image of an object;
     creating means for processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the image of the object; and
     display means for displaying the pseudo-three-dimensional image.
  19.  A method for displaying an image three-dimensionally, the method comprising:
     receiving an image;
     synchronizing sound with the image, wherein the sound changes in response to movement in the image;
     displaying the image; and
     playing the synchronized sound while the image is being displayed.
  20.  A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor, a display unit, and a sound output unit, the program causing the processor to perform processing comprising:
     receiving an image;
     synchronizing sound with the image, wherein the sound changes in response to movement in the image;
     displaying the image on the display unit; and
     playing the synchronized sound from the sound output unit while the image is being displayed.
  21.  A system for displaying an image three-dimensionally, the system comprising:
     receiving means for receiving an image;
     synchronizing means for synchronizing sound with the image, wherein the sound changes in response to movement in the image;
     display means for displaying the image; and
     playing means for playing the synchronized sound while the image is being displayed.
  22.  A method of displaying an image on a display, the method comprising:
     detecting the position of a user's viewpoint relative to the display;
     determining, by processing an image, the portion of the image to be displayed on the display, the determining comprising:
      setting a virtual sphere centered on the user's viewpoint and having as its radius the distance between the user's viewpoint and the display;
      pasting the image onto the inner surface of the virtual sphere; and
      identifying the portion of the image pasted on the part of the inner surface of the virtual sphere that corresponds to the display surface of the display; and
     displaying the determined portion of the image on the display surface of the display.
  23.  The method of claim 22, wherein the image is represented in the equirectangular projection.
  24.  The method of claim 22 or claim 23, wherein the display is a stationary display.
  25.  The method of claim 24, wherein the display is a rotary display in which at least one member rotates to form a display surface.
PCT/JP2022/012787 2021-03-22 2022-03-18 Method, program, and system for displaying image three-dimensionally WO2022202700A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021047451 2021-03-22
JP2021-047451 2021-03-22
JP2021092377A JP2022146839A (en) 2021-03-22 2021-06-01 Method, program and system for displaying image three-dimensionally
JP2021-092377 2021-06-01

Publications (1)

Publication Number Publication Date
WO2022202700A1 true WO2022202700A1 (en) 2022-09-29

Family

ID=83397278

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/012787 WO2022202700A1 (en) 2021-03-22 2022-03-18 Method, program, and system for displaying image three-dimensionally

Country Status (1)

Country Link
WO (1) WO2022202700A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62153780A (en) * 1985-12-27 1987-07-08 Kyoritsu Denpa Kk Interlace display apparatus
JP2003216071A (en) * 2002-01-21 2003-07-30 Noritsu Koki Co Ltd Rotary type display device
JP2010238108A (en) * 2009-03-31 2010-10-21 Sharp Corp Device and method for processing video and computer program
JP2013012811A (en) * 2011-06-28 2013-01-17 Square Enix Co Ltd Proximity passage sound generation device
KR20160071797A (en) * 2014-12-12 2016-06-22 삼성전자주식회사 Display apparatus and control method thereof
JP2018056953A (en) * 2016-09-30 2018-04-05 アイシン精機株式会社 Periphery monitoring system
CN212675887U (en) * 2020-08-25 2021-03-09 深圳市洲明科技股份有限公司 3D display device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22775487

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE