WO2022202700A1 - Method, program, and system for displaying an image three-dimensionally - Google Patents

Method, program, and system for displaying an image three-dimensionally

Info

Publication number
WO2022202700A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
dimensional
pseudo
displaying
Prior art date
Application number
PCT/JP2022/012787
Other languages
English (en)
Japanese (ja)
Inventor
ホースーン カン
Original Assignee
株式会社オルツ
ホースーン カン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from application JP2021092377A, published as JP2022146839A
Application filed by 株式会社オルツ and ホースーン カン
Publication of WO2022202700A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 - Details of the operation on graphic patterns
    • G09G5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/122 - Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/302 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume

Definitions

  • The present invention relates to a method, program, and system for displaying an image three-dimensionally.
  • When an image is displayed on a typical display device, it is displayed two-dimensionally, because the display surface of the display device is flat.
  • Special devices have been developed for displaying images in three dimensions (for example, Patent Document 1).
  • An object of the present invention is to provide a method and the like capable of creating a pseudo-three-dimensional image in order to display an image three-dimensionally.
  • The present invention provides, for example, the following items.
  • (Item 1) A method for displaying an image three-dimensionally, comprising: receiving an image containing a target image; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the target image; and displaying the pseudo-three-dimensional image.
  • (Item 2) The method of item 1, wherein creating the pseudo-three-dimensional image includes creating a pseudo-three-dimensional animation as the pseudo-three-dimensional image by rotating the three-dimensional representation of the element around the target image.
  • (Item 3) The method of item 1 or 2, wherein a portion of the three-dimensional representation of the element is superimposed over the target image such that a portion of the target image is hidden by the three-dimensional representation of the element, and another portion of the three-dimensional representation of the element is superimposed under the target image such that that portion is hidden by the target image.
  • (Item 4) The method of any one of items 1-3, wherein the element includes a plurality of horizontal scan lines, and adding the three-dimensional representation of the element within the image includes adding a three-dimensional representation of the plurality of horizontal scan lines onto the target image.
  • (Item 5) The method of any one of items 1-4, wherein creating the pseudo-three-dimensional image includes generating a plurality of images with different viewpoints from the image and temporally successively combining the plurality of images with different viewpoints to create a pseudo-three-dimensional animation as the pseudo-three-dimensional image.
  • (Item 6) The method of any one of items 1-5, wherein the pseudo-three-dimensional image is a pseudo-three-dimensional video, the method further comprising: synchronizing sound with the pseudo-three-dimensional image; and playing the synchronized sound while displaying the pseudo-three-dimensional image.
  • (Item 7) The method of item 6, wherein the sound changes based on movement in the image.
  • (Item 8)
  • (Item 12) The method of any one of items 1-11, wherein displaying the pseudo-three-dimensional image includes displaying the pseudo-three-dimensional image on a rotating display in which at least one member rotates about a first axis to form a planar display surface.
  • (Item 13) The method of item 12, wherein the rotating display is configured such that the orientation of the display surface can be changed, the method further comprising: detecting a user's position relative to the rotating display; and reorienting the display surface based on the detected position.
  • (Item 14) The method of item 13, wherein displaying the pseudo-three-dimensional image includes changing the orientation of the target in the pseudo-three-dimensional image based on the orientation of the display surface and displaying the pseudo-three-dimensional image on the display surface.
  • (Item 15) The method of any one of items 1-11, wherein displaying the pseudo-three-dimensional image includes displaying it on a rotating display in which at least one member rotates about a first axis and about a second axis substantially perpendicular to the first axis to form a substantially spherical display surface.
  • A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor and a display unit, and causing the processor to perform processing comprising: receiving an image containing a target image; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the target image; and displaying the pseudo-three-dimensional image on the display unit.
  • A system for displaying an image three-dimensionally, comprising: receiving means for receiving an image containing a target image; creating means for creating, by processing the image, a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the target image; and display means for displaying the pseudo-three-dimensional image.
  • A storage medium storing a program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor and a display unit, and causing the processor to perform processing comprising: receiving an image containing a target image; processing the image to create a pseudo-three-dimensional image that produces a pseudo-three-dimensional effect by adding, within the image, a three-dimensional representation of an element separate from the target image; and displaying the pseudo-three-dimensional image on the display unit.
  • A storage medium according to item 18B, comprising features according to one or more of the above items.
  • (Item 19) A method for displaying an image three-dimensionally, comprising: receiving an image; synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image; and playing the synchronized sound while displaying the image.
  • (Item 20) A program for displaying an image three-dimensionally, the program being executed in a computer system comprising a processor, a display unit, and a sound output unit, and causing the processor to perform processing comprising: receiving an image; synchronizing sound with the image, wherein the sound changes in response to movement in the image; displaying the image on the display unit; and playing the synchronized sound from the sound output unit while the image is displayed.
  • (Item 20A) The program according to item 20, including features according to one or more of the above items.
  • (Item 21) A system for displaying an image three-dimensionally, comprising: receiving means for receiving an image; synchronizing means for synchronizing sound with the image, the sound changing in response to movement in the image; display means for displaying the image; and reproducing means for playing the synchronized sound while the image is displayed.
  • (Item 22) A method of displaying an image on a display, comprising: detecting the position of a user's viewpoint with respect to the display; determining the portion of the image to be displayed on the display by processing the image, including setting a virtual sphere centered on the user's viewpoint and having a radius equal to the distance between the user's viewpoint and the display, pasting the image onto the inner surface of the virtual sphere, and identifying the portion of the image pasted on the part of the inner surface of the virtual sphere corresponding to the display surface of the display; and displaying the determined portion of the image on the display surface of the display.
  • (Item 23) The method of item 22, wherein the image is represented in an equirectangular projection.
  • (Item 24)
  • FIG. 1A is a diagram showing how an image 10 is displayed.
  • FIG. 1B shows an example three-dimensional representation of the target image 11 in the image 10 shown in FIG. 1A, according to the technique of one embodiment of the present invention.
  • FIG. 1C is a diagram showing an example in which the target image 11 is represented three-dimensionally by further enhancing its perspective in the pseudo-three-dimensional image 20 shown in FIG. 1B.
  • FIG. 1D is a diagram showing another example in which the target image 11 is represented three-dimensionally by further enhancing its perspective in the pseudo-three-dimensional image 20 shown in FIG. 1B.
  • FIG. 2A shows an example three-dimensional representation of the target image 11 in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the invention.
  • FIG. 2B is a diagram showing an example in which the target image 11 is represented three-dimensionally by further enhancing its perspective in the pseudo-three-dimensional image 30 shown in FIG. 2A.
  • FIG. 4A shows an example of an image display device for displaying a pseudo-three-dimensional image.
  • FIG. 4B is a diagram showing an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation of the display surface shown in FIG. 4A.
  • FIG. 4D is a diagram showing an example of an image displayed on the display surface 23 facing the user position shown in FIG. 4C.
  • FIG. 4E is a diagram showing how an image is displayed on the display surface 23 of the rotary display 20.
  • FIG. 4F shows a rotating display 25 in one embodiment of the present invention.
  • FIG. 4G shows a rotating display 27 in another embodiment of the invention.
  • FIG. 5A schematically illustrates an example flow of a technique in one embodiment of the present invention.
  • FIG. 5B is a diagram of the case where the distance between the viewpoint of the user U and the display 20 is small.
  • FIG. 5C is a diagram of the case where the distance between the viewpoint of the user U and the display 20 is large.
  • FIG. 6C is a diagram showing an example of the configuration of the user device 200.
  • FIG. 7 shows an example of a process 700 by the system 100 for displaying an image three-dimensionally.
  • FIG. 7 also illustrates an example of a process 710 by the system 100' for displaying an image three-dimensionally.
  • FIG. 8 shows an example of a process 800 for displaying an image on a display.
  • As used herein, "image" refers to an image that can be displayed on a two-dimensional plane.
  • The image includes not only a "two-dimensional image" containing two-dimensional information (length x width) but also a "three-dimensional image" containing three-dimensional information (length x width x depth).
  • A "three-dimensional image" can be acquired, for example, using an RGB-D camera.
  • A "three-dimensional image" can also be obtained, for example, by estimating depth information for a two-dimensional image and adding the depth information to the two-dimensional image.
  • Images include still images and moving images.
  • A moving image is considered to be a plurality of temporally consecutive still images.
  • As used herein, "pseudo-three-dimensional" refers to a state that is not three-dimensional but appears to be three-dimensional.
  • "Displaying three-dimensionally" means displaying something that is not three-dimensional (for example, something on a two-dimensional plane) as if it were three-dimensional.
  • "Three-dimensional representation" means representing something that is not three-dimensional (for example, something on a two-dimensional plane) as if it were three-dimensional.
  • Three-dimensional representations include representations made by adding shadows, by adding light and shade, and by adding parallax.
  • The "pseudo-three-dimensional effect" refers to the effect of appearing three-dimensional due to a visual illusion (optical illusion). Depending on the viewer's perception, there can be varying degrees of pseudo-three-dimensional effect.
  • A "pseudo-three-dimensional image" refers to an image that produces a "pseudo-three-dimensional effect".
  • As used herein, "target" refers to any object appearing in an image.
  • A target may be, for example, animate or inanimate.
  • A target may be, for example, a human, an animal, or a plant.
  • The inventors of the present invention have developed a method for three-dimensionally representing an image (a two-dimensional image) displayed on a flat display.
  • When this method is used, the image displayed on the flat display appears three-dimensional, and a person viewing the flat display can be given the illusion that the image is displayed in a three-dimensional space.
  • A pseudo-three-dimensional image is an image that produces such an illusory effect. With this method, an image can be represented three-dimensionally even if no three-dimensional model of it exists.
  • FIG. 1A shows how the image 10 is displayed.
  • The image 10 includes an image 11 of a target (a person in this example).
  • The target image 11 appears flat, that is, two-dimensional.
  • FIG. 1B shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of one embodiment of the present invention.
  • In the pseudo-three-dimensional image 20 shown in FIG. 1B, compared to the original image 10, an element different from the target image 11 (a cube 12 in this example) has been added.
  • The cube 12 is added to the image as a three-dimensional representation.
  • For example, the cube 12 is drawn with thicker edges on the side closer to the viewer and thinner edges on the side farther from the viewer.
  • This gives the cube 12 a sense of perspective.
  • The perspective of the cube 12 in turn gives the target image 11 a sense of perspective, so that the target image 11 can appear three-dimensional.
  • Furthermore, the cube 12 is positioned around the target image 11 and overlaps it so that one portion of the cube 12 hides a portion of the target image 11 while another portion of the cube 12 is hidden by the target image 11. This may enhance the perspective of the cube 12 and, in turn, the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the cube 12 readily creates the illusion that the target image 11 is represented three-dimensionally.
  • FIGS. 1C and 1D show examples in which the perspective of the target image 11 is further enhanced in the pseudo-three-dimensional image 20 shown in FIG. 1B to represent the target image 11 three-dimensionally.
  • Here, the pseudo-three-dimensional image 20 is a moving image, and the images shown in FIGS. 1C and 1D can each be considered one frame of the moving image.
  • In this example, the cube 12 is rotated around an axis.
  • The axis is, for example, the central axis passing through the top and bottom surfaces of the cube 12.
  • As the cube rotates, the edges near and far from the viewer change, as shown in FIGS. 1C and 1D, and accordingly which edges are drawn thick and which are drawn thin changes as well. This further enhances the perspective of the rotating cube 12.
  • The enhanced perspective of the cube 12 also enhances the perspective of the target image 11, which may make the target image 11 appear more three-dimensional.
  • Furthermore, the cube 12 is rotated around the target image 11. As the cube 12 rotates, the portion of the cube 12 hiding part of the target image 11 and the portion of the cube 12 hidden by the target image 11 change. This may further enhance the perspective of the cube 12 and, in turn, the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the rotating cube 12 readily creates the illusion that the target image 11 is three-dimensional.
  • The axis can be any axis.
  • Preferably, the axis is one whose rotation enhances the perspective of the cube 12.
  • The axis may be, for example, a central axis passing through side surfaces of the cube 12, an axis (central or off-center) passing through at least one face of the cube 12, or an axis outside the cube 12. A sketch of this rotating-overlay effect is given below.
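As a concrete illustration of the rotating-overlay effect, the following is a minimal sketch, not the patent's own implementation. It assumes a BGR `frame` and a `subject_mask` marking the target image 11 (both names are illustrative), uses OpenCV and NumPy, and draws a wireframe cube whose far edges are hidden behind the subject while near edges pass in front of it, with edge thickness scaled by depth.

```python
import numpy as np
import cv2

def cube_vertices(center, size, angle):
    """8 cube corners rotated by `angle` (radians) about the vertical axis."""
    s = size / 2.0
    corners = np.array([[x, y, z] for x in (-s, s) for y in (-s, s) for z in (-s, s)])
    c, si = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, si], [0.0, 1.0, 0.0], [-si, 0.0, c]])
    return corners @ rot.T + np.asarray(center, dtype=float)

# the 12 edges connect corners that differ in exactly one coordinate
EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (4, 6),
         (5, 7), (6, 7), (0, 4), (1, 5), (2, 6), (3, 7)]

def draw_rotating_cube(frame, subject_mask, angle, center=(0, 0, 400), size=260, f=500):
    h, w = frame.shape[:2]
    verts = cube_vertices(center, size, angle)
    xy = verts[:, :2] * (f / verts[:, 2:3]) + np.array([w / 2.0, h / 2.0])  # pinhole projection
    out = frame.copy()
    # paint edges far-to-near so nearer edges overdraw farther ones
    for a, b in sorted(EDGES, key=lambda e: -(verts[e[0], 2] + verts[e[1], 2])):
        depth = (verts[a, 2] + verts[b, 2]) / 2.0
        thickness = max(1, int(2000.0 / depth))  # nearer edges are drawn thicker
        layer = out.copy()
        p1 = tuple(int(v) for v in xy[a])
        p2 = tuple(int(v) for v in xy[b])
        cv2.line(layer, p1, p2, (255, 255, 255), thickness)
        if depth > center[2]:                                # edge is behind the subject:
            layer[subject_mask > 0] = out[subject_mask > 0]  # keep the subject on top
        out = layer
    return out
```

Calling draw_rotating_cube for successive angle values would yield frames of the kind of pseudo-three-dimensional animation shown in FIGS. 1C and 1D.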
  • In the examples above, the image is represented three-dimensionally by a visual effect produced by an element other than the target image 11. In one embodiment of the present invention, in addition to the visual effect, or instead of the visual effect, an auditory effect can be used to represent the image three-dimensionally. It is assumed that the flat display in this embodiment has a built-in speaker or is connected to a speaker.
  • Here, the image 10 is a moving image that shows how the target moves.
  • The image 10 therefore includes an image 11 of a moving target.
  • For example, in the pseudo-three-dimensional image 20 generated from such an image 10, the target image 11 moves from the state shown in FIG. 1E to the state shown in FIG. 1F.
  • This movement causes the image of the target's arm to appear to be outside the cube 12.
  • At this moment, a sound may be played from the speaker, and the sound can change as the arm extends farther.
  • The sound played at the moment the target's arm image touches the cube 12 creates the illusion that the cube 12 exists in three-dimensional space, and the change in the sound suggests the distance between the cube 12 and the arm image.
  • The change in sound can be, for example, a change in at least one of loudness, pitch, and timbre.
  • For example, the sound can become louder as the target's arm image extends farther from the cube 12.
  • Alternatively, the sound may become quieter, higher or lower in pitch, or closer to a different timbre as the target's arm image extends farther from the cube 12.
  • Such an auditory effect emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the target image 11. A sketch of one possible distance-to-sound mapping is given below.
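For example, the distance-to-sound mapping could look like the following minimal sketch (an assumption, not text from the patent); `distance_beyond_px`, the number of pixels by which the arm image protrudes past the cube's footprint, and the mapping constants are illustrative.

```python
def sound_params(distance_beyond_px, max_px=200.0):
    """Map how far the arm extends beyond the cube 12 to (gain, pitch ratio)."""
    if distance_beyond_px <= 0:
        return 0.0, 1.0                    # inside the boundary: silence
    t = min(distance_beyond_px / max_px, 1.0)
    gain = t                               # louder as the arm extends farther
    pitch_ratio = 1.0 + 0.5 * t            # and up to 50% higher in pitch
    return gain, pitch_ratio

gain, pitch = sound_params(80)             # arm 80 px outside -> gain 0.4, pitch x1.2
```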
  • The sound may vary in response to other motions of the target image outside the cube 12, in addition to or instead of the movement of the target's arm image away from the cube 12.
  • In that case, the sound can be varied to match the movement.
  • For example, the sound can become louder (or softer) as the target's arm image moves down in an arc outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves up.
  • In this way, the presence of the cube 12 can be emphasized by playing and varying sound according to the relationship between the cube 12 and at least a portion of the target image 11, making it easier for the viewer to have the illusion that the target image 11 is represented three-dimensionally.
  • The relationship between the cube 12 and at least a portion of the target image 11 may be, for example, the distance between the cube 12 and that portion (a static relationship) as described above, the movement of that portion relative to the cube 12 (a dynamic relationship), or another relationship.
  • In the examples above, the relationship with the cube 12 changes as the target image 11 moves, but the present invention is not limited to this.
  • For example, when the cube 12 itself moves, the relationship between the cube 12 and at least a portion of the target image 11 also changes, and the sound may be played and varied according to that changing relationship.
  • In FIGS. 1H-1J, the cube 12 is rotated around an axis, as in FIGS. 1C and 1D.
  • In FIG. 1H, the target image 11 is contained within the cube 12, so no sound is played from the speaker.
  • When the rotation brings the target's arm image outside the cube 12, as shown in FIG. 1I, a sound may be played from the speaker.
  • The sound played at the moment the cube 12 touches the target's arm image creates the illusion that the cube 12 exists in three-dimensional space, and the change in the sound suggests the distance between the cube 12 and the arm image.
  • The change in sound can be, for example, a change in at least one of loudness, pitch, and timbre.
  • For example, the sound can become louder as the cube 12 moves farther from the target's arm image.
  • Alternatively, the sound may become quieter, higher or lower in pitch, or closer to a different timbre as the cube 12 moves farther from the target's arm image.
  • Such an auditory effect emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the target image 11.
  • The sound may also be changed according to the movement of the target image outside the cube 12.
  • In that case, the sound can be changed to match the movement.
  • For example, the sound can become louder (or softer) as the target's arm image moves down in an arc outside the cube 12 in the direction of the arrow, and softer (or louder) as it moves up.
  • The sound may also become louder or softer, higher or lower in pitch, or closer to a different timbre as the target's arm image extends farther from the cube 12.
  • Such an auditory effect also emphasizes the presence of the cube 12, which in turn further emphasizes the perspective of the target image 11.
  • In the examples above, the cube 12 serves as the boundary that triggers and shapes the sound, but the boundary is not limited to this.
  • The boundary can be any boundary as long as it is defined near, for example around, the target image.
  • The boundary may be visible, such as the cube 12, or invisible. If the boundary is not visible, the auditory effect renders the image three-dimensionally without relying on a visual effect.
  • The boundary can have any shape. For example, it may be a shape that surrounds the target image (e.g., spherical, elliptical, cylindrical, prismatic, etc.) or a shape that does not surround the target image (e.g., planar, curved, hemispherical, etc.).
  • The boundary may or may not change over time. For example, as in the example above where the boundary is represented by the cube 12, the boundary may rotate about an axis over time.
  • The sound may be, for example, a sound directly related to the image, a sound somewhat related to the image, or a sound unrelated to the image.
  • Preferably, the sound is at least somewhat related to the image, and more preferably directly related to the image.
  • A sound directly related to the image may be, for example, sound that was synchronized to the original image 10 (e.g., if the original image 10 was a still image with sound or a moving image with sound).
  • Sounds somewhat related to the image are, for example, sounds associated with its content (e.g., wing beats or chirping for a bird image, engine or horn sounds for a car image). Even if sound was synchronized to the original image 10, a sound other than that synchronized sound can be used.
  • In the examples above, visual effects and/or auditory effects are used to represent images three-dimensionally. In the example below as well, a visual effect is used to represent an image three-dimensionally.
  • FIG. 2A shows an example three-dimensional representation of the image 11 of the object in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the invention.
  • In the pseudo-three-dimensional image 30 shown in FIG. 2A, compared to the original image 10, an element different from the target image 11 is added and displayed.
  • In this example, horizontal scan lines 13 have been added over the target image 11.
  • The horizontal scan lines 13 are added so as to express the contour shape of the target, based on three-dimensional information contained in the image 10 or derived from the image 10.
  • For example, the horizontal scan lines 13 are drawn as curving along the curved surface of the target's face and along the contours of the target's nose.
  • Because the horizontal scan lines 13 express the contour shape of the target, the target image 11 acquires a sense of perspective and can appear three-dimensional.
  • Although the target image 11 itself is a two-dimensional image, the presence of the horizontal scan lines 13 readily creates the illusion that the target image 11 is represented three-dimensionally. A sketch of depth-driven scan lines is given below.
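The following is a minimal sketch (assumed, not the patent's code) of how such contour-following scan lines might be drawn: a per-pixel depth map (here `depth`, with larger values meaning closer, an illustrative convention) displaces each horizontal line upward where the subject is closer, so the lines appear to wrap around the face and nose.

```python
import numpy as np
import cv2

def add_scan_lines(frame, depth, spacing=14, strength=8.0, color=(0, 255, 0)):
    """Overlay horizontal scan lines displaced by the local depth."""
    h, w = frame.shape[:2]
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-6)   # normalize depth to [0, 1]
    out = frame.copy()
    for y0 in range(0, h, spacing):
        ys = y0 - d[y0] * strength                       # closer pixels push the line up
        pts = np.stack([np.arange(w), ys], axis=1).astype(np.int32)
        cv2.polylines(out, [pts], isClosed=False, color=color, thickness=1)
    return out
```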
  • FIG. 2B shows an example in which the target image 11 is three-dimensionally expressed by further enhancing the perspective of the target image 11 in the pseudo three-dimensional image 30 shown in FIG. 2A.
  • In FIG. 2B, the cube 12 described above with reference to FIG. 1B has been added around the target image 11.
  • The perspective of the cube 12 in turn gives the target image 11 a sense of perspective, so that the target image 11 can appear three-dimensional.
  • Furthermore, the cube 12 can be rotated around an axis, as described above with reference to FIGS. 1C and 1D. The pseudo-three-dimensional image 30 then becomes a moving image. Rotating the cube 12 around the axis can further enhance the perspective of the target image 11.
  • Furthermore, the presence of the cube 12 can also be emphasized by playing and varying sound according to the relationship between the cube 12 and at least a portion of the target image 11, further enhancing the perspective of the target image 11.
  • Although the target image 11 itself is a two-dimensional image, the presence of the horizontal scan lines 13, the presence of the cube 12 or the rotating cube 12, and the presence of sound played and varied according to the relationship between the cube 12 and a portion of the target image 11 make the target image 11 all the more likely to give the illusion of being represented three-dimensionally.
  • In the examples above, the element added to the image overlaps the target image 11, but the added element does not necessarily have to overlap the target image 11.
  • The element can be added anywhere in the image as long as it produces a pseudo-three-dimensional effect.
  • For example, the added element can be placed adjacent to the target image 11, as shown in the pseudo-three-dimensional image 20' of FIG. 2C.
  • The three-dimensional representation of the added element introduces perspective into the image and may also give the target image 11 a sense of perspective. This allows the target image 11 to appear three-dimensional.
  • In the examples above, one element is added, but the number of added elements is not limited to this.
  • For example, multiple elements 12 can be added to the image 10, as shown in the pseudo-three-dimensional image 20'' of FIG. 2D.
  • The added elements may, for example, each be rotated around their respective axes, or, as shown in the pseudo-three-dimensional image 20'' of FIG. 2D, at least some of the elements may be rotated around a common axis.
  • The presence or rotation of the added elements creates perspective in the image and may also give the target image 11 a sense of perspective. This allows the target image 11 to appear three-dimensional.
  • The pseudo-three-dimensional image gives a strong impression of being a virtual image.
  • FIGS. 3A-3B show an example three-dimensional representation of the target image 11 in the image 10 shown in FIG. 1A, according to the technique of another embodiment of the present invention.
  • In this example, the pseudo-three-dimensional image is a moving image, and the still images 41 and 42 shown in FIGS. 3A and 3B can each be considered one frame of the moving image.
  • In FIG. 3A, a still image 41 created from the image 10 shown in FIG. 1A, viewing the target from a first line-of-sight direction, is displayed.
  • The technique for creating images with different line-of-sight directions from a given image may be a technique known in the art. For example, based on three-dimensional information contained in the image or derived from the image, images with different line-of-sight directions can be created from a given image. For example, machine learning techniques can be used to create images with different viewing directions from an image. For example, if the image 10 is a moving image, a still image can be generated for each frame of the moving image.
  • In this example, the first line-of-sight direction views the target from farther left than the line-of-sight direction of the image 10 shown in FIG. 1A.
  • In FIG. 3B, a still image 42 created from the image 10 shown in FIG. 1A, viewing the target from a second line-of-sight direction, is displayed.
  • The method of creating images with different line-of-sight directions from a given image may be a method known in the art.
  • In this example, the second line-of-sight direction views the target from farther right than the line-of-sight direction of the image 10 shown in FIG. 1A.
  • The created still images 41 and 42 are combined temporally successively to generate and display a pseudo-three-dimensional moving image.
  • A pseudo-three-dimensional moving image whose frames are still images from different viewpoints gives the target image 11 a sense of perspective due to the parallax between the viewpoints. This allows the target image 11 to appear three-dimensional.
  • The still images 41 and 42 may appear alternately and repeatedly. This can generate a moving image of arbitrary length.
  • If the image 10 is a moving image, the still images 41 and 42 created from each frame may appear consecutively in frame order in the generated moving image.
  • In that case, the frame rate can be set to any value.
  • For example, maintaining the frame rate of the image 10 generates a pseudo-three-dimensional moving image twice the length of the image 10, since each frame yields two still images.
  • Doubling the frame rate instead generates a pseudo-three-dimensional moving image of the same length as the image 10.
  • The elements described above may be added to the generated pseudo-three-dimensional moving image, and sound may be played along with it. This can enhance the perspective of the target image 11. If the element 12 is added, the element 12 can be rotated around an axis, which can further enhance the perspective of the target image 11.
  • In this way, the target image can be represented three-dimensionally.
  • With the technique described above, even if no 3D model of the target exists and only a 2D image of the target is available, the target image can be represented three-dimensionally. A sketch of the viewpoint-interleaving step is given below.
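A minimal sketch of the viewpoint-interleaving step follows. It is an assumption, not the patent's code; `make_view(frame, direction)` stands in for the novel-view-synthesis step (e.g., an AI model that shifts the camera left or right) and is hypothetical.

```python
def interleave_views(frames, make_view):
    """Alternate left/right viewpoint renderings of each source frame in time.

    Keeping the original frame rate doubles the clip length; doubling the
    playback frame rate instead keeps the original duration."""
    out = []
    for frame in frames:                       # frame order is preserved
        out.append(make_view(frame, "left"))   # still image 41
        out.append(make_view(frame, "right"))  # still image 42
    return out

# For a single still image, repeating the pair yields an animation of any length:
# wiggle = interleave_views([photo] * 60, make_view)
```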
  • FIG. 4A shows an example of an image display device for displaying a pseudo three-dimensional image.
  • In this example, the image display device is a rotary display 20 (also called a "hologram display") in which at least one member 21 rotates to form a display surface. The at least one member 21 is rotatable around a rotation axis C1, and by rotating the at least one linear member 21, a planar display surface can be formed.
  • A light source (for example, an LED) is arranged on the at least one member 21. Light emission from the light source is controlled according to the rotation angle of the at least one member 21, so that an image can be projected onto the display surface by the afterimage (persistence-of-vision) effect, as sketched below.
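The persistence-of-vision principle can be sketched as follows (an assumed illustration, not the product's firmware): for the member's current rotation angle, the image is sampled along the corresponding radial line, and those values drive the LEDs; sweeping through all angles fast enough lets the afterimage form the full picture.

```python
import numpy as np

def led_values(image, angle_rad, n_leds):
    """Sample the image along the member's current radial line (polar scan)."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    r = np.linspace(0, min(cx, cy) - 1, n_leds)        # LED positions along the member
    xs = (cx + r * np.cos(angle_rad)).astype(int)
    ys = (cy + r * np.sin(angle_rad)).astype(int)
    return image[ys, xs]                                # one color per LED

# Firmware loop (schematic): read the member angle from an encoder each tick,
# then drive the LED strip with led_values(current_frame, angle, n_leds).
```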
  • In the rotary display 20, the background can be seen through the gaps left by the rotating at least one member 21, so the image appears as if it were floating in the air.
  • The frame rate of the image displayed on the rotary display 20 depends on the rotation speed of the at least one member 21.
  • Typically, the frame rate of images displayed on the rotary display 20 is significantly lower than the frame rate of images displayed on typical display devices.
  • The frame rate of the rotary display 20 can be, for example, from about 20 fps to about 40 fps, such as about 30 fps.
  • Accordingly, the image displayed on the rotary display 20 can be rougher than the image displayed on a typical display device. Displaying a rough image enhances the impression that the image on the rotary display 20 is a virtual image.
  • The rotary display 20 also has a main body 22.
  • The main body 22 is configured to be rotatable around a rotation axis C2.
  • By rotating the main body 22 around the rotation axis C2, the orientation of the display surface formed by the at least one member 21 can be changed.
  • For example, the rotary display 20 can detect the position of the user viewing it by a detection means (not shown) and rotate the main body 22 around the rotation axis C2 so that the display surface faces the detected position of the user U, as sketched below.
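A minimal sketch of this orientation control follows (assumed, not the patent's implementation); `set_body_angle` is a hypothetical actuator command, and positions are 2D floor coordinates.

```python
import math

def face_user(user_xy, display_xy, set_body_angle):
    """Rotate the main body 22 about axis C2 so the display faces the user."""
    dx = user_xy[0] - display_xy[0]
    dy = user_xy[1] - display_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))   # direction from display to user
    set_body_angle(bearing)                      # command the body rotation
    return bearing
```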
  • FIG. 4B shows an example of an image displayed on the display surface 23 of the rotary display 20 in the orientation of the display surface shown in FIG. 4A.
  • In this example, the target image 11 is displayed on the display surface 23, as in the example shown in FIG. 1A.
  • The target faces the front.
  • The user U viewing the display surface therefore sees the front of the target.
  • The display surface 23 can also display the pseudo-three-dimensional images described above with reference to FIGS. 1B to 3B.
  • By displaying the pseudo-three-dimensional image on the display surface 23, through which the background of the rotary display 20 is visible, the pseudo-three-dimensional image appears as if it were floating in the air, and its three-dimensional feel can be emphasized.
  • Further, by displaying the pseudo-three-dimensional image at the significantly low frame rate of the rotary display 20, the impression that the pseudo-three-dimensional image is a virtual image is enhanced.
  • When the user U moves as shown in FIG. 4C, the rotary display 20 uses the detection means (not shown) to detect the new position of the user U and rotates the main body 22 clockwise around the rotation axis C2 so as to change the orientation of the display surface.
  • As a result, the display surface of the rotary display 20 faces the user U, and the user U can see the display surface even after moving.
  • FIG. 4D shows an example of an image displayed on the display surface 23 facing the user position shown in FIG. 4C.
  • Because the user position shown in FIG. 4C is to the left of the user position shown in FIG. 4A, the user U can now view from the left side the target that faced forward in the image displayed in FIG. 4A. Therefore, on the display surface 23 facing the user position shown in FIG. 4C, an image 11' of the front-facing target viewed from the left can be displayed. This may give the user U the illusion that the target in the target images 11 and 11' is a three-dimensional object. The illusion can be further enhanced by directing the display surface of the rotary display 20 toward the user U at both the user position shown in FIG. 4A and that shown in FIG. 4C: because the display surface always faces the user U, the user U does not easily perceive that the display surface is planar.
  • In this way, the user U can face the display surface 23 from any angular position relative to the rotary display 20 and view an image without distortion. For example, from whatever angular position the user U1 views the rotary display 20, the image of a horse displayed on the display surface 23 is presented to the user U1 without distortion, as shown in FIG. 4E(a).
  • By contrast, if the display surface 23 of the rotary display 20 does not face the user U2, the user U2 sees a distorted image. For example, the horse image displayed on the display surface 23 is presented to the user U2 distorted, as shown in FIG. 4E(b).
  • The rotary display 25 rotates at least one member 26 around a first rotation axis C1 and also rotates the at least one member 26 around a second rotation axis C2.
  • In this way, a three-dimensional display surface can be formed.
  • The second rotation axis C2 may be substantially perpendicular to the first rotation axis C1.
  • The direction of rotation around the first rotation axis C1 is indicated by RC1.
  • The direction of rotation around the second rotation axis C2 is indicated by RC2.
  • A light source (e.g., an LED) is arranged on the at least one member 26.
  • Light emission from the light source is controlled according to the rotation angle of the at least one member 26, so that an image can be projected onto the substantially spherical display surface by the afterimage effect.
  • In this way, the rotary display 25 can form a substantially spherical display surface, as shown in FIG. 4F(b). Except for the configuration described above, the rotary display 25 may have the same configuration as the rotary display 20 described above.
  • The pseudo-three-dimensional images described above with reference to FIGS. 1B to 3B can also be displayed on the substantially spherical display surface of the rotary display 25.
  • By displaying the pseudo-three-dimensional image on the display surface of the rotary display 25, through which the background is visible, the pseudo-three-dimensional image appears as if it were floating in the air, and its three-dimensional feel can be emphasized.
  • Further, by displaying the pseudo-three-dimensional image at the significantly low frame rate of the rotary display 25, the impression that it is a virtual image is enhanced.
  • Furthermore, since the undistorted pseudo-three-dimensional image can be viewed from any angular position relative to the rotary display 25, its three-dimensional feel can be emphasized.
  • The rotary displays 20 and 25 described above rotate at least one member 21 or 26 around a common axis to form a single planar or substantially spherical display surface, but the invention is not limited to this.
  • FIG. 4G shows an example of a rotary display 27 in one embodiment.
  • The rotary display 27 is configured to form a first display surface 28 by rotating at least one first member and a second display surface 29 by rotating at least one second member. The first and second members can each be rotated around two axes to form substantially spherical display surfaces. In the example shown in FIG. 4G, the first member and the second member are rotated around a common axis (the body axis). Except for the configuration described above, the rotary display 27 may have a configuration similar to that of the rotary display 20 or 25 described above.
  • With a plurality of display surfaces, the display area can be expanded. For example, a separate image may be displayed on each display surface, or one image may be displayed across the plurality of display surfaces. Multiple display surfaces expand the range of video expression.
  • The pseudo-three-dimensional images described above with reference to FIGS. 1B to 3B can also be displayed on the substantially spherical display surfaces of the rotary display 27.
  • By displaying the pseudo-three-dimensional image on the display surfaces of the rotary display 27, through which the background is visible, the pseudo-three-dimensional image appears as if it were floating in the air, and its three-dimensional feel can be emphasized. Further, by displaying the pseudo-three-dimensional image at the significantly low frame rate of the rotary display 27, the impression that it is a virtual image is enhanced. Furthermore, since the undistorted pseudo-three-dimensional image can be viewed from any angular position relative to the rotary display 27, its three-dimensional feel stands out. A variety of pseudo-three-dimensional expressions are also possible using the plurality of display surfaces.
  • In the examples above, the pseudo-three-dimensional image is displayed on the special rotary displays 20, 25, and 27, but the image display device for displaying the pseudo-three-dimensional image is not limited to these.
  • The pseudo-three-dimensional image can be displayed on any other image display device.
  • Preferably, the image display device is a transparent display through which the background can be seen, because this enhances the pseudo-three-dimensional effect of the displayed pseudo-three-dimensional image.
  • Preferably, the image display device is a display with a significantly low frame rate; displaying the pseudo-three-dimensional image at a low frame rate enhances the impression that it is a virtual image and can thereby enhance the pseudo-three-dimensional effect.
  • FIG. 5A schematically illustrates an example flow of a technique in one embodiment of the invention.
  • First, an image 51 that is the basis of the image displayed as a virtual-reality image is acquired.
  • The image 51 is preferably represented in a specific projection.
  • The specific projection may be, for example, an equirectangular projection.
  • In the equirectangular projection, lines of latitude and longitude are at right angles and intersect at regular intervals, so that distances along the meridians are represented correctly.
  • The equirectangular projection is often used when displaying virtual-reality images. As shown in FIG. 5A, an image 51 represented in the equirectangular projection appears distorted when viewed as a flat image.
  • Next, the image 51 is pasted onto the inner surface of a virtual sphere 52.
  • By pasting the equirectangular image 51 onto the inner surface of the virtual sphere 52, a natural image without distortion can be generated.
  • The virtual sphere 52 is a virtual sphere centered on the viewpoint of the user U, whose radius is the distance between the viewpoint of the user U and the display 20 that displays the virtual-reality image. For example, when the distance between the user U's viewpoint and the display 20 is small, the diameter of the virtual sphere 52 is small; when the distance is large, the diameter of the virtual sphere 52 is large.
  • The distance between the viewpoint of the user U and the display 20 can be measured, for example, by a detection means (not shown) that the display 20 may have.
  • For example, the detection means can detect the position of the user U's eyes and measure the distance between the user U's viewpoint and the display 20 using techniques known in the field of distance measurement.
  • Then, the portion of the image 51 pasted on the part of the inner surface of the virtual sphere 52 corresponding to the display surface of the display 20 is identified, and an image 53 of the identified portion is displayed on the display surface of the display 20.
  • The user U thus sees the image 53.
  • When the user U is close to the display 20, the diameter of the virtual sphere 52 is small, so the portion of the image pasted on the part of the inner surface of the virtual sphere 52 corresponding to the display surface of the display 20 is relatively large.
  • Conversely, when the user U is far from the display 20, the diameter of the virtual sphere 52 is large, so the portion of the image pasted on the part of the inner surface corresponding to the display surface is relatively small.
  • As a result, an object in the foreground of the image is perceived as close to the user U whether the user U approaches or moves away from the display, and an object in the background is perceived as far from the user U whether the user U approaches or moves away from the display. Since the perspective of the image displayed on the display 20 is maintained in this way, the user U can view the image through the display 20 with a real-world feel; in other words, the user U can experience a virtual-reality image through the display 20. A sketch of this sampling step is given below.
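A minimal sketch (assumed, not the patent's implementation) of determining the displayed portion: rays are cast from the user's viewpoint through each display pixel, converted to longitude/latitude, and used to sample the equirectangular image 51. Because the sphere radius equals the viewpoint-display distance, moving closer widens the sampled field of view, preserving perspective as described above.

```python
import numpy as np

def view_through_display(equirect, dist, disp_w, disp_h, out_w=640, out_h=360):
    """equirect: H x W x 3 image; dist, disp_w, disp_h in the same physical units."""
    H, W = equirect.shape[:2]
    xs = np.linspace(-disp_w / 2, disp_w / 2, out_w)
    ys = np.linspace(-disp_h / 2, disp_h / 2, out_h)
    x, y = np.meshgrid(xs, ys)
    z = np.full_like(x, dist)                 # display plane in front of the eye
    lon = np.arctan2(x, z)                    # longitude of each ray
    lat = np.arctan2(-y, np.hypot(x, z))      # latitude (screen y grows downward)
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return equirect[v, u]

# Small dist -> wide angular window (large image portion is shown);
# large dist -> narrow window (small portion), as described above.
```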
  • In the example above, the image is displayed on the rotary display 20, but the present invention is not limited to this.
  • The image can be displayed on any display as long as the distance between the user U and the display can be measured.
  • The technique of three-dimensionally representing an image and the technique of providing a virtual-reality image described above can be implemented, for example, by the system 100 for three-dimensionally displaying an image, which is described below.
  • FIG. 6A shows an example of the configuration of a system 100 for three-dimensional display of images.
  • The system 100 comprises a receiving means 110, a creating means 120, and a display means 130.
  • The receiving means 110 is configured to receive images.
  • The receiving means 110 can receive images in any manner.
  • The received image contains a target image.
  • The receiving means 110 may receive an image from outside the system 100 or from inside the system 100 (for example, from a storage means that the system may have).
  • The receiving means 110 may, for example, receive the image from a storage medium connected to the system 100, or via a network connected to the system 100.
  • The type of network does not matter; any network such as the Internet or a LAN can be used.
  • The received image can be in any data format.
  • The received image may be a two-dimensional image containing two-dimensional information (length x width) or a three-dimensional image containing three-dimensional information (length x width x depth).
  • The received image is passed to the creating means 120.
  • The creating means 120 is configured to create a pseudo-three-dimensional image by processing the image.
  • For example, the creating means 120 can create a pseudo-three-dimensional image by processing the image so as to add, within the image, a three-dimensional representation of an element separate from the target image (see, for example, FIGS. 1B-2D).
  • The three-dimensional representation of the element includes, for example, at least one of adding shadows to the element, adding light and shade to the element, giving the element different sizes, and giving the element perspective.
  • The processing by the creating means 120 may be image processing known in the art.
  • The creating means 120 can create a pseudo-three-dimensional animation, for example, by rotating the three-dimensional representation of the element added in the image around the target image.
  • A pseudo-three-dimensional moving image is preferable in that it enhances the pseudo-three-dimensional effect of the target image.
  • For example, the creating means 120 can add the three-dimensional representation of the element such that one portion of it is superimposed over the target image, hiding part of the target image, and another portion is superimposed under the target image, hidden by the target image. This can further enhance the pseudo-three-dimensional effect of the target image.
  • The element can be any object and can have any shape, size, color, and the like.
  • For example, the creating means 120 can add a three-dimensional representation of a plurality of horizontal scan lines onto the target image.
  • The three-dimensional representation of the plurality of horizontal scan lines can be scan lines drawn along the three-dimensional contour of the target.
  • The three-dimensional contour shape of the target can be determined, for example, based on three-dimensional information contained in the image or derived from the image.
  • The process of deriving three-dimensional information from an image can be performed, for example, by techniques known in the art.
  • For example, the process of deriving 3D information from an image can be performed using an AI model capable of estimating depth information from an image, as sketched below.
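One concrete way to do this, offered as an assumption rather than the patent's specified method, is a monocular depth-estimation model such as MiDaS; the resulting relative depth map can then drive the scan-line and multi-viewpoint processing described above.

```python
import cv2
import torch

# Load a small MiDaS depth-estimation model and its matching preprocessing
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("target.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))                            # 1 x h x w prediction
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic",
        align_corners=False).squeeze().numpy()              # relative depth map
```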
  • The creating means 120 can also, for example, generate a plurality of images with different viewpoints from the image (see, for example, FIGS. 3A and 3B) and combine the plurality of images temporally successively to create a pseudo-three-dimensional moving image.
  • The plurality of images from different viewpoints can be created, for example, based on three-dimensional information contained in the image or derived from the image.
  • For example, images with different viewpoints can be created by setting a virtual viewpoint and estimating, based on the three-dimensional information, how the scene looks from that virtual viewpoint.
  • The plurality of images from different viewpoints can also be created using techniques known in the art, for example, using an AI model capable of creating image pairs with parallax.
  • The pseudo-three-dimensional image created by the creating means 120 is passed to the display means 130.
  • The display means 130 is configured to display the pseudo-three-dimensional image.
  • The display means 130 can be any display means as long as it can display an image.
  • The display means 130 is, for example, a liquid crystal display, an LED display, or the like, but is not limited to these.
  • The display means 130 may be a rotating display in which at least one member rotates to form a display surface.
  • The rotating display can be, for example, the rotary display 20, 25, or 27 described above.
  • The display means 130 can be configured so that the orientation of its display surface can be changed.
  • The display means 130 may change the orientation of the display surface using any mechanism.
  • The system 100 may further comprise a detection means configured to detect the position of the user positioned in front of the display means 130.
  • The detection means can be any sensor.
  • The detection means can be, for example, a camera.
  • The system 100 can change the orientation of the display surface of the display means 130 so that it faces the position of the user detected by the detection means. In this way, the user can always see the display surface of the display means 130 from the front.
  • FIG. 6B shows an example configuration of a system 100' for three-dimensionally displaying an image in another embodiment.
  • The system 100' has the same configuration as the system 100, except that it includes means for synchronously reproducing sound that enhances the pseudo-three-dimensional effect of the pseudo-three-dimensional image.
  • The same reference numerals are given to the same configurations as those described above with reference to FIG. 6A, and detailed description thereof is omitted.
  • The system 100' comprises a receiving means 110, a creating means 120, a display means 130, a synchronizing means 140, and a reproducing means 150.
  • The receiving means 110 is configured to receive an image.
  • The received image is passed to the creating means 120.
  • The creating means 120 is configured to create a pseudo-three-dimensional image by processing the image.
  • The pseudo-three-dimensional image created by the creating means 120 is passed to the display means 130 and the synchronizing means 140.
  • The display means 130 is configured to display the pseudo-three-dimensional image.
  • The synchronizing means 140 is configured to synchronize sound with the image.
  • The image may be the pseudo-three-dimensional image created by the creating means 120 or the image received by the receiving means 110.
  • The synchronizing means 140 can synchronize the sound with the image using any technique known in the art of moving-image creation.
  • The sound can be any sound.
  • The sound may be, for example, a sound directly related to the image, a sound somewhat related to the image, or a sound unrelated to the image.
  • Preferably, the sound is at least somewhat related to the image, and more preferably directly related to the image.
  • A sound directly related to the image may be, for example, sound that was already synchronized to the image received by the receiving means 110 (e.g., if the image was a still image with sound or a moving image with sound).
  • Sounds somewhat related to the image are, for example, sounds associated with its content (e.g., wing beats or chirping for a bird image, engine or horn sounds for a car image). Even if sound was synchronized to the image, a sound other than that synchronized sound can be used.
  • Synchronization means 140 can synchronize sounds such that when the synchronized sounds are played, they appear to change based on motion in the image.
  • the synchronization means 140 may be arranged such that the sound is changing according to the relationship between the three-dimensional representation of the elements in the pseudo-three-dimensional image and the image of interest, as described above with reference to FIGS. 1E-1J. Sounds can be synchronized so that they can be heard.
  • the change in sound can be, for example, at least one of loudness, pitch of sound, and timbre of sound.
  • for example, the sound can be synchronized such that the sound is played when at least a portion of the target image touches the three-dimensional representation of the element.
  • the sound can be synchronized such that it becomes louder or quieter as at least a portion of the target image moves farther from the three-dimensional representation of the element, such that it becomes higher or lower as at least a portion of the target image moves farther from the three-dimensional representation of the element, or such that its timbre shifts toward a different timbre as at least a portion of the target image moves farther from the three-dimensional representation of the element.
  • the sound can be synchronized such that, when at least a portion of the target image moves outside the three-dimensional representation of the element, the sound appears to change with that movement.
  • the synchronizing means 140 can also synchronize the sound so that the sound changes according to the relationship between a boundary set in the pseudo-three-dimensional image and the target image. This corresponds to the example above in which the three-dimensional representation of the elements in the pseudo-three-dimensional image is invisible.
  • for example, the sound can be synchronized such that the sound is played in response to at least a portion of the target image crossing the boundary.
  • the sound can be synchronized such that it becomes louder or quieter as at least a portion of the target image extends farther from the boundary, such that it becomes higher or lower as at least a portion of the target image extends farther from the boundary, or such that its timbre shifts toward a different timbre as at least a portion of the target image extends farther from the boundary.
  • the sound can be synchronized such that, when at least a portion of the target image moves outside the boundary, the sound appears to change with that movement.
  • the boundary can have any shape. For example, it may be a shape that surrounds the target image (e.g., spherical, elliptical, cylindrical, prismatic, etc.) or a shape that does not surround the target image (e.g., planar, curved, hemispherical, etc.).
  • the boundary may change over time or may not change over time.
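  • as a hedged illustration of the sound changes described above, the sketch below maps the target image's distance outside a spherical boundary to loudness and pitch parameters. The spherical boundary, the parameter ranges, and all names are illustrative assumptions; an actual implementation could feed these values to any audio engine.

```python
import math

def sound_params(target_pos, center, radius, max_dist=2.0):
    """Map the target image's distance outside a spherical boundary to
    (triggered, gain, pitch_factor): `triggered` becomes True once the
    target crosses the boundary, gain falls and pitch rises as the
    target moves farther outside, per the behaviour described above."""
    outside = max(0.0, math.dist(target_pos, center) - radius)
    triggered = outside > 0.0
    gain = max(0.0, 1.0 - outside / max_dist)                      # quieter farther out
    pitch_factor = 1.0 + 0.5 * min(outside, max_dist) / max_dist   # higher farther out
    return triggered, gain, pitch_factor

# Example: spherical boundary of radius 1.0 at the origin; target just outside.
print(sound_params((1.2, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))
```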
  • the sound synchronized with the image by the synchronization means 140 is passed to the reproduction means 150.
  • the reproduction means 150 is configured to reproduce sound synchronized with the image while the display means 130 is displaying the image.
  • the reproduction means 150 can be any reproduction means as long as it can reproduce sound in time with the image being displayed.
  • the reproduction means 150 is, for example, a speaker.
  • the speaker may be built into the display means 130 or may be externally attached to the display means 130.
  • the systems 100, 100' described above may be implemented in the user equipment 200, for example.
  • FIG. 6C shows an example of the configuration of the user device 200.
  • the user device 200 can be any terminal device, such as a smartphone, a tablet computer, smart glasses, a smart watch, a laptop computer, or a desktop computer.
  • the user device 200 includes a communication interface unit 210, an input unit 220, a display unit 230, a memory unit 240, and a processor unit 250.
  • the communication interface unit 210 controls communication of the user device 200 with the outside.
  • the processor unit 250 of the user device 200 can receive information from outside the user device 200 via the communication interface unit 210 and can transmit information to the outside of the user device 200.
  • for example, the processor unit 250 of the user device 200 can receive an image via the communication interface unit 210 and can transmit a pseudo-three-dimensional image to the outside of the user device 200.
  • Communication interface unit 210 may control communications in any manner.
  • the receiving means 110 of the system 100 can be implemented by the communication interface section 210.
  • the input unit 220 allows the user to input information into the user device 200. It does not matter in what manner the input unit 220 allows the user to input information into the user device 200. For example, if the input unit 220 is a touch panel, the user may input information by touching the touch panel. Alternatively, if the input unit 220 is a mouse, the user may input information by operating the mouse. Alternatively, if the input unit 220 is a keyboard, the user may input information by pressing keys on the keyboard. Alternatively, if the input unit 220 is a microphone, the user may input information by voice.
  • the display unit 230 can be any display for displaying information.
  • the display means 130 of the system 100 may be implemented by the display unit 230.
  • the memory unit 240 stores programs for executing processes in the user device 200 and data required for executing the programs.
  • the memory unit 240 stores, for example, part or all of a program for three-dimensionally displaying an image (for example, a program for realizing processing shown in FIGS. 7A and 7B described later).
  • the memory unit 240 may store, for example, part or all of a program for displaying an image on the display (for example, a program for realizing processing shown in FIG. 8, which will be described later).
  • the memory unit 240 may store applications that implement arbitrary functions. Here, it does not matter how the program is stored in the memory unit 240.
  • the program may be pre-installed in the memory unit 240.
  • the program may be installed in the memory unit 240 by being downloaded via the network 500.
  • the program may be stored on a computer-readable tangible storage medium.
  • Memory unit 240 may be implemented by any storage means.
  • the processor unit 250 controls the operation of the user device 200 as a whole.
  • the processor unit 250 reads a program stored in the memory unit 240 and executes the program. This allows the user device 200 to function as a device that executes desired steps.
  • the processor unit 250 may be implemented by a single processor or may be implemented by multiple processors.
  • the creating means 120 of the system 100 may be implemented by the processor unit 250.
  • the synchronizing means 140 of the system 100' may be implemented by the processor unit 250.
  • the user device 200 can include, for example, a detection unit configured to detect the position of a user positioned in front of the display unit 230.
  • the detection unit can be any sensor.
  • the detection unit can be, for example, a camera.
  • the detection means of the system 100 may be implemented by the detection unit.
  • the user device 200 may include, for example, a reproduction unit (not shown) for reproducing sound.
  • the reproduction unit can be any speaker for reproducing sound.
  • the reproducing means 150 of the system 100 may be implemented by a reproducing section.
  • although each component of the user device 200 is provided within the user device 200 in the example shown in FIG. 6C, the present invention is not limited to this. Any of the components of the user device 200 may be provided outside the user device 200.
  • the display unit 230 can be provided outside the user device 200 (that is, the display unit 230 is an external display).
  • in that case, each hardware component may be connected via an arbitrary network; the type of network does not matter.
  • each hardware component may be connected, for example, via a LAN, wirelessly, or by wire.
  • the user device 200 is not limited to a particular hardware configuration. For example, it is within the scope of the present invention to configure the processor unit 250 with analog circuits instead of digital circuits. The configuration of the user device 200 is not limited to that described above as long as its functions can be realized.
  • the components of the system 100 may all be provided on the user device 200 side as described above, or may be distributed between the user device 200 and a server device. When the components of the system 100 are distributed between the user device 200 and the server device, for example, the user device 200 comprises the display means 130 (and the reproducing means 150), and the server device comprises the receiving means 110 and the creating means 120 (and the synchronizing means 140).
  • FIG. 7A shows an example of processing 700 by system 100 for three-dimensional display of images.
  • the case where the system 100 is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200 will be described as an example.
  • processor unit 250 may implement creating means 120 .
  • in step S701, the processor unit 250 receives an image.
  • the image includes a target image.
  • the processor unit 250 can receive, for example, an image transmitted from outside the system 100 via the communication interface unit 210.
  • in step S702, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S701.
  • the processor unit 250 can create the pseudo-three-dimensional image by processing the image so as to add, in the image, a three-dimensional representation of elements separate from the target image (see, for example, FIGS. 1B-2D).
  • the three-dimensional representation of the elements includes, for example, at least one of shading the elements, lighting the elements, giving the elements different sizes, and giving the elements perspective.
  • the processing by the processor unit 250 may be image processing known in the art.
  • the processor unit 250 can create a pseudo-three-dimensional moving image, for example, by rotating the three-dimensional representation of the elements added in the image around the target image.
  • a pseudo-three-dimensional moving image is preferable in that it enhances the pseudo-three-dimensional effect of the target image.
  • the processor unit 250 can, for example, add the three-dimensional representation of the element such that a part of the three-dimensional representation of the element is superimposed over the target image (so that a part of the target image is hidden by the three-dimensional representation of the element) and another part of the three-dimensional representation of the element is superimposed under the target image (so that that other part of the three-dimensional representation of the element is hidden by the target image). This can further enhance the pseudo-three-dimensional effect of the target image.
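  • a minimal sketch of this over/under layering follows, assuming Pillow and an RGBA subject image whose alpha channel masks the target image; the elliptical ring stands in for the three-dimensional representation of the element, and all names are illustrative, not part of the disclosure.

```python
from PIL import Image, ImageDraw

def composite_ring(subject_rgba, center, rx, ry, width=8):
    """Composite an elliptical ring around a subject so that its rear
    half is hidden by the subject and its front half hides part of the
    subject -- the over/under layering described above.
    subject_rgba: RGBA image whose alpha channel masks the target."""
    w, h = subject_rgba.size
    cx, cy = center
    back = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    front = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    box = (cx - rx, cy - ry, cx + rx, cy + ry)
    # Rear half of the ring (upper arc) goes behind the subject.
    ImageDraw.Draw(back).arc(box, 180, 360, fill=(255, 255, 255, 255), width=width)
    # Front half of the ring (lower arc) is drawn on top of the subject.
    ImageDraw.Draw(front).arc(box, 0, 180, fill=(255, 255, 255, 255), width=width)
    out = Image.alpha_composite(back, subject_rgba)  # subject hides rear arc
    out = Image.alpha_composite(out, front)          # front arc hides subject
    return out
```

  • rotating the arc split over successive frames would yield the pseudo-three-dimensional moving image mentioned above.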
  • the processor unit 250 can add, for example, a three-dimensional representation of a plurality of horizontal scan lines onto the target image.
  • the three-dimensional representation of the plurality of horizontal scan lines can be scan lines drawn along the three-dimensional contour shape of the object.
  • the processor unit 250 can determine the three-dimensional contour shape of the object based on the three-dimensional information contained in the image or derived from the image.
  • the process of deriving three-dimensional information from images can be performed, for example, by techniques known in the art.
  • the process of deriving 3D information from images can be performed using an AI model capable of estimating depth information from images.
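  • the following sketch illustrates, under stated assumptions, scan lines displaced by a depth map (for example, one estimated by a monocular depth model as mentioned above); it is only one plausible rendering of scan lines that follow the object's three-dimensional contour, and the function and parameter names are assumptions.

```python
import numpy as np

def scanline_overlay(depth, n_lines=24, amplitude=12):
    """Render horizontal scan lines that ride over a depth map, lifting
    each line where the object bulges toward the viewer.
    depth: HxW float array in [0, 1] (1 = closest to the viewer).
    Returns an HxW uint8 mask of the scan lines."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y0 in np.linspace(0, h - 1, n_lines):
        for x in range(w):
            y = int(y0 - amplitude * depth[int(y0), x])  # lift by depth
            if 0 <= y < h:
                mask[y, x] = 255
    return mask
```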
  • the processor unit 250 can, for example, generate a plurality of images with different viewpoints from the image (see, for example, FIGS. 3A and 3B) and create a pseudo-three-dimensional moving image by displaying the plurality of images with different viewpoints successively over time.
  • the processor unit 250 can create a plurality of images with different viewpoints based on, for example, three-dimensional information contained in the images or three-dimensional information derived from the images.
  • the plurality of images from different viewpoints can be created using techniques known in the art, for example, using an AI model capable of creating image pairs with parallax.
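  • as a non-authoritative sketch of such viewpoint generation, the code below forward-warps an image horizontally in proportion to per-pixel depth to approximate parallax; a real system (or the AI model mentioned above) would also handle occlusions and hole filling.

```python
import numpy as np

def shift_view(image, depth, shift_px):
    """Warp an HxWx3 image toward a new viewpoint by shifting each
    pixel horizontally in proportion to its depth (closer pixels move
    more). A crude forward warp; holes are left unfilled."""
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip((xs + shift_px * depth[y]).astype(int), 0, w - 1)
        out[y, new_x] = image[y, xs]
    return out

# A pseudo-three-dimensional moving image: frames swept across a small
# parallax range, displayed successively over time.
# frames = [shift_view(img, depth, s) for s in np.linspace(-8, 8, 32)]
```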
  • in step S703, the processor unit 250 displays the pseudo-three-dimensional image created in step S702 on the display unit 230.
  • the processor unit 250 can, for example, display the pseudo-three-dimensional image created in step S702 on the display unit 230 as it is.
  • alternatively, the processor unit 250 may display the pseudo-three-dimensional image on the display unit 230 while changing the orientation of the object in the pseudo-three-dimensional image according to the orientation of the display surface of the display unit 230.
  • thereby, an image corresponding to the orientation of the user with respect to the display unit 230 is displayed on the display unit 230.
  • the image displayed on the display unit 230 while changing the orientation of the object according to the orientation of the display surface of the display unit 230 is not limited to the pseudo-three-dimensional image created in step S702.
  • for example, when the image received in step S701 is a three-dimensional image, the orientation of the object in the image received in step S701 can be changed and the image can be displayed on the display unit 230.
  • in this case as well, the object in the image appears to be a three-dimensional object, so the displayed image can result in a pseudo-three-dimensional image.
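  • a minimal sketch of such orientation-dependent display, assuming the object is available as 3-D points and the display's yaw angle is known; the rendering step and the choice of a single rotation axis are simplifications for illustration.

```python
import numpy as np

def counter_rotate(points, display_yaw_deg):
    """Rotate an object's 3-D points opposite to the display's yaw so
    that the rendered view changes as the display surface turns,
    keeping the object apparently fixed in world space.
    points: Nx3 array of object-space coordinates."""
    a = np.radians(-display_yaw_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return points @ rot.T
```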
  • the process 700 may be distributed to both the user device 200 and the server device.
  • for example, steps S701 and S702 can be performed by the server device, and step S703 can be performed by the user device 200.
  • FIG. 7B shows an example of processing 710 by system 100' for displaying an image three-dimensionally.
  • here, the case where the system 100' is implemented by the user device 200 and the processing is executed by the processor unit 250 of the user device 200 will be described as an example.
  • the processor unit 250 may implement the creating means 120 and the synchronizing means 140.
  • in step S711, the processor unit 250 receives an image.
  • step S711 is similar to step S701.
  • in step S712, the processor unit 250 creates a pseudo-three-dimensional image by processing the image received in step S711.
  • step S712 is similar to step S702.
  • in step S713, the processor unit 250 synchronizes sound with the pseudo-three-dimensional image created in step S712.
  • Processor unit 250 can synchronize sound with images using any technique known in the art of motion picture production.
  • the processor unit 250 can synchronize the sounds such that when the synchronized sounds are played, they appear to change based on motion in the image.
  • the processor unit 250 can synchronize the sounds so that the sounds change according to the relationship between the boundary set in the pseudo three-dimensional image and the target image.
  • the boundary may or may not be the three-dimensional representation of the elements in the pseudo-three-dimensional image.
  • in step S714, the processor unit 250 displays the pseudo-three-dimensional image created in step S712 on the display unit 230.
  • step S714 is similar to step S703.
  • in step S715, the processor unit 250 reproduces the sound synchronized in step S713 from the reproduction unit while the pseudo-three-dimensional image is being displayed in step S714.
  • the processing 710 adds an auditory pseudo-three-dimensional effect through the sound reproduced in time with the motion in the pseudo-three-dimensional image, thereby further enhancing the three-dimensional appearance of the image.
  • processing 710 may be distributed to both the user device 200 and the server device.
  • for example, steps S711 to S713 can be performed by the server device, and steps S714 and S715 can be performed by the user device 200.
  • alternatively, step S712 may be omitted; in this case, sound is synchronized with the image received in step S711 in step S713, and the image received in step S711 is displayed in step S714.
  • FIG. 8 shows an example of a process 800 for displaying an image on a display.
  • Process 800 can provide a virtual reality image to a user without using a dedicated display device (eg, VR goggles, head-mounted display, etc.).
  • Process 800 is performed by processor unit 250 of user device 200, for example.
  • in step S801, the detection unit of the user device 200 detects the position of the user's viewpoint with respect to the display.
  • the detection unit can detect the position of the user's viewpoint by any detection means.
  • the detection unit can detect the position of the user's viewpoint based on the image captured by the camera.
  • the position of the user's viewpoint may be, for example, the position of the user's eyes (more specifically, for example, the midpoint between the user's eyes).
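  • as an illustrative sketch only, the viewpoint could be approximated with an off-the-shelf face detector such as OpenCV's bundled Haar cascade; the eye-height heuristic below is an assumption, and a production system would instead track both eyes and take their midpoint as described above.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_viewpoint(frame):
    """Approximate the user's viewpoint as a point near eye height in
    the largest detected face of a camera frame.
    Returns (x, y) in pixel coordinates, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    return (x + w // 2, y + h // 3)  # roughly at eye height
```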
  • in step S802, the processor unit 250 of the user device 200 receives an image to be displayed as a virtual reality image and processes the image to determine the portion of the image to be displayed on the display. Determining the portion of the image to be displayed on the display can be performed, for example, by steps S8021-S8023 below.
  • in step S8021, the processor unit 250 sets a virtual sphere.
  • the virtual sphere is a virtual sphere whose center is the user's viewpoint and whose radius is the distance between the user's viewpoint and the display. For example, when the distance between the user's viewpoint and the display is small, as shown in FIG. 5B, the diameter of the virtual sphere is small, while when the distance between the user's viewpoint and the display is large, as shown in FIG. 5C, the diameter of the virtual sphere is large.
  • in step S8022, the processor unit 250 pastes the image to be displayed as a virtual reality image onto the inner surface of the virtual sphere set in step S8021.
  • the processor unit 250 can apply an image to the inner surface of the sphere by any processing known in the field of image processing.
  • the image is preferably represented by the equirectangular projection method. This is because an image represented by the equirectangular projection can be pasted on the inner surface of the sphere without distortion.
  • in step S8023, the processor unit 250 identifies the portion of the image pasted on the inner surface of the virtual sphere that corresponds to the display surface of the display.
  • the portion of the inner surface of the virtual sphere that corresponds to the display surface of the display is the portion that overlaps the display surface of the display when the virtual sphere is virtually arranged around the user's viewpoint.
  • the processor unit 250 can, for example, derive the inner surface portion of the virtual sphere corresponding to the display surface of the display from the relative positional relationship between the user and the display. Then, the processor unit 250 can specify the portion of the image from the relationship between the derived portion and the image pasted on the virtual sphere.
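  • the sketch below illustrates steps S8021-S8023 under simplifying assumptions (display centred on the viewing axis, equirectangular panorama, metric pixel size); because the virtual sphere is centred on the viewpoint, the ray through each display pixel directly indexes the image pasted on the sphere's inner surface. All names are illustrative.

```python
import numpy as np

def display_portion(pano, disp_w, disp_h, pixel_size, view_dist):
    """Sample the part of an equirectangular panorama seen on a flat
    display through a virtual sphere centred on the user's viewpoint.
    pano: HxWx3 equirectangular image; pixel_size: metres per display
    pixel; view_dist: viewpoint-to-display distance in metres."""
    ph, pw = pano.shape[:2]
    ys, xs = np.mgrid[0:disp_h, 0:disp_w]
    # Ray from the viewpoint through each display pixel; the display is
    # centred on the z-axis at distance view_dist (+x right, +y up).
    x = (xs - disp_w / 2) * pixel_size
    y = (disp_h / 2 - ys) * pixel_size
    z = np.full_like(x, view_dist, dtype=float)
    norm = np.sqrt(x * x + y * y + z * z)
    lon = np.arctan2(x, z)          # longitude in [-pi, pi]
    lat = np.arcsin(y / norm)       # latitude in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (pw - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (ph - 1)).astype(int)
    return pano[v, u]
```

  • note that a smaller view_dist makes the display subtend a wider angle of the sphere, consistent with the small-sphere case of FIG. 5B and the perspective-maintaining behaviour described below.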
  • when the portion of the image to be displayed on the display has been determined in this way, the process proceeds to step S803.
  • in step S803, the portion of the image determined in step S802 is displayed on the display surface of the display.
  • the image displayed by process 800 maintains the perspective perceived by the user. That is, objects that are far away in the virtual reality image still appear far away whether the user moves closer to or farther from the display, and objects that are close in the virtual reality image still appear close whether the user moves closer to or farther from the display. In this way, the user U can see the image through the display with a sense of the real world.
  • the display may be a dedicated display device (e.g., VR goggles, a head-mounted display, etc.), but may also be a general stationary display or one of the above-described rotary displays 20, 25, and 27. That is, the user can view a natural virtual reality image without wearing a dedicated display device.
  • although the processing of each step shown in FIGS. 7A and 7B and part of the processing shown in FIG. 8 have been described as being realized by the processor unit 250 and a program stored in the memory unit 240, the present invention is not limited to this. At least part of the processing of each step shown in FIGS. 7A and 7B and of the processing shown in FIG. 8 may be realized by a hardware configuration such as a control circuit.
  • the present invention is useful in that it can provide a method and the like capable of creating a pseudo-three-dimensional image in order to three-dimensionally display an image.

Abstract

The purpose of the present invention is to provide a method and the like capable of creating a pseudo-three-dimensional image in order to display an image three-dimensionally. The present invention provides a method for displaying an image three-dimensionally, the method comprising: receiving an image including a target image; creating a pseudo-three-dimensional image having a pseudo-three-dimensional effect by processing the image so as to add, in the image, a three-dimensional representation of elements other than the target image; and displaying the pseudo-three-dimensional image.
PCT/JP2022/012787 2021-03-22 2022-03-18 Procédé, programme et système pour afficher une image en trois dimensions WO2022202700A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021047451 2021-03-22
JP2021-047451 2021-03-22
JP2021-092377 2021-06-01
JP2021092377A JP2022146839A (ja) 2021-03-22 2021-06-01 Method, program, and system for displaying an image three-dimensionally

Publications (1)

Publication Number Publication Date
WO2022202700A1 (fr)

Family

ID=83397278

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/012787 WO2022202700A1 (fr) 2021-03-22 2022-03-18 Procédé, programme et système pour afficher une image en trois dimensions

Country Status (1)

Country Link
WO (1) WO2022202700A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62153780A (ja) * 1985-12-27 1987-07-08 Kyoritsu Denpa Kk Interlaced display device
JP2003216071A (ja) * 2002-01-21 2003-07-30 Noritsu Koki Co Ltd Rotary display device
JP2010238108A (ja) * 2009-03-31 2010-10-21 Sharp Corp Video processing device, video processing method, and computer program
JP2013012811A (ja) * 2011-06-28 2013-01-17 Square Enix Co Ltd Proximity passing sound generation device
KR20160071797A (ko) * 2014-12-12 2016-06-22 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
JP2018056953A (ja) * 2016-09-30 2018-04-05 Aisin Seiki Co., Ltd. Surroundings monitoring device
CN212675887U (zh) * 2020-08-25 2021-03-09 Shenzhen Unilumin Technology Co., Ltd. 3D display device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22775487; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22775487; Country of ref document: EP; Kind code of ref document: A1)