WO2013146385A1 - Display apparatus and program - Google Patents

Display apparatus and program

Info

Publication number
WO2013146385A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
display unit
unit
displayed
Prior art date
Application number
PCT/JP2013/057511
Other languages
French (fr)
Japanese (ja)
Inventor
Hidenori Kuribayashi
Masaki Otsuki
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2013146385A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • the present invention relates to a display device and a program.
  • the display method as described above may limit the viewpoint from which the observer can recognize a stereoscopic image (three-dimensional image).
  • there is a problem that the range in which the observer can recognize a stereoscopic image cannot be made wide.
  • the present invention has been made to solve the above-described problems, and an object of the present invention is to provide a display device that can widen the range in which an observer can recognize a stereoscopic image.
  • a display device includes a first display unit that displays a first image including a first subject, and a second display unit that is portable by the user, transmits light of the first image displayed by the first display unit, displays a second image including a second subject corresponding to the first subject, and displays the second image so that at least a part of the first subject and the second subject overlap each other.
  • the display device includes a display unit that is portable by the user, transmits first light including light from a first subject, displays a second image including a second subject corresponding to the first subject, and displays the second image so that at least a part of the first subject and the second subject overlap each other.
  • a program causes a computer of a display device, which includes a first display unit that displays a first image including a first subject and a second display unit that is portable by the user, transmits light of the first image displayed by the first display unit, and displays a second image including a second subject corresponding to the first subject, to execute a display procedure for displaying the second image so that at least a part of the first subject and the second subject overlap each other.
  • a display device includes a mounting unit that is mounted on an observer's head, and a display unit, provided on the mounting unit, that transmits first light including light from a first subject and displays a second image corresponding to the first subject, for the observer who observes the first subject and the second image via the display unit.
  • a display device is a display device mounted on the head of an observer, and includes a display unit that displays, by an eyepiece optical system, a second image indicating an edge portion of a displayed first image, and transmits light of the first image in the direction in which the first image is displayed.
  • the display device includes a first display portion that displays a first image and transmits incident light, and displays a second image that indicates an edge portion of the first image.
  • a display device includes a supply unit that supplies a first image to a first display unit that displays the first image and transmits incident light, a setting unit that sets a second image indicating an edge portion of the first image, and a second display unit that displays the second image set by the setting unit.
  • according to the present invention, it is possible to display a wide range that the observer can recognize as a stereoscopic image (three-dimensional image).
  • FIG. 1 is a configuration diagram illustrating an example of a configuration of a display system 100 according to the present embodiment.
  • the display system 100 of this embodiment includes an image information supply device 2 and a display device 10.
  • the display device 10 includes a first display unit 11 and a second display unit 12.
  • an XYZ orthogonal coordinate system is set, and the positional relationship of each part will be described with reference to this XYZ orthogonal coordinate system.
  • a direction in which the first display unit 11 displays an image is a positive direction of the Z axis, and orthogonal directions on a plane perpendicular to the Z axis direction are an X axis direction and a Y axis direction, respectively.
  • the X-axis direction is the horizontal direction of the first display unit 11
  • the Y-axis direction is the vertical direction of the first display unit 11.
  • the image information supply device 2 supplies the first image information to the first display unit 11 and also supplies the second image information to the second display unit 12.
  • the first image information is information for displaying the first image P11 displayed on the first display unit 11.
  • the second image information is information for displaying the second image P12 displayed on the second display unit 12, and is image information of the edge image PE generated based on the first image information.
  • the edge image PE is an image showing the edge portion E in the first image P11. The edge image PE will be described later with reference to FIG.
  • the second image information includes image information indicating the left-eye image P12L and image information indicating the right-eye image P12R, as will be described later with reference to FIG.
  • the display device 10 includes the first display unit 11 and the second display unit 12, and based on the first image information acquired from the image information supply device 2, the first image P11 is displayed. While displaying, based on the 2nd image information acquired from the image information supply apparatus 2, the 2nd image P12 is displayed.
  • the first display unit 11 includes a first display surface 110 that displays an image in the (+Z) direction. Based on the first image information acquired from the image information supply device 2, the first image P11 is displayed on the first display surface 110. The first light beam R11 emitted from the first image P11 displayed on the first display surface 110 is visually recognized as an optical image by the observer 1 (also referred to as a user in the following description) sitting on the chair 3. The observer 1 sits on the chair 3 provided at a position away from the first display surface 110 in the (+Z) direction by a predetermined distance, and observes the first display surface 110 in the (−Z) direction.
  • FIG. 2 is a perspective view showing an example of the configuration of the second display unit 12 of the present embodiment.
  • the second display unit 12 is a display unit provided in the transmissive head-mounted display 50, which can transmit light.
  • the second display unit 12 displays an image in front of the eyes of the observer 1 wearing the head-mounted display 50. That is, the second display unit 12 is a transmissive display that is portable by the user.
  • the second display unit 12 includes an optical system (not shown) such as a lens, and displays a virtual image to the observer 1.
  • the second display unit 12 includes a left eye display unit 12L and a right eye display unit 12R.
  • when the left eye display unit 12L of the second display unit 12 is worn by the observer 1, it displays the left-eye image P12L included in the second image information supplied from the image information supply device 2 so that the left eye L of the observer 1 can visually recognize it. Similarly, the right eye display unit 12R of the second display unit 12 displays the right-eye image P12R included in the second image information so that the right eye R of the observer 1 can visually recognize it.
  • the second display unit 12 also transmits, in the transmission direction, the first light beam R11 (light) of the first image displayed by the first display unit 11 when the observer 1 sits on the chair 3 and observes the first display unit 11.
  • the transmission direction is the (+Z) direction. That is, the left eye display unit 12L of the second display unit 12 transmits the left-eye first light beam R11L of the incident first light beam R11. Similarly, the right eye display unit 12R of the second display unit 12 transmits the right-eye first light beam R11R of the incident first light beam R11. In this way, the left eye display unit 12L transmits the incident left-eye first light beam R11L and emits the left-eye second light beam R12L.
  • the left-eye second light beam R12L and the left-eye first light beam R11L are visually recognized, as corresponding optical images, by the left eye of the observer 1 observing in the (−Z) direction.
  • the right eye display unit 12R transmits the incident right-eye first light beam R11R and emits the right-eye second light beam R12R. That is, the right-eye second light beam R12R and the right-eye first light beam R11R are visually recognized, as corresponding optical images, by the right eye of the observer 1 observing in the (−Z) direction.
  • FIG. 3 is a schematic diagram illustrating an example of the first image P11 in the present embodiment.
  • FIG. 4 is a schematic diagram illustrating an example of the second image P12 in the present embodiment.
  • the first image P11 is, for example, an image showing a square pattern as shown in FIG.
  • the four sides constituting the quadrangle can each be edge portions, but in the following description, for convenience, the left-side edge portion E1 indicating the left side of the quadrangle and the right-side edge portion E2 indicating the right side are described as the edge portion E.
  • the second image P12 is, for example, an image including a left side edge image PE1 showing a left side edge part E1 of a square pattern and a right side edge image PE2 showing a right side edge part E2 as shown in FIG.
  • an edge portion (which may be simply expressed as an edge or an edge region) is a portion where the brightness (for example, luminance) of adjacent or neighboring pixels in the image changes suddenly.
  • the edge portion E is a portion including a high frequency component in the image (for example, the first image P11).
  • the edge portion E is a portion extracted by a filter (for example, a band pass filter) that allows a predetermined frequency component to pass through the image (for example, the first image P11).
  • the edge portion E indicates not only the theoretical zero-width line segment on the left side or the right side of the quadrangle shown in FIG. 3, but also, for example, the surrounding region having a finite width corresponding to the resolution of the second display unit 12.
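The band-pass extraction of the edge portion E described above can be sketched numerically. The following is an illustrative difference-of-Gaussians filter, not the embodiment's actual filter; the kernel widths, threshold, and test image are assumptions:

```python
import numpy as np

def edge_image(img: np.ndarray, sigma_lo: float = 1.0, sigma_hi: float = 2.0,
               thresh: float = 0.1) -> np.ndarray:
    """Extract an edge image as the band-pass (difference-of-Gaussians) response."""
    def gauss1d(sigma):
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def blur(a, sigma):
        k = gauss1d(sigma)
        # separable Gaussian blur: rows, then columns
        a = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, a)
        a = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, a)
        return a

    band = blur(img, sigma_lo) - blur(img, sigma_hi)  # band-pass component
    return (np.abs(band) > thresh).astype(float)      # keep the high-frequency edge region

# a bright square on a dark background: the edge image responds along the square's sides
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
pe = edge_image(img)
```

The threshold turns the band-pass response into a finite-width edge region, matching the description that the edge portion has a finite width rather than being a zero-width line.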
  • the second image P12 includes the left-eye image P12L displayed by the left eye display unit 12L of the second display unit 12 and the right-eye image P12R displayed by the right eye display unit 12R of the second display unit 12.
  • the left-eye image P12L is an image including a left-side edge image PE1L and a right-side edge image PE2L.
  • the right-eye image P12R is an image including a left-side edge image PE1R and a right-side edge image PE2R. Since the left-eye image P12L and the right-eye image P12R are the same image, in the following description they are collectively referred to as the second image P12 unless otherwise distinguished.
  • FIG. 5 is a schematic diagram illustrating an example of an image displayed by the display device 10 according to the present embodiment.
  • the first display unit 11 displays the first image P11 as a virtual image in the (+ Z) direction so that the viewer 1 can visually recognize it.
  • the second display unit 12 displays the second image P12 in the (+ Z) direction so that the viewer 1 can visually recognize it.
  • the second image P12 is displayed at a position that is a predetermined distance Lp away from the position where the first image P11 is displayed in the (+ Z) direction.
  • the second display unit 12 is a display unit included in the transmissive head mounted display 50 that transmits light. Therefore, the first image P11 displayed on the first display unit 11 and the second image P12 displayed on the second display unit 12 are visually recognized by the observer 1 so as to overlap each other.
  • the predetermined distance Lp is a distance between the depth position where the first image P11 is displayed and the depth position where the second image P12 is displayed.
  • the depth position is a position in the Z-axis direction. The predetermined distance Lp is determined in advance based on the depth position where the first image P11 is displayed and the depth position of the observer 1.
  • the first image P11 displayed by the first display unit 11 and the second image P12 displayed by the second display unit 12 are images whose display timings are synchronized with each other.
  • that is, the second display unit 12 displays the second image P12 corresponding to the first image P11 in synchronization with the display timing of the first image P11 displayed by the first display unit 11.
  • the second display unit 12 displays the left-eye image P12L so that the left-side edge portion E1 in the first image P11 displayed by the first display unit 11 and the left-side edge image PE1L of the left-eye image P12L corresponding to that edge portion are visually recognized in correspondence with each other.
  • similarly, the second display unit 12 displays the right-eye image P12R so that the right-side edge portion E2 in the first image P11 displayed by the first display unit 11 and the right-side edge image PE2R of the right-eye image P12R corresponding to that edge portion are visually recognized in correspondence with each other.
  • the left eye display unit 12L of the second display unit 12 displays the left-eye image P12L as a virtual image so that at least a part of the first image P11 and the left-side edge image PE1L are visually recognized by the left eye L of the observer 1 in an overlapping manner.
  • likewise, the left eye display unit 12L displays the left-eye image P12L as a virtual image so that at least a part of the first image P11 and the right-side edge image PE2L of the left-eye image P12L are visually recognized by the left eye L of the observer 1 in an overlapping manner.
  • for example, when a person image is displayed in the first image P11 and an edge image PE indicating the contour of the person is displayed in the left-eye image P12L, the left eye display unit 12L displays the left-eye image P12L so that the edge image PE is visually recognized only over the contour portion of the person's hand.
  • the left eye display unit 12L may not display the edge image PE for a portion other than the hand of the person.
  • that is, the left eye display unit 12L may refrain from displaying the edge image PE so that it overlaps the portions of the person other than the hand.
  • the left eye display unit 12L of the second display unit 12 displays the left-eye image P12L as a virtual image so that the left-side edge portion E1 of the quadrangle indicated by the first image P11 and the left-side edge image PE1L of the left-eye image P12L are visually recognized by the left eye L of the observer 1 on the (−X) side of that edge portion (that is, outside the quadrangle).
  • likewise, the left eye display unit 12L displays the left-eye image P12L as a virtual image so that the right-side edge portion E2 of the quadrangle indicated by the first image P11 and the right-side edge image PE2L of the left-eye image P12L are visually recognized by the left eye L of the observer 1 on the (−X) side of that edge portion (that is, inside the quadrangle).
  • the right eye display unit 12R of the second display unit 12 displays the right-eye image P12R as a virtual image so that the right-side edge portion E2 of the quadrangle indicated by the first image P11 and the right-side edge image PE2R of the right-eye image P12R are visually recognized by the right eye R of the observer 1 on the (+X) side of that edge portion (that is, outside the quadrangle).
  • likewise, the right eye display unit 12R displays the right-eye image P12R as a virtual image so that the left-side edge portion E1 of the quadrangle indicated by the first image P11 and the left-side edge image PE1R of the right-eye image P12R are visually recognized by the right eye R of the observer 1, in an overlapping manner, on the (+X) side of that edge portion (that is, inside the quadrangle).
  • in this way, the second display unit 12 transmits the light of the displayed first image P11 in the transmission direction, and displays the second image P12 corresponding to the first image P11 in the transmission direction.
  • the first image P11 displayed by the first display unit 11 and the second image P12 displayed by the second display unit 12 are images having different sizes.
  • the interval between the left edge portion E1 and the right edge portion E2 of the first image P11 is different from the interval between the left edge image PE1L and the right edge image PE2L of the second image P12.
  • here, the interval (first interval) between the left-side edge portion E1 and the right-side edge portion E2 of the first image P11 is wider than the interval (second interval) between the left-side edge image PE1L and the right-side edge image PE2L of the second image P12.
  • when the first image P11 and the second image P12 are viewed in an overlapping manner from the position of the left eye L of the user 1, the first display unit 11 and the second display unit 12 display the first image P11 and the second image P12 so that the apparent intervals are the same. That is, the first display unit 11 and the second display unit 12 display the first image P11 and the second image P12 so that their apparent sizes are the same when viewed from the position of the left eye L of the user 1.
  • alternatively, when the first image P11 and the second image P12 are viewed in an overlapping manner from the position of the left eye L of the user 1, the first display unit 11 and the second display unit 12 may display the first image P11 with an apparent size slightly smaller than the apparent size of the second image P12.
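The "same apparent size" condition above reduces to similar-triangle geometry: an image on a plane closer to the observer must be physically smaller in proportion to its distance. The sketch below is illustrative; the widths and distances are hypothetical values, not taken from the embodiment:

```python
import math

def matched_width(w1: float, d1: float, lp: float) -> float:
    """Width the second image must have to subtend the same visual angle as
    the first image, when its image plane is lp closer to the observer."""
    d2 = d1 - lp          # observer-to-second-image-plane distance
    return w1 * d2 / d1   # same visual angle -> width scales linearly with distance

# hypothetical numbers: first image 1.0 m wide at 3.0 m, predetermined distance Lp = 0.5 m
w2 = matched_width(1.0, 3.0, 0.5)

# both images now subtend the same visual angle from the eye position
angle1 = 2 * math.atan(0.5 * 1.0 / 3.0)
angle2 = 2 * math.atan(0.5 * w2 / 2.5)
```

This also shows why the first interval is wider than the second interval: the second image plane is nearer, so its physical extent is smaller even though the apparent sizes match.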
  • the observer 1 recognizes a stereoscopic image (three-dimensional image) from the first image P11 and the second image P12.
  • the observer 1 observes the first image P11 and the edge image PE corresponding to the edge portion E of the first image P11 at a position where the corresponding portions of these images overlap.
  • the observer 1 perceives an image at a depth position between the display surfaces in accordance with the luminance ratio between the first image P11 and the edge image PE. For example, when the observer 1 observes the quadrangular pattern, there is a minute brightness step, which the observer cannot consciously recognize, on the retinal image of the observer 1.
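One common way to model this luminance-ratio depth cue, as in depth-fused 3D displays, is a luminance-weighted average of the two image-plane depths. This is a sketch of that model, not a formula given in the embodiment, and the plane distances are hypothetical:

```python
def perceived_depth(z_front: float, z_back: float,
                    lum_front: float, lum_back: float) -> float:
    """Luminance-weighted depth between two image planes (a simple
    depth-fused-3D style model; the linear weighting is an assumption)."""
    return (lum_front * z_front + lum_back * z_back) / (lum_front + lum_back)

# hypothetical planes: second image at 2.5 m, first image at 3.0 m
z_mid = perceived_depth(2.5, 3.0, 0.5, 0.5)   # equal luminance -> midpoint
z_near = perceived_depth(2.5, 3.0, 0.9, 0.1)  # brighter front plane -> nearer percept
```

Varying the luminance ratio thus moves the perceived depth continuously between the two display surfaces.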
  • FIG. 6 is a schematic diagram illustrating an example of the optical image IM in the present embodiment.
  • the optical image IM is an image in which the first image P11 and the second image P12 are visually recognized by the observer 1.
  • the optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R of the observer 1.
  • the optical image IML visually recognized by the left eye L of the observer 1 will be described.
  • as shown in FIG. 6, in the left eye L of the observer 1, the first image P11L visually recognized by the left eye L and the left-eye image P12L of the second image P12 are combined to form the optical image IML.
  • FIG. 7 is a graph showing an example of the brightness distribution of the optical image IM in the present embodiment.
  • the X coordinates X1 to X6 shown in FIG. 7 are the X coordinates corresponding to the brightness change points of the optical image IM.
  • here, the luminance value BR will be used as an example of the brightness of the image.
  • the first image P11L visually recognized by the left eye L has a luminance value BR of zero at the X coordinates X1 to X2, and has the luminance value BR2 at the X coordinates X2 to X6.
  • the left-eye image P12L has the luminance value BR1 at the X coordinates X1 to X2 and X4 to X5, and zero at the X coordinates X2 to X4. Accordingly, the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L is the luminance value BR1 at the X coordinates X1 to X2, the luminance value BR2 at the X coordinates X2 to X4 and X5 to X6, and, at the X coordinates X4 to X5, the luminance value BR3 obtained by combining the luminance value BR1 and the luminance value BR2.
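The piecewise luminance profile of FIG. 7 can be reconstructed numerically as the sum of the two component images. The coordinate and luminance values below are hypothetical placeholders, not values from the embodiment:

```python
import numpy as np

# X coordinates of the brightness change points (arbitrary units, hypothetical)
X1, X2, X3, X4, X5, X6 = 0, 10, 20, 30, 40, 50
BR1, BR2 = 0.3, 0.6                      # luminances of the edge image and the first image

x = np.arange(X1, X6)
p11 = np.where(x >= X2, BR2, 0.0)        # first image P11L: BR2 from X2 to X6, else zero
p12 = np.zeros_like(x, dtype=float)      # left-eye image P12L (edge images only)
p12[(x >= X1) & (x < X2)] = BR1          # left-side edge image PE1L
p12[(x >= X4) & (x < X5)] = BR1          # right-side edge image PE2L

iml = p11 + p12                          # optical image IML: the luminances add
BR3 = BR1 + BR2                          # combined luminance at X4 to X5
```

The overlap region X4 to X5 is where the transmitted first image and the displayed edge image superimpose, producing the raised step BR3.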
  • FIG. 8 is a graph showing an example of binocular parallax that occurs in the left eye L and right eye R in the present embodiment.
  • the brightness distribution of the image that the observer 1 recognizes from the optical image IML formed on the retina of the left eye L is as shown by the waveform WL in FIG. 8.
  • the observer 1 recognizes, as an edge portion of the object, the position on the X axis where the change in the brightness of the visually recognized image is greatest (that is, where the slope of the waveform WL is maximal). For example, for the waveform WL on the left eye L side, the observer 1 recognizes the position XEL shown in FIG. 8 (that is, the position at the distance LEL from the origin O of the X axis) as the left-side edge portion of the quadrangle.
  • the brightness (for example, the luminance value BR) of the optical image IMR visually recognized by the right eye R is visually recognized by the left eye L at the X coordinates X1 to X3 and the X coordinates X4 to X6. This is different from the brightness of the optical image IML.
  • similarly, the brightness distribution of the image recognized by the observer 1 through the right eye R is as shown by the waveform WR in FIG. 8.
  • for the waveform WR on the right eye R side, the observer 1 recognizes the position XER shown in FIG. 8 (that is, the position at the distance LER from the origin O of the X axis) as an edge portion.
  • the observer 1 perceives the difference between the position XEL of the edge portion of the quadrangle viewed by the left eye L and the position XER of the edge portion viewed by the right eye R as binocular parallax. Based on this binocular parallax of the edge portion, the observer 1 recognizes the quadrangular image as a stereoscopic image (three-dimensional image).
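The "position of maximum brightness change" rule can be expressed directly: take the gradient of each retinal waveform and pick the position of its largest magnitude. The logistic curves below are hypothetical stand-ins for the waveforms WL and WR:

```python
import numpy as np

def perceived_edge(x: np.ndarray, waveform: np.ndarray) -> float:
    """Position where the brightness change is greatest (maximum slope magnitude)."""
    slope = np.gradient(waveform, x)
    return float(x[np.argmax(np.abs(slope))])

# smoothed retinal luminance ramps (hypothetical): the left-eye waveform rises
# slightly earlier than the right-eye waveform around the square's left side
x = np.linspace(0.0, 10.0, 1001)
wl = 1.0 / (1.0 + np.exp(-(x - 4.0) * 3.0))   # stand-in for waveform WL
wr = 1.0 / (1.0 + np.exp(-(x - 4.6) * 3.0))   # stand-in for waveform WR

x_el = perceived_edge(x, wl)    # edge position XEL seen by the left eye
x_er = perceived_edge(x, wr)    # edge position XER seen by the right eye
parallax = x_er - x_el          # binocular parallax of the edge portion
```

The offset between the two perceived edge positions is exactly the binocular parallax from which the stereoscopic percept arises.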
  • the display device 10 includes the first display unit 11 that displays the first image P11 and the second display unit 12 that displays the second image P12.
  • the second display unit 12 is a transmissive display unit that is attached in front of the eyes of the observer 1 and is capable of transmitting incident light.
  • the second display unit 12 transmits the light of the first image P11 displayed by the first display unit 11 in the transmission direction, and displays the second image P12 corresponding to the first image P11 in the transmission direction.
  • the display device 10 displays the second image P12 set according to the position of the observer 1 with respect to the first display unit 11 on the second display unit 12 that is the head mounted display 50.
  • thereby, the display device 10 can set and display the second image P12 for each position of the observer 1. Therefore, the display device 10 can display the second image P12 corresponding to the position of the observer 1 for each position of the observer 1. That is, the display device 10 can widen the range in which the observer 1 can recognize a stereoscopic image.
  • the display device 10 displays the first image P11 and the second image P12 according to the luminance ratio between corresponding pixels of the first image P11 and the second image P12, thereby displaying a stereoscopic image to the observer 1. That is, the display device 10 can display a stereoscopic image to the observer 1 even if the first image P11 and the second image P12 are both planar images (two-dimensional images). Thereby, the display device 10 can reduce the cardboard-cutout effect and display a stereoscopic image in which the thickness of the displayed object is easily recognized by the observer 1.
  • the display device 10 can display an image that allows the observer 1 to recognize a stereoscopic image even if the first display unit 11 is a two-dimensional display device. Therefore, the display device 10 can display the first image P11 as a planar image (two-dimensional image) on the first display unit 11 for an observer who is not wearing the second display unit 12. That is, the display device 10 can simultaneously display a stereoscopic image for the observer 1 wearing the head-mounted display 50 and a planar image for an observer not wearing the head-mounted display 50. Thereby, the display device 10 can simultaneously screen a stereoscopic image and a planar image in, for example, a movie theater.
  • the second image P12 displayed by the second display unit 12 of the present embodiment includes the edge image PE indicating the edge portion E in the first image P11, and is set so that the edge portion E in the first image P11 displayed on the first display unit 11 and the edge image PE indicating that edge portion are visually recognized by the observer 1 in correspondence with each other.
  • thereby, the display device 10 can display the edge portion of the first image P11 and the edge image PE of the second image P12 in an overlapping manner.
  • in addition, the display device 10 according to the present embodiment can display images without the image displayed on the second display unit 12 (that is, the edge image PE) affecting the portions other than the edge portion displayed on the first display unit 11.
  • in general, variations in the display conditions (for example, the brightness and color of the displayed image) between the first display unit 11 and the second display unit 12 may affect the display accuracy of a stereoscopic image (three-dimensional image).
  • since the display device 10 of the present embodiment displays only the edge image PE on the second display unit 12, even if the display conditions of the first display unit 11 and the second display unit 12 vary, the images other than the edge portion displayed on the first display unit 11 are not affected. Thereby, even if the display conditions of the first display unit 11 and the second display unit 12 do not exactly match, a stereoscopic image (three-dimensional image) can be displayed with high accuracy. That is, the display device 10 of the present embodiment can display a stereoscopic image (three-dimensional image) with high accuracy.
  • since the display device 10 according to the present embodiment only needs to display the edge image PE on the second display unit 12, power consumption can be reduced compared with the case where images other than the edge image PE are also displayed on the second display unit 12.
  • the observer 1 recognizes a step change in the brightness (for example, luminance) of the image as a smooth change in brightness such as the waveform WL and the waveform WR. For this reason, the display apparatus 10 of this embodiment can make the observer 1 recognize a stereoscopic image even when the definition of the edge image PE is low.
  • the definition is, for example, the number of pixels constituting an image.
  • the display device 10 of the present embodiment can reduce the definition of the second display unit 12 as compared with the definition of the first display unit 11. That is, the display device 10 of the present embodiment can configure the second display unit 12 with an inexpensive display device.
  • the display device 10 of the present embodiment displays the first image P11 and the second image P12 so that the edge portion in the first image P11 displayed by the first display unit 11 and the edge image PE are visually recognized in correspondence with each other. Thereby, the images displayed by the display device 10 of the present embodiment are visually recognized by the observer 1 without the edge portion in the first image P11 and the edge image PE appearing separated. Therefore, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy.
  • in addition, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy by displaying the left-eye image P12L and the right-eye image P12R of the second image P12 without reducing their contrast. Further, the display device 10 of the present embodiment can display these images even if the brightness range in which the left-eye image P12L is displayed and the brightness range in which the right-eye image P12R is displayed overlap. For example, the display device 10 according to the present embodiment can display a stereoscopic image with high accuracy even when the brightness at which the left-eye image P12L is displayed matches the brightness at which the right-eye image P12R is displayed. That is, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy without considering the relationship between the pixel values of the left-eye image P12L and the right-eye image P12R.
  • FIG. 9 is a schematic diagram illustrating an example of the left-eye parallax image P12aL.
  • FIG. 10 is a schematic diagram illustrating an example of the right-eye parallax image P12aR.
  • the display device 10a displays the second image P12a on the second display unit 12.
  • the second image P12a includes a left-eye image P12L and a right-eye image P12R that have binocular parallax with each other.
  • the left-eye display unit 12L of the second display unit 12 of the display device 10a displays the left-eye parallax image P12aL as the left-eye image P12L, and the right-eye display unit 12R displays the right-eye parallax image P12aR as the right-eye image P12R.
  • This left-eye image P12L includes an edge image PE1aL and an edge image PE2aL that are shifted in the (+X) direction by a distance LaL from the left-side edge image PE1 and the right-side edge image PE2 indicated by the edge image PE included in the second image P12 described in the first embodiment.
  • the right-eye image P12R includes an edge image PE1aR and an edge image PE2aR that are shifted in the ( ⁇ X) direction by a distance LaR from the left-side edge image PE1 and the right-side edge image PE2.
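As a rough illustration of how such horizontally shifted edge images could be produced, the sketch below shifts a binary edge image in the (+X) and (−X) directions. The shift amounts standing in for LaL and LaR are hypothetical pixel values, not taken from the embodiment:

```python
import numpy as np

def shift_edge_image(edge: np.ndarray, dx: int) -> np.ndarray:
    """Shift a 2-D edge image horizontally by dx pixels (+ = +X direction),
    filling the vacated columns with zeros (no wrap-around)."""
    shifted = np.zeros_like(edge)
    if dx > 0:
        shifted[:, dx:] = edge[:, :-dx]
    elif dx < 0:
        shifted[:, :dx] = edge[:, -dx:]
    else:
        shifted[:] = edge
    return shifted

# A 1-pixel-wide vertical edge at column 4 of an 8x8 image.
edge = np.zeros((8, 8))
edge[:, 4] = 1.0

# Hypothetical shifts (stand-ins for LaL / LaR) expressed in pixels.
left_eye_edge = shift_edge_image(edge, +2)   # like PE1aL/PE2aL: shifted in +X
right_eye_edge = shift_edge_image(edge, -2)  # like PE1aR/PE2aR: shifted in -X
```

Fusing the two shifted copies binocularly is what yields the perceived depth offset of the edge image.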
  • the second display unit 12 displays the left-eye image P12L and the right-eye image P12R that have binocular parallax.
  • the second image P12a includes the left-eye parallax image P12aL and the right-eye parallax image P12aR that have binocular parallax
  • the second display unit 12 displays the left-eye parallax image P12aL so as to be visually recognized by the left eye L of the observer 1, and displays the right-eye parallax image P12aR so as to be visually recognized by the right eye R.
  • the display device 10a can set the depth position of the image of the edge image PEa visually recognized by the observer 1 with the second image P12a based on the binocular parallax of the second image P12a.
  • the display device 10a displays, on the second display unit 12, a second image P12a given binocular parallax in the crossed direction, whereby the stereoscopic image visually recognized by the observer 1 can be set on the (+Z) direction side of the first display unit 11.
  • the display device 10a displays, on the second display unit 12, a second image P12a given binocular parallax in the uncrossed direction, whereby the stereoscopic image visually recognized by the observer 1 can be set on the (-Z) direction side of the first display unit 11.
  • the display device 10a can set the depth position of the stereoscopic image visually recognized by the observer 1 over a wide range, as compared with the case where the binocular parallax is not added to the second image P12a. That is, the display device 10a can enhance the stereoscopic effect of the stereoscopic image visually recognized by the observer 1.
  • FIG. 11 is a configuration diagram illustrating an example of a configuration of a display system 100b including the display device 10b according to the present embodiment.
  • the image information supply device 2 supplies the first image information to the display device 10b.
  • the first image information is information for displaying the first image P11 displayed by the first display unit 11 of the display device 10b.
  • the first image information includes depth position information indicating the depth position when the display device 10b displays the first image P11 in three dimensions.
  • the display device 10b changes the position of the edge image PE indicating the edge portion based on the depth position information. Accordingly, the display device 10b sets the sense of depth of the first image P11 that is felt when the observer 1 sees the first image P11 and the second image P12 including the edge image PE in an overlapping manner. For example, the display device 10b sets the position of the edge image PE outside or inside the position of the edge portion of the first image P11. Thereby, the display device 10b sets binocular parallax of the edge portion, and sets a sense of depth that the viewer 1 feels when viewing the first image P11 and the second image P12 in an overlapping manner. Next, the setting of the feeling of depth by the display device 10b will be described.
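The link between a desired depth offset and the horizontal displacement of the edge image can be illustrated with a simple similar-triangles viewing model. The formula and its parameters (eye separation, viewing distance) are illustrative assumptions, not taken from the patent text:

```python
def screen_disparity(eye_separation: float, viewing_distance: float,
                     depth_offset: float) -> float:
    """Horizontal disparity (same units as eye_separation) needed on the
    display plane so that a fused edge appears depth_offset behind (+) or
    in front of (-) the display. Simple similar-triangles model: an
    assumption for illustration, not the patent's own formula."""
    return eye_separation * depth_offset / (viewing_distance + depth_offset)

# Example: 65 mm eye separation, display plane 600 mm from the observer.
behind = screen_disparity(65.0, 600.0, 100.0)     # edge appears 100 mm behind
in_front = screen_disparity(65.0, 600.0, -100.0)  # edge appears 100 mm in front
```

Placing the edge image PE outside or inside the edge portion of the first image P11, as described above, amounts to choosing the sign of this disparity.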
  • the display device 10b includes a setting unit 13.
  • the setting unit 13 sets the second image P12 corresponding to the first image P11 based on the first image information supplied from the image information supply device 2, and indicates the set second image P12.
  • the second image information is supplied to the second display unit 12.
  • the setting unit 13 acquires first image information including the depth position information of the first image P11 from the image information supply device 2.
  • the setting unit 13 extracts an edge portion from the acquired first image information with a known configuration.
  • the setting unit 13 generates an edge image PE indicating the extracted edge portion, and supplies second image information indicating the generated edge image PE to the second display unit 12.
  • the setting unit 13 extracts an edge portion by applying a differential filter such as a Laplacian filter to the acquired image information.
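A minimal sketch of the Laplacian-filter edge extraction referred to above, using the standard 4-neighbour Laplacian kernel; the threshold value and test image are illustrative:

```python
import numpy as np

def laplacian_edges(img: np.ndarray, threshold: float) -> np.ndarray:
    """Extract a binary edge image by convolving with the 4-neighbour
    Laplacian kernel and thresholding the magnitude of the response."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")  # replicate border pixels
    response = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            response[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return (np.abs(response) > threshold).astype(float)

# A dark square (0.0) on a bright background (1.0): the Laplacian responds
# only where brightness changes suddenly, i.e. at the square's border.
img = np.ones((10, 10))
img[3:7, 3:7] = 0.0
edge_image = laplacian_edges(img, 0.5)
```

The non-zero pixels of `edge_image` correspond to the edge portion E from which the edge image PE would be generated.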
  • FIG. 12 is a flowchart showing an example of the operation of the display device 10b in the present embodiment.
  • the first display unit 11 of the display device 10b acquires image information from the image information supply device 2 (step S110).
  • the setting unit 13 of the display device 10b acquires image information from the image information supply device 2 (step S120).
  • the first image information supplied by the image information supply device 2 includes position information indicating the depth position of the first image P11.
  • the position information indicating the depth position of the first image P11 is information added to the image information so that the first image P11 is recognized as a stereoscopic image by the observer 1, for example, information for setting binocular parallax between the left eye L and the right eye R.
  • Position information that increases the binocular parallax compared to the binocular parallax at the position of the origin O is added to the image information.
  • the setting unit 13 generates the second image P12 including the edge image PE indicating the edge portion in the first image P11 based on the first image information acquired in step S120, and outputs second image information indicating the generated second image P12 to the second display unit 12 (step S122).
  • the first display unit 11 displays the first image P11 based on the first image information acquired in step S110.
  • the second display unit 12 acquires the second image information output from the setting unit 13 in step S122 and displays the second image P12 based on the acquired second image information.
  • the display device 10b includes the setting unit 13 that sets the second image P12 corresponding to the first image P11 based on the input image information of the first image P11.
  • the display device 10b of the present embodiment can display a stereoscopic image (three-dimensional image) without receiving the supply of the edge image PE from the image information supply device 2.
  • the setting unit 13 may set the second image P12 by setting the relative position between the first image P11 and the second image P12 based on the pixel value of the pixel indicating the edge portion E in the first image P11.
  • the optical image IM, which is an image in which the first image P11 and the second image P12 overlap, may be too bright. In that case, the edge portion E becomes conspicuous and is visually recognized by the observer 1, so the optical image IM may be difficult to recognize as a stereoscopic image.
  • the setting unit 13 sets the second image P12 by shifting the position of the edge image PE included in the second image P12 relative to the edge portion E in the first image P11, based on the pixel value (for example, the luminance value) of the pixel indicating the edge portion E in the first image P11. For example, when the sum of the luminance value of the pixel indicating the edge portion E in the first image P11 and the luminance value of the pixel of the edge image PE included in the second image P12 exceeds a predetermined threshold value, the setting unit 13 sets the second image P12 by shifting the position of the edge image PE by a predetermined distance.
  • Thereby, the display device 10b of the present embodiment can reduce the discomfort that the observer 1 feels when an edge portion is conspicuous, and can display a stereoscopic image (three-dimensional image) with high accuracy.
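The threshold rule described above can be sketched as follows; the luminance values, threshold, and shift distance are hypothetical placeholders, not values from the embodiment:

```python
def place_edge_image(edge_x: int, edge_luminance: float, pe_luminance: float,
                     threshold: float, shift: int) -> int:
    """Return the display column for the edge image PE. If the combined
    luminance of the first image's edge pixel and the overlapping PE pixel
    would exceed `threshold`, shift PE by `shift` pixels so the overlap is
    less conspicuous (illustrative values, not from the patent)."""
    if edge_luminance + pe_luminance > threshold:
        return edge_x + shift
    return edge_x

# Combined luminance 0.9 + 0.8 = 1.7 exceeds a threshold of 1.5 -> shifted.
pos_shifted = place_edge_image(100, 0.9, 0.8, 1.5, 3)
# Combined luminance 0.4 + 0.8 = 1.2 stays below the threshold -> unchanged.
pos_same = place_edge_image(100, 0.4, 0.8, 1.5, 3)
```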
  • the setting unit 13 may be provided in the second display unit 12 or the image information supply device 2. In this case, since the display device 10 does not have to include the setting unit 13 independently, the configuration of the display device 10 can be simplified.
  • FIG. 13 is a configuration diagram illustrating an example of a configuration of a display system 100c including the display device 10c according to the present embodiment.
  • the display system 100c includes a first display unit 11c and a setting unit 13c.
  • the first display unit 11 c includes a position detection unit 14.
  • the position detection unit 14 includes a face detection sensor that detects the direction of the face of the observer 1, and detects, with this sensor, the relative position between the display surface (first display surface 110) of the first display unit 11c and the display surface of the second display unit 12 included in the head-mounted display 50 worn by the observer 1. That is, the position detection unit 14 detects the relative position between the first display surface 110 on which the first image P11 is displayed and the display surface of the second display unit 12 on which the second image P12 is displayed.
  • the relative position is a depth position of the display surface of the second display unit 12 with the first display surface 110 as a reference position.
  • the position detection unit 14 detects the depth position of the display surface of the second display unit 12 on which the second image P12 is displayed, with the first display surface 110 on which the first image P11 is displayed as a reference position.
  • the setting unit 13c sets the second image P12 by setting the relative position between the first image P11 and the second image P12 based on the information indicating the relative position detected by the position detection unit 14. Specifically, the setting unit 13 corresponds to the first image P11 and the edge image PE based on the detection result obtained by detecting the relative position of the display surface of the second display unit 12 by the position detection unit 14. The second image P12 is set so as to be displayed.
  • the setting unit 13c sets the second image P12 by setting the distance between the first image P11 and the second image P12 based on information indicating the depth position of the display surface of the second display unit 12 detected by the position detection unit 14.
  • the display device 10c of the present embodiment includes the setting unit 13c and the position detection unit 14 that detects the relative position between the display surface of the first display unit 11c on which the first image P11 is displayed and the display surface of the second display unit 12 on which the second image P12 is displayed.
  • the setting unit 13c sets the second image P12 by setting the relative position between the first image P11 and the second image P12 based on the information indicating the relative position detected by the position detection unit 14.
  • the display device 10c displays a stereoscopic image (three-dimensional image) not only for the observer 1 at a predetermined depth position but also for the observer 1 at a depth position other than the predetermined depth position. Can do. That is, the display device 10c can display a stereoscopic image (three-dimensional image) over a wide range.
  • when the detected depth position of the observer 1 moves, the display device 10 of the present embodiment can display the first image P11 and the edge image PE in correspondence with each other for the observer 1 at the moved depth position, based on the detection result. That is, the display device 10 of the present embodiment can display a stereoscopic image following the depth position of the moving observer 1.
  • FIG. 14 is a schematic diagram illustrating an example of setting of the setting unit 13c in a modification of the present embodiment.
  • the position detection unit 14 detects a distance Lp between the first display surface 110 and the display surface of the second display unit 12. Further, the position detection unit 14 detects an angle θ1 formed by the Z axis and a line segment CL connecting the reference point SP1 on the first display surface 110 and the reference point SP2 on the display surface of the second display unit 12.
  • the reference point SP1 is a center point on the first display surface 110.
  • the reference point SP2 is the center point on the display surface of the second display unit 12. Further, the position detection unit 14 detects an angle θ2 formed by the line segment CL and the normal line NV of the display surface of the second display unit 12. That is, the position detection unit 14 detects the distance Lp, the angle θ1, and the angle θ2 as the relative position between the first display surface 110 and the display surface of the second display unit 12.
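The quantities Lp, θ1, and θ2 can be illustrated with straightforward vector geometry. The reference-point coordinates and the normal NV below are hypothetical, and NV is taken parallel to the Z axis for simplicity, which makes θ2 equal to θ1 in this particular example:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical positions (metres): SP1 on the first display surface,
# SP2 on the second display surface worn by the observer.
SP1 = (0.0, 0.0, 0.0)
SP2 = (0.3, 0.0, -0.4)
NV = (0.0, 0.0, 1.0)   # normal of the second display surface (assumed along +Z)

CL = tuple(b - a for a, b in zip(SP2, SP1))  # segment from SP2 toward SP1
Lp = abs(SP2[2] - SP1[2])                    # depth separation of the surfaces
theta1 = angle_between(CL, (0.0, 0.0, 1.0))  # angle between CL and the Z axis
theta2 = angle_between(CL, NV)               # angle between CL and the normal NV
```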
  • the setting unit 13c sets, for example, the position of the edge image PE included in the second image P12 and the image conversion method for the edge image PE (for example, projective transformation or affine transformation) based on the detection result (the distance Lp, the angle θ1, and the angle θ2) acquired from the position detection unit 14. That is, the setting unit 13c sets the relative position based on the detection result so that the first image P11 and the edge image PE are visually recognized in correspondence with each other.
  • the display state includes image conversion (for example, projective transformation or affine transformation) of the edge image PE, and the setting unit 13c sets the image conversion method for the edge image PE based on the detection result so that the first image P11 and the edge image PE are visually recognized in correspondence with each other.
  • the position detection unit 14 in the present modification detects not only the relative position in the depth direction (Z-axis direction) but also the relative position in the directions parallel to the display surface of the first display unit 11c (the X-axis direction and the Y-axis direction).
  • the setting unit 13c sets the relative position between the first image P11 and the second image P12 based on the information indicating the relative position detected by the position detection unit 14, and sets the second image P12. Set.
  • the display device 10c can display a stereoscopic image (three-dimensional image) not only for the observer 1 at a position in front of the first display unit 11c but also for the observer 1 at a position other than the front of the first display unit 11c. That is, the display device 10c can display a stereoscopic image (three-dimensional image) over a wide range.
  • when the detected position of the observer 1 moves, the display device 10 of the present modification can display the first image P11 and the edge image PE in correspondence with each other for the observer 1 at the moved position, based on the detection result. That is, the display device 10 of the present modification can display a stereoscopic image following the position of the moving observer 1.
  • the setting unit 13c may set the edge thickness of the edge image PE to a thickness corresponding to the relative position, based on information indicating the relative position detected by the position detection unit 14.
  • when the observer 1 is at a position other than the front of the first display unit 11c, the edge image PE itself may be visually recognized by the observer 1, and in that case the first image P11 may not be visually recognized as a stereoscopic image. Therefore, the display device 10c sets the edge thickness of the edge image PE to a thickness corresponding to the relative position. Thereby, the display device 10c can prevent the observer 1 from visually recognizing the edge image PE even when the observer 1 is at a position other than the front of the first display unit 11c. That is, the display device 10c can display a stereoscopic image (three-dimensional image) over a wide range.
  • In the above, the example in which the second display unit 12 displays the second image P12 including the edge image PE has been described, but the present invention is not limited to this.
  • the second display unit 12 may display a second image P12 in which the pixel values of the pixels of the second image P12 visually recognized by the observer 1 in correspondence with the pixels of the first image P11 are set according to the depth position of the stereoscopic image visually recognized by the observer 1.
  • the pixel value includes the luminance value of the pixel.
  • the image visually recognized by the observer 1 in correspondence with the pixels of the first image P11 is an image obtained by projecting the first image P11 onto the display surface of the second display unit 12.
  • the observer 1 visually recognizes the first image P11 together with the second image P12, in which the pixel value of each pixel is set to a luminance different from the pixel value of the corresponding pixel of the projected first image P11.
  • Thereby, the second display unit 12 can display a second image P12 that allows the observer 1 to visually recognize a precise stereoscopic image. That is, the display device 10 can display a precise stereoscopic image.
  • the first display unit 11 of the display device 10 in the present embodiment and its modification may be a stereoscopic display device capable of stereoscopic display (three-dimensional display).
  • the first display unit 11 displays a stereoscopic image of the first image P11 at a depth position corresponding to the input image information. That is, the display device 10 can display the edge portion in the stereoscopically displayed first image P11 and the edge image PE corresponding to the edge portion so that they are visually recognized in correspondence by the observer 1.
  • the display apparatus 10 can set the depth position of the stereoscopic image recognized by the observer 1 in a wide range. That is, the display device 10 according to the present embodiment and the modification thereof can further enhance the stereoscopic effect recognized by the observer 1 from the stereoscopic image of the first image P11.
  • the second image P12 is, for example, an image as illustrated in FIG. 4, but is not limited thereto.
  • the second image P12 may be a second image P12 including an edge image PE indicating upper and lower edge portions in addition to the left and right edge portions of the first image P11.
  • the second image P12 may be an image showing an edge portion in which each side of the quadrangle indicated by the first image P11 is an edge portion.
  • the second image P12 may be an image including an edge image PE in which the edge portion is shown in a broken line shape.
  • the second image P12 may be an image including an edge image PE that shows an edge portion in a subjective contour shape.
  • the subjective contour is a contour that the observer 1 perceives as existing even though no actual contour line is present.
  • Thereby, the second display unit 12 of the display device 10 does not need to display an image showing all of the edge portions, and can reduce power consumption compared to the case where an image showing all the edge portions is displayed.
  • the second image P12 may be displayed with a predetermined brightness (for example, luminance) inside the edge portion. Thereby, the display device 10 can increase the brightness of the first image P11 without changing the first image P11.
  • FIG. 15 is a block diagram showing an example of the configuration of a display system 100d according to the fifth embodiment of the present invention.
  • the display system 100d includes a portable display device 50d as the display device 10d.
  • the portable display device 50d is a handheld display device that is portable by the observer 1, and includes a second display unit 12d, a setting unit 13d, and an imaging unit 15.
  • the imaging unit 15 includes a stereo video camera having a left light receiving unit 15L and a right light receiving unit 15R.
  • the imaging unit 15 captures, as the first image P11, an image of the light that passes through the second display unit 12d, and generates image information including depth position information of the captured image.
  • the left light receiving unit 15L receives light of the first image P11L visually recognized by the left eye L of the first image P11.
  • the right light receiving unit 15R receives the light of the first image P11R visually recognized by the right eye R of the first image P11.
  • the imaging unit 15 generates, with a known configuration, depth position information of the captured image based on the light received by the left light receiving unit 15L and the right light receiving unit 15R, and generates image information including the generated depth position information.
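A common "known configuration" for recovering depth from a rectified stereo pair such as the one formed by the left and right light receiving units is pinhole triangulation, Z = f·B/d. The sketch below is an illustrative stand-in with hypothetical focal length, baseline, and disparity values:

```python
def depth_from_disparity(focal_px: float, baseline: float,
                         disparity_px: float) -> float:
    """Depth of a scene point from a rectified stereo pair, using the
    pinhole triangulation formula Z = f * B / d. An illustrative stand-in
    for the 'known configuration' the embodiment refers to."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline / disparity_px

# Example: 800 px focal length, 60 mm baseline between the left and right
# light receiving units, 12 px measured disparity.
z_mm = depth_from_disparity(800.0, 60.0, 12.0)
```

Points with larger disparity between the two received images are closer to the imaging unit; the resulting depths form the depth position information attached to the image information.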
  • the setting unit 13d sets the second image P12 corresponding to the first image P11 based on the image information generated by the imaging unit 15, and supplies second image information indicating the set second image P12 to the second display unit 12d. Specifically, the setting unit 13d acquires the depth position information from the information of the image captured by the imaging unit 15. Next, the setting unit 13d extracts an edge portion from the acquired first image information with a known configuration. Next, the setting unit 13d generates an edge image PE indicating the extracted edge portion, and supplies second image information indicating the generated edge image PE to the second display unit 12d. Here, for example, the setting unit 13d extracts the edge portion by applying a differential filter such as a Laplacian filter to the acquired image information. Then, the setting unit 13d sets the display position of the extracted edge portion based on the acquired depth position information, and thereby sets the second image P12.
  • the second display unit 12d includes a transmissive display capable of transmitting incident light, transmits the first image P11, and displays the second image P12 based on the second image information indicating the second image P12 set by the setting unit 13d. Here, the second display unit 12d displays a real image corresponding to the first image P11 as the second image P12.
  • the portable display device 50d as the display device 10d of the present embodiment includes the second display unit 12d, the setting unit 13d, and the imaging unit 15. Accordingly, the display device 10d can display the second image P12 at a position away from the eyes of the observer 1. Therefore, the display device 10d can display the second image P12 in correspondence with the first image P11 without a configuration for displaying a virtual image. That is, the configuration of the second display unit 12d can be simplified as compared with the case where the second display unit 12d displays a virtual image.
  • the imaging unit 15 may include a video camera having one of the left light receiving unit 15L and the right light receiving unit 15R.
  • the setting unit 13d acquires information specifying a depth position input by the user, and selects depth position information from among a plurality of preset pieces of depth position information based on the acquired information specifying the depth position.
  • the setting unit 13d acquires the first image information from the imaging unit 15, and extracts an edge portion from the acquired first image information with a known configuration.
  • the setting unit 13d sets the display position of the extracted edge portion based on the selected depth position information, and sets the second image P12.
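One plausible reading of this selection step, sketched with hypothetical preset values (the patent does not specify how the selection is performed), is a nearest-match lookup:

```python
def select_depth_info(user_depth: float, presets: list) -> float:
    """Pick, from preset depth positions, the one closest to the depth
    the user specified. A plausible illustration of the selection step,
    not a literal implementation from the patent."""
    return min(presets, key=lambda p: abs(p - user_depth))

presets = [200.0, 500.0, 1000.0, 2000.0]    # hypothetical preset depths (mm)
chosen = select_depth_info(650.0, presets)  # nearest preset to 650 mm
```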
  • the display device 10d can display the second image P12 without a configuration for displaying a virtual image. That is, the configuration of the second display unit 12 can be simplified as compared with the case where the second display unit 12 displays a virtual image.
  • FIG. 16 is a configuration diagram illustrating an example of a configuration of a display system 100e including the display device 10e according to the present embodiment.
  • the display system 100e includes a first display unit 11e and a second display unit 12e.
  • in the embodiments described above, the second display unit 12 is a head mounted display, whereas in the present embodiment the first display unit 11e is a head mounted display.
  • That is, the first display unit 11e, which is a head-mounted display, displays the first image P11, and the second display unit 12e displays the edge image PE.
  • the image information supply device 2 supplies the first image information to the first display unit 11e and also supplies the second image information to the second display unit 12e.
  • the first image information is information for displaying the first image P11 displayed on the first display unit 11e.
  • the second image information is information for displaying the second image P12 displayed on the second display unit 12e, and is image information of the edge image PE generated based on the first image information.
  • This edge image PE is an image showing the edge portion E in the first image P11.
  • the first image information includes image information indicating the left-eye image P11L and image information indicating the right-eye image P11R; in this respect, it differs from the first image P11 shown in FIG. 3.
  • the edge image PE will be described later with reference to FIG.
  • the first image P11L is the first image P11L visually recognized by the left eye L in the first image P11 shown in FIG.
  • the first image P11L is the left-eye image P11L shown in FIG.
  • the second image P12 is the second image P12 shown in FIG. 4 and includes the left-eye image P12L and the right-eye image P12R.
  • the second image P12L is the second image visually recognized by the left eye L via the left-eye second light flux R12L in the second image P12, as shown in FIG.
  • the second image P12R is the second image visually recognized by the right eye R via the right-eye second light flux R12R in the second image P12.
  • the second display unit 12e includes a display surface 120 that displays an image in the (+Z) direction, and displays the second image P12 on the display surface 120 based on the second image information acquired from the image information supply device 2.
  • the second light beam R12 emitted from the second image P12 displayed on the display surface 120 is visually recognized as an optical image by the observer 1 sitting on the chair 3.
  • This observer 1 sits on the chair 3 provided at a position away from the display surface 120 in the (+ Z) direction by a predetermined distance, and observes the display surface 120 in the ⁇ Z direction.
  • the chair 3 fixes the position where the observer 1 observes the second display unit 12e, that is, the relative position between the first display unit 11e and the second display unit 12e.
  • the first display unit 11e is a display unit provided in the head mounted display 50e worn on the head of the observer 1.
  • the first display unit 11e displays a virtual image of the supplied first image P11 by an eyepiece optical system (not shown) and transmits incident light in a direction in which the first image P11 is displayed.
  • the first display unit 11e includes a left eye display unit 11L and a right eye display unit 11R. That is, the first display unit 11e is different from the head mounted display 50 described with reference to FIG. 2 in that the first image P11 is displayed instead of the second image P12.
  • the left eye display unit 11L displays the left-eye image P11L indicated by the first image information supplied from the image information supply device 2 so as to be visually recognized by the left eye L of the observer 1.
  • the right eye display unit 11R of the first display unit 11e displays the right-eye image P11R indicated by the first image information supplied from the image information supply device 2 so as to be visually recognized by the right eye R of the observer 1.
  • the first display unit 11e transmits, in the transmission direction, the second light beam R12 (light) of the second image displayed by the second display unit 12e when the observer 1 sits on the chair 3 and observes the second display unit 12e.
  • the transmission direction is the (+ Z) direction. That is, the left eye display unit 11L of the first display unit 11e transmits the left eye second light beam R12L of the incident second light beam R12. Similarly, the right eye display unit 11R of the first display unit 11e transmits the right eye second light beam R12R out of the incident second light beam R12.
  • the left-eye display unit 11L transmits the incident left-eye second light beam R12L and displays the left-eye image P11L to emit the left-eye first light beam R11L.
  • the left eye first light beam R11L and the left eye second light beam R12L are visually recognized by the left eye L of the observer 1 who is observing the second display unit 12e in the ( ⁇ Z) direction as corresponding optical images.
  • the right eye display unit 11R transmits the incident right eye second light beam R12R, displays the right eye image P11R, and emits the right eye first light beam R11R. That is, the right eye first light beam R11R and the right eye second light beam R12R are visually recognized by the right eye R of the observer 1 who is observing the second display unit 12e in the ( ⁇ Z) direction as a corresponding optical image.
  • the first image P11 is, for example, an image showing the square pattern shown in FIG.
  • the four sides constituting the quadrilateral can each be an edge portion E, but in the following description, for convenience, the left side edge portion E1 indicating the left side of the quadrilateral and the right side edge portion E2 indicating the right side will be described as the edge portions E.
  • the first image P11 includes the left eye image P11L displayed by the left eye display unit 11L of the first display unit 11e and the right eye image P11R displayed by the right eye display unit 11R of the first display unit 11e. That is, the first image P11 of the present embodiment differs from the first image P11 described with reference to FIG. 3 in that it includes the left-eye image P11L and the right-eye image P11R. Since the left-eye image P11L and the right-eye image P11R are the same image, in the following description the two images are collectively referred to as the first image P11 unless otherwise distinguished.
  • the second image P12 is, for example, an image including a left-side edge image PE1 indicating the left-side edge portion E1 of the square pattern and a right-side edge image PE2 indicating the right-side edge portion E2. That is, the second image P12 of the present embodiment differs from the second image P12 described with reference to FIG. 4 in that it does not include the left-eye image P12L and the right-eye image P12R.
  • the edge portion E (which may be simply expressed as an edge or an edge region) is, for example, a portion where the brightness (for example, luminance) of adjacent or neighboring pixels in the image changes suddenly.
  • the edge portion E indicates not only a theoretical line segment having no width on the left side or the right side of the quadrangle but also, for example, a region around the edge having a finite width corresponding to the resolution of the second display unit 12e.
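As a minimal illustration of this definition (the threshold and pixel values below are hypothetical, not from the patent), an edge region can be located wherever the luminance of neighboring pixels changes abruptly:

```python
def edge_pixels(row, threshold):
    """Return indices where luminance jumps between neighboring pixels,
    i.e. the edge portions E along a horizontal slice of the image."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) >= threshold]

# a horizontal slice through a bright square on a dark background:
row = [0, 0, 200, 200, 200, 0, 0]
print(edge_pixels(row, threshold=100))  # left and right edges: [1, 4]
```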
  • FIG. 17 is a schematic diagram illustrating an example of an image displayed by the display device 10e according to the present embodiment.
  • the first display unit 11e displays the first image P11 as a virtual image in the (+ Z) direction so that the viewer 1 can visually recognize it.
  • the second display unit 12e displays the second image P12 in the (+ Z) direction so that the viewer 1 can visually recognize it.
  • the first image P11 is displayed at a position that is a predetermined distance Lp away from the position where the second image P12 is displayed in the (+ Z) direction.
  • the first display unit 11e is a display unit included in the transmissive head mounted display 50e that transmits light. Therefore, the first image P11 displayed on the first display unit 11e and the second image P12 displayed on the second display unit 12e are visually recognized by the observer 1.
  • the predetermined distance Lp is a distance between the depth position where the first image P11 is displayed and the depth position where the second image P12 is displayed.
  • the depth position is a position in the Z-axis direction.
  • the predetermined distance Lp is determined in advance based on the depth position where the first image P11 is displayed and the depth position of the observer 1.
  • the second display unit 12e displays the second image P12 so that the left side edge portion E1 in the left eye image P11L displayed by the left eye display unit 11L of the first display unit 11e and the left-side edge image PE1 corresponding to the left side edge portion E1 are visually recognized in correspondence with each other.
  • similarly, the second display unit 12e displays the second image P12 so that the right side edge portion E2 in the right eye image P11R displayed by the right eye display unit 11R of the first display unit 11e and the right-side edge image PE2 corresponding to the right side edge portion E2 are visually recognized in correspondence with each other.
  • the left-eye display unit 11L of the first display unit 11e displays the left-eye image P11L as a virtual image so that the left-side edge image PE1 of the second image P12 is visually recognized by the left eye L of the observer 1 on the (−X) side (that is, outside the quadrangle) of the left-side edge portion E1 of the quadrangle indicated by the left-eye image P11L, in correspondence with the left-side edge portion E1.
  • similarly, the left-eye display unit 11L displays the left-eye image P11L as a virtual image so that the right-side edge image PE2 of the second image P12 is visually recognized by the left eye L of the observer 1 on the (−X) side (that is, inside the quadrangle) of the right-side edge portion E2 of the quadrangle indicated by the left-eye image P11L, in correspondence with the right-side edge portion E2.
  • the right-eye display unit 11R displays the right-eye image P11R as a virtual image so that the right-side edge image PE2 of the second image P12 is visually recognized by the right eye R of the observer 1 on the (+X) side (that is, outside the quadrangle) of the right-side edge portion E2 of the quadrangle indicated by the right-eye image P11R, in correspondence with the right-side edge portion E2.
  • similarly, the right-eye display unit 11R displays the right-eye image P11R as a virtual image so that the left-side edge image PE1 of the second image P12 is visually recognized by the right eye R of the observer 1 on the (+X) side (that is, inside the quadrangle) of the left-side edge portion E1 of the quadrangle indicated by the right-eye image P11R, in correspondence with the left-side edge portion E1.
  • the first display unit 11e displays the supplied first image P11 as a virtual image through the eyepiece optical system, and transmits the incident light of the second image P12 in the direction of the first image P11.
  • the observer 1 recognizes a stereoscopic image (three-dimensional image) from the first image P11 and the second image P12.
  • the observer 1 observes the first image P11 and the edge image PE corresponding to the edge portion E of the first image P11 at a position where the corresponding portions of these images overlap.
  • the observer 1 perceives an image at a depth position between the display surfaces in accordance with the luminance ratio between the first image P11 and the edge image PE. For example, when the observer 1 observes a quadrangular pattern and the retinal image of the observer 1 contains a minute brightness step that cannot be consciously recognized, a virtual edge is perceived between the steps of brightness (for example, luminance) and is recognized as one object.
  • that is, a slight shift arises between the virtual edges recognized by the left eye L and the right eye R, and this shift is perceived as binocular parallax, so that the observer 1 recognizes the image as if its depth position had changed. Next, this mechanism will be described in detail.
  • the optical image IM is an image in which the first image P11 and the second image P12 are visually recognized by the observer 1.
  • the optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R of the observer 1.
  • the optical image IML visually recognized by the left eye L of the observer will be described.
  • the left-eye image P11L and the second image P12L, which is the portion of the second image P12 visually recognized by the left eye L, are combined to form the optical image IML.
  • on the (−X) side (that is, outside the quadrangle) of the left-side edge portion E1 of the quadrangle indicated by the first image P11, an optical image IML is formed in which the image showing the left-side edge portion E1 and the left-side edge image PE1L visually recognized by the left eye L are combined.
  • similarly, on the (−X) side (that is, inside the quadrangle) of the right-side edge portion E2 of the quadrangle indicated by the first image P11, an optical image IML is formed in which the image showing the right-side edge portion E2 and the right-side edge image PE2L visually recognized by the left eye L are combined.
  • the optical image IML of this embodiment is different from the optical image IML described with reference to FIG. 6 in that the first image P11L is at a depth position closer to the observer than the second image P12L.
  • here, the luminance value BR will be described as an example of the brightness of the image.
  • the left-eye image P11L will be described assuming that the luminance value BR is zero at the X coordinates X1 to X2.
  • the left-eye image P11L has the brightness value BR2 at the X coordinates X2 to X6.
  • the second image P12L visually recognized by the left eye L has the brightness value BR1 at the X coordinates X1 to X2 and the X coordinates X4 to X5, and zero at the X coordinates X2 to X4.
  • the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L becomes the luminance value BR1 at the X coordinates X1 to X2.
  • the brightness of the optical image IML is the luminance value BR2 at the X coordinates X2 to X4 and the X coordinates X5 to X6, and, at the X coordinates X4 to X5, the luminance value BR3, which is obtained by combining the luminance value BR1 and the luminance value BR2.
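The piecewise profile above (BR1 at X1 to X2, BR2 at X2 to X4 and X5 to X6, and the combined BR3 at X4 to X5) is simply the pointwise sum of the two transmitted luminance profiles. A minimal sketch with hypothetical values, one sample per X interval:

```python
def combine(p11, p12):
    """Optical image luminance = sum of the transmitted luminances of the
    two displays at each X position (additive combination)."""
    return [a + b for a, b in zip(p11, p12)]

BR1, BR2 = 40, 100
p11l = [0,   BR2, BR2, BR2, BR2]  # left-eye image P11L: BR2 from X2 to X6
p12l = [BR1, 0,   0,   BR1, 0]    # second image P12L: BR1 at X1-X2 and X4-X5
print(combine(p11l, p12l))  # BR3 = BR1 + BR2 appears only at X4-X5
```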
  • the mechanism by which the edge portion E is visually recognized by the left eye L of the observer 1 is the same as the mechanism described with reference to FIG.
  • the difference between the optical image IMR visually recognized by the right eye R of the observer 1 and the optical image IML, and the mechanism for recognizing a stereoscopic image (three-dimensional image) based on that difference, are the same as the difference and mechanism described with reference to FIG. 8, so the description thereof is omitted.
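The virtual-edge mechanism can be sketched with a deliberately simplified model (the linear weighting below is an assumption for illustration, not a formula from the patent): the perceived edge lies between two luminance steps, weighted by their heights, so slightly shifted steps in the two eyes yield a small binocular parallax.

```python
def virtual_edge_position(x_step1, x_step2, b1, b2):
    """Model the perceived (virtual) edge as lying between two luminance
    steps, weighted by their relative heights (simplifying assumption)."""
    return x_step1 + (x_step2 - x_step1) * b1 / (b1 + b2)

# the two eyes see the same steps at slightly different X positions,
# so their virtual edges differ -> a small binocular parallax:
left = virtual_edge_position(2.0, 4.0, b1=30, b2=70)
right = virtual_edge_position(2.2, 4.2, b1=30, b2=70)
print(right - left)  # the parallax induced by the shifted virtual edges
```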
  • the display system 100 of this embodiment includes the first display unit 11e and the second display unit 12e.
  • the first display unit 11e is mounted on the head of the observer 1 and displays the supplied first image P11 using an eyepiece optical system, and transmits incident light in a direction in which the first image P11 is displayed.
  • the second display unit 12e displays a second image P12 indicating the edge portion E of the first image P11 displayed by the first display unit 11e.
  • the display system 100 displays the first image P11 set according to the position of the observer 1 with respect to the second display unit 12e on the first display unit 11e provided in the head mounted display 50e.
  • the display system 100 can set and display the first image P11 for each position of the observer 1. Therefore, the display system 100 can display the first image P11 corresponding to the position of the observer 1 for each position of the observer 1. That is, the display system 100 can set a wide range that the observer 1 can recognize as a stereoscopic image.
  • in addition, the display system 100 displays the edge image PE on the second display unit 12e. For this reason, only the edge image PE is displayed to an observer who observes only the second display unit 12e without using the first display unit 11e. In this way, the display system 100 displays only the edge image PE, that is, a low-quality image whose content is difficult to understand, to an observer who observes only the second display unit 12e.
  • on the other hand, the display system 100 displays the first image P11 in addition to the second image P12 for the observer 1 who observes the second display unit 12e through the first display unit 11e. That is, the display system 100 can display a high-quality image for the observer 1 who observes the second display unit 12e through the first display unit 11e.
  • the display system 100 displays the first image P11 and the second image P12 so that the observer 1 perceives an image at a depth position determined by the luminance ratio between corresponding pixels of the first image P11 and the second image P12. That is, the display system 100 can display a stereoscopic image to the observer 1 even if both the first image P11 and the second image P12 are planar images (two-dimensional images). As a result, the display system 100 can reduce the cutout (cardboard) effect and display a stereoscopic image in which the observer 1 can easily recognize the thickness of the displayed object.
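The depth-from-luminance-ratio idea can likewise be sketched with a simple linear model (an illustrative assumption; the patent states only that the luminance ratio determines the perceived depth between the two display surfaces):

```python
def perceived_depth(z_near, z_far, b_near, b_far):
    """Linear model (assumption): the image is perceived between the two
    display surfaces, pulled toward the brighter one."""
    w = b_near / (b_near + b_far)
    return z_far + (z_near - z_far) * w

# first image P11 at 0.5 m, second image P12 at 1.0 m (hypothetical):
print(perceived_depth(0.5, 1.0, b_near=75, b_far=25))  # closer to P11
```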
  • the second image P12 displayed by the second display unit 12e of the present embodiment includes the edge image PE showing the edge portion E in the first image P11, and is set so that the edge portion E in the first image P11 displayed on the first display unit 11e and the edge image PE indicating the edge portion E are visually recognized by the observer 1 in correspondence with each other.
  • the display system 100 can display the edge image PE (that is, the edge portion E) of the first image P11 and the second image P12 in an overlapping manner.
  • thereby, the display system 100 according to the present embodiment can display an image without the image displayed on the second display unit 12e (that is, the edge image PE) affecting the portions other than the edge portion displayed on the first display unit 11e.
  • here, variation in display conditions between the first display unit 11e and the second display unit 12e may affect the display accuracy of a stereoscopic image (three-dimensional image).
  • the display conditions are, for example, the brightness and color of the displayed image.
  • since the display system 100 of the present embodiment displays the edge image PE on the second display unit 12e, the image other than the edge portion displayed on the first display unit 11e is not affected even if the display conditions vary between the first display unit 11e and the second display unit 12e. Therefore, the display system 100 of the present embodiment can display a stereoscopic image (three-dimensional image) with high accuracy.
  • furthermore, since the display system 100 of the present embodiment only needs to display the edge image PE on the second display unit 12e, power consumption can be reduced compared to the case where an image other than the edge image PE is also displayed on the second display unit 12e.
  • the observer 1 recognizes a step change in the brightness (for example, luminance) of the image as a smooth change in brightness such as the waveform WL and the waveform WR. For this reason, the display system 100 of this embodiment can make the observer 1 recognize a stereoscopic image even when the definition of the edge image PE is low.
  • the definition is, for example, the number of pixels constituting an image.
  • therefore, in the display system 100 of this embodiment, the definition of the second display unit 12e can be made lower than the definition of the first display unit 11e. That is, the display system 100 of the present embodiment can use an inexpensive display device as the second display unit 12e.
  • the display system 100 displays the first image P11 and the second image P12 so that the edge portion E in the first image P11 displayed by the first display unit 11e and the edge image PE are visually recognized in correspondence with each other.
  • thereby, each image displayed by the display system 100 of the present embodiment is visually recognized by the observer 1 without the edge portion E in the first image P11 and the edge image PE appearing separated. Therefore, the display device 10e of the present embodiment can display a stereoscopic image with high accuracy.
  • FIG. 18 is a block diagram showing an example of the configuration of a display system 100f according to the seventh embodiment of the present invention.
  • the display system 100f includes an image information supply device 2f and an image setting device 4.
  • the image setting device 4 includes a detection unit 41, an extraction unit 42, and a setting unit 43, and sets the second image P12 to be displayed on the second display unit 12.
  • the image information supply device 2f includes an image distribution server, and supplies first image information, which is information for displaying the first image P11, to the first display unit 11 and to the extraction unit 42 of the image setting device 4.
  • the detection unit 41 includes a position sensor that detects the position of the first display unit 11, and detects the position of the first display unit 11.
  • the detection unit 41 is fixed in the vicinity of the second display unit 12 so that the relative position between the detection unit 41 and the second display unit 12 does not change.
  • the detection unit 41 detects the position of the first display unit 11 with respect to the second display unit 12 by detecting the position of the first display unit 11. That is, the detection unit 41 detects the relative position between the first display unit 11 and the second display unit 12.
  • the extraction unit 42 acquires the first image information supplied from the image information supply device 2f, and extracts the edge portion E of the first image P11 from the acquired first image information with a known configuration. Specifically, the extraction unit 42 generates an edge image PE by applying an edge extraction filter such as a Laplacian filter to the acquired first image information. Next, the extraction unit 42 outputs the generated edge image PE to the setting unit 43.
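The edge extraction step can be sketched with a plain 3×3 Laplacian convolution (a minimal illustration of the kind of filter named above; a real implementation would use an image-processing library):

```python
def laplacian_edges(img):
    """Generate an edge image PE by convolving with a 3x3 Laplacian
    kernel: zero in flat regions, large magnitude at edges."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                            - img[y][x - 1] - img[y][x + 1])
    return out

# a small bright square on a dark background (hypothetical data):
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(laplacian_edges(img))  # nonzero only around the bright square
```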
  • the setting unit 43 deforms the edge portion E extracted by the extraction unit 42 based on the relative position detected by the detection unit 41, and sets the image showing the deformed edge portion E as the second image P12 corresponding to the edge portion E of the first image P11.
  • the setting unit 43 acquires the position of the first display unit 11 detected by the detection unit 41.
  • the setting unit 43 reads the relative position between the detection unit 41 and the second display unit 12 from a storage unit (not illustrated) that stores the relative position between the detection unit 41 and the second display unit 12 in advance.
  • the setting unit 43 sets the first display unit 11 and the second display unit based on the detected position of the first display unit 11 and the read relative position between the detection unit 41 and the second display unit 12. The relative position with respect to 12 is calculated.
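The relative-position calculation can be sketched as follows (all coordinates are hypothetical; the detector's fixed offset from the second display unit stands in for the stored relative position):

```python
def relative_position(first_display_pos, detector_pos, detector_to_second):
    """Relative position of the first display unit with respect to the
    second, from the detected position and the stored detector offset."""
    # position of the second display = detector position + stored offset
    second = tuple(d + o for d, o in zip(detector_pos, detector_to_second))
    return tuple(f - s for f, s in zip(first_display_pos, second))

# hypothetical (X, Y, Z) coordinates in metres:
print(relative_position((0.1, 1.5, 2.0), (0.0, 1.0, 0.0), (0.0, 0.2, 0.0)))
```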
  • the setting unit 43 acquires the edge image PE generated by the extraction unit 42.
  • next, the setting unit 43 reads the deformation vector of the image associated with the relative position that matches the calculated relative position from a storage unit (not illustrated) that stores relative positions and image deformation vectors in association with each other.
  • the image deformation vector is information indicating the direction and amount of deformation of the first image P11 for each pixel of the first image P11.
  • the direction and amount of deformation of the first image P11 are set based on the first image P11 observed by the observer 1 when the first image P11 displayed by the first display unit 11 is projected onto the second display unit 12 while the observer 1 observes the second display unit 12 via the first display unit 11. In this case, the first image P11 is projected radially from the center of the second display unit 12.
  • the deformation vector of the image is set to an amount corresponding to the distance between the observer 1 and the second display unit 12 and to a direction spreading radially from the center of the second display unit 12.
  • the deformation vector of the image is set to an amount corresponding to the distance between the observer 1 and the second display unit 12 and a direction spreading in a trapezoidal shape on the second display unit 12.
  • that is, the deformation vector of the image is information in which the direction and amount of deformation are set so that the first image P11 displayed on the first display unit 11 and the second image P12 correspond to each other when the observer 1 observes the second display unit 12 via the first display unit 11.
  • the setting unit 43 deforms the edge image PE generated by the extraction unit 42 using the read deformation vector. Further, the setting unit 43 sets the deformed edge image PE as the second image P12 and outputs it to the second display unit 12. That is, the setting unit 43 sets the first image P11 or the second image P12 in the case where the light of the second image P12 displayed by the second display unit 12 is transmitted through the first display unit 11. Here, the setting unit 43 sets the first image P11 or the second image P12 so that the edge portion E of the first image P11 displayed by the first display unit 11 corresponds to the edge portion E indicated by the second image P12.
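Applying per-pixel deformation vectors to an edge image can be sketched as a simple per-point shift (the radial vectors below are hypothetical; a real system would resample the raster image):

```python
def deform(edge_points, vectors):
    """Shift each edge point by its per-pixel deformation vector
    (the direction and amount of deformation described above)."""
    return [(x + dx, y + dy)
            for (x, y), (dx, dy) in zip(edge_points, vectors)]

# hypothetical radial deformation away from the display centre at (0, 0):
edges = [(-1.0, 0.0), (1.0, 0.0)]
vectors = [(-0.1, 0.0), (0.1, 0.0)]  # spread outward from the centre
print(deform(edges, vectors))
```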
  • FIG. 19 is a flowchart showing an example of the operation of the display system 100f in the present embodiment.
  • the detection unit 41 of the image setting device 4 detects the position of the first display unit 11 (step S10).
  • the setting unit 43 calculates the relative position between the first display unit 11 and the second display unit 12 based on the detected position of the first display unit 11.
  • the extraction unit 42 of the image setting device 4 acquires the first image information supplied from the image information supply device 2f (step S20).
  • the extraction unit 42 generates an edge image PE from which the edge portion E of the first image P11 is extracted from the acquired first image information (step S30).
  • the extraction unit 42 outputs the generated edge image PE to the setting unit 43.
  • the setting unit 43 of the image setting device 4 deforms the edge image PE extracted in step S30 based on the position of the first display unit 11 detected in step S10 (step S40).
  • the setting unit 43 sets the deformed edge image PE as the second image P12, and outputs the set second image P12 to the second display unit 12 (step S50).
  • the second display unit 12 displays the second image P12 set in this way.
  • the first display unit 11 displays the first image P11 supplied from the image information supply device 2f.
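The flow of steps S10 to S50 can be sketched end to end with stub functions standing in for the detector, extractor, and setter (all names here are illustrative, not from the patent):

```python
def run_image_setting(detect, acquire, extract, deform, display):
    """One pass of the image setting device: S10 detect the position,
    S20 acquire first image information, S30 extract the edge image,
    S40 deform it, S50 set and output the second image."""
    position = detect()                           # S10
    first_image = acquire()                       # S20
    edge_image = extract(first_image)             # S30
    second_image = deform(edge_image, position)   # S40
    display(second_image)                         # S50
    return second_image

# trivial stand-ins that just record the data flow:
out = []
result = run_image_setting(
    detect=lambda: (0, 0, 2),
    acquire=lambda: "P11",
    extract=lambda img: f"edges({img})",
    deform=lambda e, pos: f"deformed({e}@{pos})",
    display=out.append,
)
print(result)  # deformed(edges(P11)@(0, 0, 2))
```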
  • as described above, the display system 100f of the present embodiment includes the setting unit 43 that sets the first image P11 or the second image P12 in the case where the light of the second image P12 displayed by the second display unit 12 is transmitted through the first display unit 11.
  • the setting unit 43 sets the first image P11 or the second image P12 so that the edge portion E of the first image P11 displayed by the first display unit 11 corresponds to the edge portion E indicated by the second image P12.
  • thereby, the display system 100f sets the first image P11 or the second image P12 according to the relative position between the first display unit 11 and the second display unit 12 (that is, the position of the observer 1 wearing the head mounted display 50).
  • the display system 100f can set the second image P12 according to the position of the observer 1. That is, the display system 100f can display a wide range that the observer 1 can recognize as a stereoscopic image.
  • the display system 100f extracts a detection unit 41 that detects a relative position between the first display unit 11 and the second display unit 12, and an edge portion E of the first image P11 displayed by the first display unit 11. And an extraction unit 42. Further, the setting unit 43 of the display system 100f deforms the edge portion E extracted by the extraction unit 42 based on the relative position detected by the detection unit 41. Further, the setting unit 43 sets an image showing the deformed edge portion E as the second image P12 corresponding to the edge portion E of the first image P11. Thereby, the display system 100f generates second image information from the first image information.
  • as described above, the display system 100f generates second image information indicating the second image P12 according to the relative position between the first display unit 11 and the second display unit 12 (that is, the position of the observer 1 wearing the head mounted display 50).
  • the display system 100f can generate the second image information indicating the second image P12 according to the position of the observer 1. That is, the display system 100f can display a wide range that the observer 1 can recognize as a stereoscopic image.
  • the display system 100f can display the second image P12 corresponding to the first image P11 on the second display unit 12 without acquiring the second image information from the image information supply device 2f. That is, the display system 100f can receive supply of image information from the general-purpose image information supply device 2f that supplies only the first image information.
  • the head mounted display 50 may include the image setting device 4.
  • the second display unit 12 may include the image setting device 4.
  • the display system 100f can incorporate the image setting device 4 in any of the devices, and thus the display system 100f can be downsized.
  • FIG. 20 is a block diagram showing an example of the configuration of a display system 100g according to the eighth embodiment of the present invention.
  • the display system 100g includes an image setting device 4g.
  • the image setting device 4g includes a determination unit 44, and sets the second image P12 based on the supplied first image P11.
  • the determination unit 44 determines whether or not the observer 1 is at a position where the second display unit 12 can be observed via the first display unit 11.
  • when the determination unit 44 determines that the observer 1 is at such a position, it supplies the second image P12 set by the setting unit 43 to the second display unit 12. That is, the determination unit 44 determines, based on the relative position detected by the detection unit 41, whether or not the light of the second image P12 is incident on the first display unit 11 when the second display unit 12 displays the second image P12. Specifically, the determination unit 44 calculates the relative position between the first display unit 11 and the second display unit 12 in the same manner as the setting unit 43 described above.
  • next, the determination unit 44 determines, based on the calculated relative position, whether or not the first display unit 11 is within the preset display range of the second display unit 12. When the determination unit 44 determines that the first display unit 11 is within the display range of the second display unit 12, the determination unit 44 outputs second image information for displaying the second image P12 set by the setting unit 43 to the second display unit 12. On the other hand, when the determination unit 44 determines that the first display unit 11 is not within the display range of the second display unit 12, the determination unit 44 does not output the second image information for displaying the second image P12 set by the setting unit 43 to the second display unit 12.
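The determination can be sketched as a range test on the calculated relative position (the display-range limits below are hypothetical):

```python
def should_display_second_image(relative_pos, display_range):
    """True when the first display unit (the HMD) lies within the preset
    display range of the second display unit, i.e. the light of the
    second image P12 would enter the first display unit."""
    (xmin, xmax), (zmin, zmax) = display_range
    x, _, z = relative_pos
    return xmin <= x <= xmax and zmin <= z <= zmax

display_range = ((-1.0, 1.0), (0.5, 3.0))  # hypothetical X and Z limits
print(should_display_second_image((0.2, 1.6, 2.0), display_range))  # True
print(should_display_second_image((2.5, 1.6, 2.0), display_range))  # False
```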
  • as described above, the display system 100g of the present embodiment includes the determination unit 44 that determines, based on the relative position detected by the detection unit 41, whether or not the light of the second image P12 is incident on the first display unit 11 when the second display unit 12 displays the second image P12.
  • the second display unit 12 of the display system 100g displays the second image P12 when the determination unit 44 determines that the light of the second image P12 is incident on the first display unit 11.
  • the second image P12 is an image showing the edge portion E of the first image P11. For this reason, when the observer 1 who does not wear the head mounted display 50 observes the second image P12, the observer 1 observes an image whose contents are difficult to understand.
  • in addition, the observer 1 may feel uncomfortable when observing an image showing the edge portion E, whose content is difficult to understand.
  • the display system 100g does not display the second image P12 on the second display unit 12 when there is no observer 1 who is wearing the head mounted display 50 and observing the second display unit 12. Therefore, the display system 100g can reduce the discomfort that an observer 1 who is not wearing the head mounted display 50 would feel by observing the second image P12 (that is, an image whose content is difficult to understand). Furthermore, in this case, the display system 100g can reduce power consumption by not displaying the second image P12.
  • FIG. 21 is a block diagram showing an example of the configuration of a display system 100h according to the ninth embodiment of the present invention.
  • the display system 100h includes an image information supply device 2h and an image setting device 4h.
  • the image information supply device 2h supplies the first image information for displaying the first image P11 to the first display unit 11 and the image setting device 4h.
  • the image information supply device 2h supplies third image information for displaying the third image P13 to the image setting device 4h.
  • the third image P13 is an image displayed on the second display unit 12 and is an image different from the second image P12.
  • unlike the edge image PE, the third image P13 is an image whose content is easy to understand when observed by an observer 1 who is not wearing the head mounted display 50.
  • the third image P13 is an image that is less likely to cause the observer 1 to feel uncomfortable than when the observer 1 observes the second image P12. Note that the third image P13 may be the same image as the first image P11.
  • the image setting device 4h includes a determination unit 44h.
  • similarly to the determination unit 44, the determination unit 44h determines whether or not the observer 1 wearing the head mounted display 50 is at a position where the second display unit 12 can be observed via the first display unit 11.
  • when the determination unit 44h determines that the first display unit 11 is within the display range of the second display unit 12, the determination unit 44h outputs second image information for displaying the second image P12 set by the setting unit 43 to the second display unit 12.
  • on the other hand, when the determination unit 44h determines that the first display unit 11 is not within the display range of the second display unit 12, the determination unit 44h supplies third image information for displaying the supplied third image P13 to the second display unit 12. That is, the third image P13 is supplied to the second display unit 12.
  • in this way, the second display unit 12 displays the third image P13, which is different from the second image P12, when the determination unit 44h determines that the light of the second image P12 does not enter the first display unit 11.
  • thereby, the display system 100h can display the third image P13 on the second display unit 12 instead of the second image P12 when the observer 1 wearing the head mounted display 50 is not observing the second display unit 12.
  • the second image P12 is an image showing the edge portion E of the first image P11.
  • on the other hand, the third image P13 is an image different from the second image P12, and is an image whose content is easy to understand when observed by an observer 1 who is not wearing the head mounted display 50.
  • for example, the third image P13 is the same image as the first image P11.
  • thereby, when there is no observer 1 who is wearing the head mounted display 50 and observing the second display unit 12, the display system 100h can display the third image P13, whose content is easy to understand, instead of an image whose content is difficult to understand.
  • therefore, the display system 100h can reduce the discomfort that an observer 1 who is not wearing the head mounted display 50 would feel by observing the second image P12 (that is, an image whose content is difficult to understand).
  • the second display unit 12 may display the second image P12 so as to overlap the third image P13.
  • in this case, the second image P12 may be displayed with its brightness (for example, luminance) reduced based on the brightness of the third image P13.
  • thereby, the display system 100h can display the first image P11 and the second image P12 in correspondence with each other, and can therefore display a stereoscopic image over a wide range for the observer 1 wearing the head mounted display 50. Accordingly, the display system 100h can display the third image P13, which is different from the second image P12, while reducing the discomfort that an observer 1 who is not wearing the head mounted display 50 would feel by observing the second image P12.
  • FIG. 22 is a block diagram showing an example of the configuration of a display system 100j according to the tenth embodiment of the present invention.
  • the display system 100j includes an image information supply device 2j and an image setting device 4j.
  • the image information supply device 2j supplies the first image information including the depth position information of the stereoscopic image to the image setting device 4j. That is, the first image information is image information for displaying a stereoscopic image.
  • the image setting device 4j includes a parallax setting unit 45.
  • the parallax setting unit 45 sets the binocular parallax so that the range of depth positions of the three-dimensional image set by the binocular parallax becomes a range of depth positions including the depth position where the second display unit 12 displays the second image P12. Specifically, the parallax setting unit 45 calculates the relative position between the first display unit 11 and the second display unit 12 in the same manner as the setting unit 43. Next, the parallax setting unit 45 acquires first image information for displaying the first image P11 from the image information supply device 2j. Next, the parallax setting unit 45 calculates the depth position of the second display unit 12 with respect to the first display unit 11 based on the calculated relative position. The depth position of the second display unit 12 calculated by the parallax setting unit 45 will be described with reference to FIG.
  • FIG. 23 is a schematic diagram illustrating an example of the depth position of the three-dimensional image in the present embodiment.
  • the parallax setting unit 45 calculates the distance Lp between the first display unit 11 and the second display unit 12 as the depth position of the second display unit 12 with respect to the first display unit 11.
  • the parallax setting unit 45 sets the range of the distance Lip in the depth direction around the calculated distance Lp as the range of the depth position where the stereoscopic image is displayed.
  • Based on the depth position information of the stereoscopic image included in the acquired first image information, the parallax setting unit 45 sets the binocular parallax of the left-eye image P11L and the right-eye image P11R so that the stereoscopic image is displayed within the set range of depth positions (that is, within the range of the distance Lip).
  • In this way, the parallax setting unit 45 sets the binocular parallax of the left-eye image P11L and the right-eye image P11R so that the stereoscopic image is displayed in the range of depth positions that includes the depth position at which the second image P12 is displayed.
  • As a result, for the observer 1 who observes the first image P11, the first display unit 11 displays the stereoscopic image IP11 corresponding to the first image P11 at a depth position within the range of the distance Lip.
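As an illustrative sketch of the parallax-setting step above (this code is not from the patent; the pinhole-stereo disparity formula, the interpupillary distance, the pixels-per-metre factor, and all names are assumptions), the requested depth of the stereoscopic image can first be clamped into a band of width Lip centred on a target depth, and the resulting depth then converted into opposite horizontal shifts for the left-eye and right-eye images:

```python
# Hypothetical sketch of a parallax setting step such as unit 45 performs.
# Depths are distances from the observer, in metres; the clamping band of
# width `band_width` centred on `band_center` plays the role of the range
# Lip around the distance Lp in the text. The disparity formula is the
# standard similar-triangles (pinhole) approximation, an assumption here.

def clamp_depth(z, band_center, band_width):
    """Clamp a requested depth z into [band_center - band_width/2, band_center + band_width/2]."""
    lo, hi = band_center - band_width / 2.0, band_center + band_width / 2.0
    return max(lo, min(hi, z))

def disparity_px(z, display_dist, ipd=0.064, px_per_m=4000.0):
    """Horizontal on-screen disparity (in pixels) for a point at depth z,
    rendered on a display at display_dist: d = ipd * (display_dist - z) / z.
    Positive disparity places the point in front of the display."""
    return ipd * (display_dist - z) / z * px_per_m

def set_parallax(z, band_center, band_width, display_dist):
    """Clamp the depth, then split the disparity between the two eye images."""
    d = disparity_px(clamp_depth(z, band_center, band_width), display_dist)
    return (-d / 2.0, +d / 2.0)  # horizontal pixel shifts for left/right images
```

With this sketch, any depth requested by the first image information is pulled into the band around a chosen depth position before the left-eye and right-eye shifts are computed, mirroring the behaviour the text attributes to the parallax setting unit 45.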
  • The first display unit 11 of the display system 100j of the present embodiment includes a left-eye display unit 11L that displays the left-eye image P11L to the left eye of the observer 1, and a right-eye display unit 11R that displays the right-eye image P11R to the right eye of the observer 1.
  • the first image P11 includes a left-eye image P11L and a right-eye image P11R that have binocular parallax.
  • The display system 100j includes the parallax setting unit 45, which sets the binocular parallax so that the range of depth positions of the stereoscopic image defined by binocular parallax becomes a range of depth positions that includes the depth position at which the second display unit 12 displays the second image P12.
  • Accordingly, the display system 100j displays each image with the depth position of the stereoscopic image produced by the binocular parallax included in the first image P11 brought close to the depth position of the second image P12, which indicates the edge portion E of the first image P11.
  • By bringing the depth position of the stereoscopic image produced by binocular parallax close to the depth position of the second image P12, the display system 100j makes it easier for the observer 1 to associate the first image P11 with the second image P12. That is, the display system 100j can reduce the sense of incongruity that occurs when the observer 1 observes the first image P11 having binocular parallax together with the second image P12.
  • FIG. 24 is a block diagram showing an example of the configuration of a display system 100k according to the eleventh embodiment of the present invention.
  • the display system 100k includes an image information supply device 2k, an image setting device 4k, and a head mounted display 50k.
  • the image information supply device 2k supplies image information to the image setting device 4k and the head mounted display 50k.
  • The image setting device 4k sets the second image information based on the input image information and the detected relative position of the first display unit 11 and the second display unit 12, in the same manner as the image setting device 4.
  • the head mounted display 50k includes an image detection unit 51 and an image setting unit 52.
  • the image detection unit 51 includes an image sensor (not shown), and detects the second image P12 displayed by the second display unit 12 by the image sensor. Further, the image detection unit 51 outputs image information indicating the detected second image P12 to the image setting unit 52.
  • the image setting unit 52 acquires image information indicating the second image P12 detected by the image detection unit 51.
  • The image setting unit 52 selects an image corresponding to the detected second image P12 from among the images supplied by the image information supply device 2k, and acquires the image information of the selected image from the image information supply device 2k.
  • the image setting unit 52 outputs the image information acquired from the image information supply device 2k to the first display unit 11 as the first image information indicating the first image P11. In this way, the first display unit 11 displays the first image P11 set by the image setting unit 52.
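The selection step described above can be sketched as follows. This is an assumption for illustration, not code from the patent: a real system would use a robust image-matching method, while this toy version compares a detected edge image against crude edge maps of the candidate images by sum of absolute differences.

```python
# Hypothetical sketch of how an image setting unit such as unit 52 could
# pick, from several candidate images, the one whose edge image best
# matches the second image P12 detected by the image sensor. Images are
# plain nested lists of luminance values here.

def edge_image(img):
    """Crude horizontal-gradient edge map: |difference of neighbouring pixels|."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)] for row in img]

def match_score(a, b):
    """Sum of absolute differences between two equal-sized edge maps (lower = better)."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def select_image(detected_edge, candidates):
    """Return the index of the candidate whose edge map is closest to the
    detected second image (the edge image shown on the second display unit)."""
    scores = [match_score(detected_edge, edge_image(c)) for c in candidates]
    return scores.index(min(scores))
```

The selected index would then be used to request the corresponding image information from the image information supply device, as the passage describes.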
  • the second display unit 12 displays the second image P12 set by the image setting device 4k.
  • FIG. 25 is a schematic diagram illustrating an example of the configuration of the display system 100k in the present embodiment.
  • the image information supply device 2k includes an image server that is operated by an image distributor.
  • the image information supply device 2k supplies the plurality of image information stored in the image server to the image setting device 4k via the antenna A1 and the antenna A2.
  • The image setting device 4k and the second display unit 12 constitute, for example, a digital signage system installed in station premises.
  • The image setting device 4k selects, from among the plurality of pieces of image information acquired from the image information supply device 2k via the antenna A1 and the antenna A2, image information indicating the image to be displayed on the second display unit 12 as the second image P12. Then, as described above, the image setting device 4k extracts the edge portion E of the image included in the selected image information, sets the second image P12, and supplies second image information indicating the set second image P12 to the second display unit 12.
  • the second display unit 12 displays the second image P12 based on the second image information supplied from the image setting device 4k.
  • the image information supply device 2k supplies a plurality of image information stored in the image server to the head mounted display 50k worn by the observer 1 via the network 5 and the antenna A3.
  • the head mounted display 50k includes an antenna (not shown), and receives a plurality of pieces of image information supplied from the image information supply device 2k via the network 5 and the antenna A3.
  • the observer 1 wearing the head mounted display 50k observes the second display unit 12.
  • the detection unit 41 of the image setting device 4 k detects the relative position of the first display unit 11.
  • the setting unit 43 of the image setting device 4k sets the second image P12 obtained by deforming the edge portion E based on the detected relative position.
  • the image detection unit 51 of the head mounted display 50k detects the second image P12 displayed by the second display unit 12.
  • the image setting unit 52 selects image information corresponding to the detected second image P12 from among a plurality of pieces of image information supplied from the image information supply device 2k. Then, the image setting unit 52 supplies the selected image information to the first display unit 11 as first image information indicating the first image P11.
  • the first display unit 11 displays the first image P11 based on the supplied image information.
  • the display system 100k uses the image displayed by the digital signage system, that is, the image corresponding to the second image P12 displayed by the second display unit 12 as the first image P11. Displayed on the first display unit 11.
  • the display system 100k includes the image detection unit 51 and the image setting unit 52.
  • the image detection unit 51 detects the second image P12 displayed by the second display unit 12.
  • the image setting unit 52 sets the first image P11 based on the second image P12 detected by the image detection unit 51, and supplies the set first image P11 to the first display unit 11.
  • The image setting unit 52 of the display system 100k selects the image to be displayed as the first image P11 from among a plurality of input images, based on the second image P12 detected by the image detection unit 51, and thereby sets the first image P11.
  • the display system 100k can set the first image P11 based on the second image P12 displayed on the second display unit 12. That is, the display system 100k can set the first image P11 to be displayed on the first display unit 11 by using the second image P12.
  • the display system 100k displays an image indicating the edge portion E of the signage image on the second display unit 12.
  • The signage image itself is displayed on the first display unit 11 provided in the head-mounted display 50k worn by the observer 1.
  • Since the display system 100k displays the signage image in response to the observer 1 observing the second display unit 12, it can present the signage image so as to leave a strong impression. That is, the display system 100k can heighten the impression that the observer 1 receives when observing the displayed image.
  • The image setting unit 52 may set the first image P11 by generating the first image information. In this case, the image setting unit 52 supplies the generated first image information to the first display unit 11. Even with this configuration, the display system 100k can display the first image P11 corresponding to the second image P12. Accordingly, the display system 100k can display the first image P11 on the first display unit 11 even if the head-mounted display 50k does not include the antenna A3.
  • The image detection unit 51 and the image setting unit 52 (hereinafter collectively referred to as the control unit CONT), or each unit included in the control unit CONT, may be realized by dedicated hardware, or may be realized by a memory and a microprocessor.
  • Each unit included in the control unit CONT may include a memory and a CPU (central processing unit), and its function may be realized by loading a program for realizing the function of that unit into the memory and executing it.
  • The processing performed by each unit included in the control unit CONT may also be carried out by recording a program for realizing the functions of those units on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system.
  • The “computer-readable recording medium” also includes a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or via a communication line such as a telephone line, as well as a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
  • The program may be one that realizes a part of the functions described above, or one that realizes the functions described above in combination with a program already recorded in the computer system.

Abstract

This display apparatus is provided with: a first display unit, which displays a first image; and a second display unit, which is a transmissive display unit worn in front of the eyes of a user and capable of transmitting light incident on it. The second display unit transmits, in the light-transmitting direction, the light of the first image displayed by the first display unit, and displays, in the light-transmitting direction, a second image corresponding to the first image.

Description

表示装置及びプログラムDisplay device and program
 本発明は、表示装置及びプログラムに関する。 The present invention relates to a display device and a program.
 近年、例えば、立体像(3次元画像)の奥行き位置に応じた明るさ(例えば、輝度)の比を付けた複数の画像を、奥行き位置の異なる複数の表示面に表示させることによって、これら複数の画像を視認した観察者に立体像を認識させる表示方法が知られている(例えば、特許文献1を参照)。 In recent years, there has been known a display method in which a plurality of images, given a brightness (for example, luminance) ratio corresponding to the depth position of a stereoscopic image (three-dimensional image), are displayed on a plurality of display surfaces at different depth positions, so that an observer who views these images perceives a stereoscopic image (see, for example, Patent Document 1).
特許3464633号公報Japanese Patent No. 3464633
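The prior-art method summarized above can be sketched in a few lines. This is illustrative only: the linear luminance weighting below is an assumed simplification for exposition, not a formula taken from Patent Document 1.

```python
# Hypothetical sketch of depth-weighted luminance division across two
# display surfaces, the prior-art technique the passage describes: a
# pixel's total luminance is split between a front and a back surface
# in proportion to where the intended depth lies between them.

def split_luminance(l_total, depth, z_front, z_back):
    """Split luminance between front/back display surfaces so that the
    perceived depth lies at `depth` (between z_front and z_back).
    Returns (front_luminance, back_luminance)."""
    w = (depth - z_front) / (z_back - z_front)  # 0 at the front plane, 1 at the back
    w = min(1.0, max(0.0, w))                   # keep the weight inside [0, 1]
    return l_total * (1.0 - w), l_total * w
```

As the passage notes, this fused-depth effect only holds for viewpoints where the two surfaces line up, which is the limitation the present invention addresses.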
 しかしながら、上記のような表示方法は、観察者が立体像(3次元画像)を認識することができる視点が限定されることがあり、この場合には、観察者が立体像として認識可能な範囲を広範囲にして表示することができないという問題があった。 However, with the display method described above, the viewpoints from which the observer can recognize a stereoscopic image (three-dimensional image) may be limited, and in this case there is a problem that the range the observer can recognize as a stereoscopic image cannot be displayed over a wide area.
 本発明は、上記問題を解決すべくなされたもので、その目的は、観察者が立体像として認識可能な範囲を広範囲にして表示することができる表示装置を提供することにある。 The present invention has been made to solve the above-described problems, and an object of the present invention is to provide a display device that can display a wide range of a range that an observer can recognize as a stereoscopic image.
[1]本発明の一態様である表示装置は、第1被写体を含む第1画像を表示する第1表示部と、前記第1表示部が表示する前記第1画像の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示し、前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する、ユーザに可搬される第2表示部とを備えることを特徴とする。 [1] A display device according to one aspect of the present invention includes: a first display unit that displays a first image including a first subject; and a second display unit, portable by a user, that transmits light of the first image displayed by the first display unit and displays a second image including a second subject corresponding to the first subject, such that at least a part of the first subject and the second subject overlap each other.
[2]また、本発明の一態様である表示装置は、第1被写体からの光を含む第1の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示し、前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する、ユーザに可搬される表示部とを備えることを特徴とする。 [2] A display device according to one aspect of the present invention includes a display unit, portable by a user, that transmits first light including light from a first subject and displays a second image including a second subject corresponding to the first subject, such that at least a part of the first subject and the second subject overlap each other.
[3]また、本発明の一態様であるプログラムは、第1被写体を含む第1画像を表示する第1表示部と、前記第1表示部が表示する前記第1画像の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示する、ユーザに可搬される第2表示部とを有する表示装置のコンピュータに、前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する表示手順を実行させる。 [3] A program according to one aspect of the present invention causes a computer of a display device, which has a first display unit that displays a first image including a first subject and a second display unit, portable by a user, that transmits light of the first image displayed by the first display unit and displays a second image including a second subject corresponding to the first subject, to execute a display procedure of displaying the images such that at least a part of the first subject and the second subject overlap each other.
[4]また、本発明の一態様である表示装置は、観察者の頭部に装着される装着部と、前記装着部に有し、第1被写体からの光を含む第1の光を透過させ、前記第1被写体に対応する第2画像を表示する表示部とを備え、前記表示部は、前記表示部を介して前記第1被写体と前記第2画像とを観察した前記観察者に対して、前記表示部を介さずに前記第1被写体を観察した場合とは異なる立体感を視認させることを特徴とする。 [4] A display device according to one aspect of the present invention includes a mounting unit mounted on an observer's head, and a display unit, provided on the mounting unit, that transmits first light including light from a first subject and displays a second image corresponding to the first subject. The display unit causes the observer who has observed the first subject and the second image through the display unit to perceive a stereoscopic effect different from that perceived when the first subject is observed without the display unit.
[5]また、本発明の一態様である表示装置は、観察者の頭部に装着される表示装置であって、第1画像を接眼光学系により表示し、表示される前記第1画像のエッジ部分を示す第2画像の光を前記第1画像を表示する方向に透過させる第1表示部を備えることを特徴とする。 [5] A display device according to one aspect of the present invention is a display device mounted on an observer's head, and includes a first display unit that displays a first image through an eyepiece optical system and transmits, in the direction in which the first image is displayed, light of a second image indicating an edge portion of the displayed first image.
[6]また、本発明の一態様である表示装置は、第1画像を表示し、入射する光を透過させる第1表示部と、前記第1画像のエッジ部分を示す第2画像を検出する画像検出部と、前記画像検出部が検出した前記第2画像を第2表示部に供給する супply部と、前記第2表示部に対する前記第1表示部の位置を検出する検出部と、前記供給部が供給する前記第2画像と前記検出部が検出する前記位置とに基づいて、前記第1画像のエッジ部分を示す前記第2画像を設定する設定部とを備えることを特徴とする。 [6] A display device according to one aspect of the present invention includes: a first display unit that displays a first image and transmits incident light; an image detection unit that detects a second image indicating an edge portion of the first image; a supply unit that supplies the second image detected by the image detection unit to a second display unit; a detection unit that detects the position of the first display unit with respect to the second display unit; and a setting unit that sets the second image indicating the edge portion of the first image based on the second image supplied by the supply unit and the position detected by the detection unit.
[7]また、本発明の一態様である表示装置は、第1画像を表示し、入射する光を透過させる第1表示部に、前記第1画像を供給する供給部と、前記第1表示部の位置を検出する検出部と、前記供給部が供給する前記第1画像と前記検出部が検出する前記位置とに基づいて、前記第1画像のエッジ部分を示す第2画像を設定する設定部と、前記設定部が設定した前記第2画像を表示する第2表示部とを備えることを特徴とする。 [7] A display device according to one aspect of the present invention includes: a supply unit that supplies a first image to a first display unit that displays the first image and transmits incident light; a detection unit that detects the position of the first display unit; a setting unit that sets a second image indicating an edge portion of the first image based on the first image supplied by the supply unit and the position detected by the detection unit; and a second display unit that displays the second image set by the setting unit.
 この発明によれば、観察者が立体像(3次元画像)として認識可能な範囲を広範囲にして表示することができる。 According to the present invention, it is possible to display a wide range that can be recognized as a stereoscopic image (three-dimensional image) by the observer.
本発明の第1の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 1st Embodiment of this invention. 本実施形態の第2表示部の構成の一例を示す斜視図である。It is a perspective view which shows an example of a structure of the 2nd display part of this embodiment. 本実施形態における第1の画像の一例を示す模式図である。It is a schematic diagram which shows an example of the 1st image in this embodiment. 本実施形態における第2の画像の一例を示す模式図である。It is a schematic diagram which shows an example of the 2nd image in this embodiment. 本実施形態における表示装置によって表示される画像の一例を示す模式図である。It is a schematic diagram which shows an example of the image displayed by the display apparatus in this embodiment. 本実施形態における光学像の一例を示す模式図である。It is a schematic diagram which shows an example of the optical image in this embodiment. 本実施形態における光学像の明るさの分布の一例を示すグラフである。It is a graph which shows an example of the brightness distribution of the optical image in this embodiment. 本実施形態における左眼と右眼とに生じる両眼視差の一例を示すグラフである。It is a graph which shows an example of the binocular parallax which arises in the left eye and right eye in this embodiment. 本発明の第2の実施形態に係る左眼用視差画像の一例を示す模式図である。It is a schematic diagram which shows an example of the parallax image for left eyes which concerns on the 2nd Embodiment of this invention. 本実施形態における右眼用視差画像の一例を示す模式図である。It is a schematic diagram which shows an example of the parallax image for right eyes in this embodiment. 本発明の第3の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 3rd Embodiment of this invention. 本実施形態における表示装置の動作の一例を示すフローチャートである。It is a flowchart which shows an example of operation | movement of the display apparatus in this embodiment. 本発明の第4の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 4th Embodiment of this invention. 
本実施形態の変形例における設定部の設定の一例を示す模式図である。It is a schematic diagram which shows an example of the setting of the setting part in the modification of this embodiment. 本発明の第5の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 5th Embodiment of this invention. 本発明の第6の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 6th Embodiment of this invention. 本実施形態における表示装置によって表示される画像の一例を示す模式図である。It is a schematic diagram which shows an example of the image displayed by the display apparatus in this embodiment. 本発明の第7の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 7th Embodiment of this invention. 本実施形態における表示システムの動作の一例を示すフローチャートである。It is a flowchart which shows an example of operation | movement of the display system in this embodiment. 本発明の第8の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 8th Embodiment of this invention. 本発明の第9の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 9th Embodiment of this invention. 本発明の第10の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 10th Embodiment of this invention. 本実施形態における三次元画像の奥行き位置の一例を示す模式図である。It is a schematic diagram which shows an example of the depth position of the three-dimensional image in this embodiment. 本発明の第11の実施形態に係る表示システムの構成の一例を示す構成図である。It is a block diagram which shows an example of a structure of the display system which concerns on the 11th Embodiment of this invention. 本実施形態における表示システムの構成の一例を示す模式図である。It is a schematic diagram which shows an example of a structure of the display system in this embodiment.
 [第1の実施形態]
 以下、図面を参照して、本発明の第1の実施形態を説明する。
 図1は、本実施形態における表示システム100の構成の一例を示す構成図である。
 本実施形態の表示システム100は、画像情報供給装置2と、表示装置10とを備えている。表示装置10は、第1表示部11と、第2表示部12とを備えている。
 以下、各図の説明においてはXYZ直交座標系を設定し、このXYZ直交座標系を参照しつつ各部の位置関係について説明する。第1表示部11が画像を表示している方向をZ軸の正の方向とし、当該Z軸方向に垂直な平面上の直交方向をそれぞれX軸方向及びY軸方向とする。ここでは、X軸方向は、第1表示部11の水平方向とし、Y軸方向は第1表示部11の鉛直方向とする。
[First Embodiment]
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a configuration diagram illustrating an example of a configuration of a display system 100 according to the present embodiment.
The display system 100 of this embodiment includes an image information supply device 2 and a display device 10. The display device 10 includes a first display unit 11 and a second display unit 12.
Hereinafter, in the description of each drawing, an XYZ orthogonal coordinate system is set, and the positional relationship of each part will be described with reference to this XYZ orthogonal coordinate system. A direction in which the first display unit 11 displays an image is a positive direction of the Z axis, and orthogonal directions on a plane perpendicular to the Z axis direction are an X axis direction and a Y axis direction, respectively. Here, the X-axis direction is the horizontal direction of the first display unit 11, and the Y-axis direction is the vertical direction of the first display unit 11.
 画像情報供給装置2は、第1表示部11に第1の画像情報を供給するとともに、第2表示部12に第2の画像情報を供給する。ここで、第1の画像情報は、第1表示部11に表示される第1の画像P11を表示するための情報である。第2の画像情報は、第2表示部12に表示される第2の画像P12を表示するための情報であり、第1の画像情報に基づいて生成されているエッジ画像PEの画像情報である。このエッジ画像PEとは、第1の画像P11内のエッジ部分Eを示す画像である。このエッジ画像PEについては、図3を参照して後述する。また、この第2の画像情報には、図4を参照して後述するように、左眼用画像P12Lを示す画像情報と右眼用画像P12Rを示す画像情報とが含まれている。 The image information supply device 2 supplies the first image information to the first display unit 11 and also supplies the second image information to the second display unit 12. Here, the first image information is information for displaying the first image P11 displayed on the first display unit 11. The second image information is information for displaying the second image P12 displayed on the second display unit 12, and is image information of the edge image PE generated based on the first image information. . The edge image PE is an image showing the edge portion E in the first image P11. The edge image PE will be described later with reference to FIG. The second image information includes image information indicating the left-eye image P12L and image information indicating the right-eye image P12R, as will be described later with reference to FIG.
 表示装置10は、上述したように、第1表示部11と第2表示部12とを備えており、画像情報供給装置2から取得した第1の画像情報に基づいて、第1の画像P11を表示するとともに、画像情報供給装置2から取得した第2の画像情報に基づいて、第2の画像P12を表示する。 As described above, the display device 10 includes the first display unit 11 and the second display unit 12, and based on the first image information acquired from the image information supply device 2, the first image P11 is displayed. While displaying, based on the 2nd image information acquired from the image information supply apparatus 2, the 2nd image P12 is displayed.
 第1表示部11は、(+Z)方向に向けて画像を表示する第1表示面110を備えており、画像情報供給装置2から取得した第1の画像情報に基づいて、第1の画像P11を第1表示面110に表示する。第1表示面110に表示された第1の画像P11から発せられる第1光束R11は、椅子3に座っている観察者1(以下の説明において、ユーザとも記載する。)に光学像として視認される。この観察者1は、第1表示面110から(+Z)方向に所定の距離だけ離れた位置に備えられる椅子3に座って、-Z方向にある第1表示面110を観察している。 The first display unit 11 includes a first display surface 110 that displays an image in the (+ Z) direction. Based on the first image information acquired from the image information supply device 2, the first image P11 is displayed. Is displayed on the first display surface 110. The first light beam R11 emitted from the first image P11 displayed on the first display surface 110 is visually recognized as an optical image by the observer 1 (also referred to as a user in the following description) sitting on the chair 3. The This observer 1 sits on the chair 3 provided at a position away from the first display surface 110 in the (+ Z) direction by a predetermined distance, and observes the first display surface 110 in the −Z direction.
 次に、図2を参照して、第2表示部12について説明する。
 図2は、本実施形態の第2表示部12の構成の一例を示す斜視図である。第2表示部12とは、入射する光を透過可能な透過型のヘッドマウントディスプレイ50が備える表示部である。この第2表示部12は、ヘッドマウントディスプレイ50を装着する観察者1の眼の前において画像を表示する。すなわち、第2表示部12は、ユーザに可搬される可搬型の透過型ディスプレイである。この第2表示部12は、レンズなどの光学系(不図示)を備えており、観察者1に虚像を表示する。また、第2表示部12は、左眼表示部12Lと右眼表示部12Rとを備えている。この第2表示部12の左眼表示部12Lは、観察者1に装着された場合に、画像情報供給装置2から供給される第2の画像情報に含まれる左眼用画像P12Lを観察者1の左眼Lに視認されるように表示する。同様に、第2表示部12の右眼表示部12Rは、第2の画像情報に含まれる右眼用画像P12Rを観察者1の右眼Rに視認されるように表示する。
Next, the second display unit 12 will be described with reference to FIG.
FIG. 2 is a perspective view showing an example of the configuration of the second display unit 12 of the present embodiment. The 2nd display part 12 is a display part with which the transmissive head mount display 50 which can permeate | transmit incident light is provided. The second display unit 12 displays an image in front of the eyes of the observer 1 wearing the head mounted display 50. That is, the second display unit 12 is a portable transmissive display that is portable by the user. The second display unit 12 includes an optical system (not shown) such as a lens, and displays a virtual image to the observer 1. The second display unit 12 includes a left eye display unit 12L and a right eye display unit 12R. When the left eye display unit 12L of the second display unit 12 is attached to the viewer 1, the left eye image P12L included in the second image information supplied from the image information supply device 2 is displayed on the viewer 1. The left eye L is displayed so as to be visually recognized. Similarly, the right eye display unit 12R of the second display unit 12 displays the right eye image P12R included in the second image information so that the right eye R of the observer 1 can visually recognize it.
 また、第2表示部12は、観察者1が椅子3に座って第1表示部11を観察した場合に、第1表示部11が表示する第1の画像の第1光束R11(光)を透過方向に透過させる。ここで、透過方向は(+Z)方向である。つまり、第2表示部12の左眼表示部12Lは、入射する第1光束R11のうちの左眼第1光束R11Lを透過させる。同様に、第2表示部12の右眼表示部12Rは、入射する第1光束R11のうちの右眼第1光束R11Rを透過させる。このようにして左眼表示部12Lは、入射する左眼第1光束R11Lを透過させるとともに、左眼第2光束R12Lを発する。この左眼第2光束R12L及び左眼第1光束R11Lは、対応する光学像として(-Z)方向を観察している観察者1の左眼に視認される。同様に、右眼表示部12Rは、入射する右眼第1光束R11Rを透過させるとともに、右眼第2光束R12Rを発する。つまり、この右眼第2光束R12R及び右眼第1光束R11Rは、対応する光学像として(-Z)方向を観察している観察者1の右眼に視認される。 The second display unit 12 also emits the first light beam R11 (light) of the first image displayed by the first display unit 11 when the observer 1 sits on the chair 3 and observes the first display unit 11. Transmit in the transmission direction. Here, the transmission direction is the (+ Z) direction. That is, the left eye display unit 12L of the second display unit 12 transmits the left eye first light beam R11L of the incident first light beam R11. Similarly, the right eye display unit 12R of the second display unit 12 transmits the right eye first light beam R11R out of the incident first light beam R11. In this way, the left eye display unit 12L transmits the incident left eye first light beam R11L and emits the left eye second light beam R12L. The left eye second light beam R12L and the left eye first light beam R11L are visually recognized by the left eye of the observer 1 who is observing the (−Z) direction as corresponding optical images. Similarly, the right eye display unit 12R transmits the incident right eye first light beam R11R and emits the right eye second light beam R12R. That is, the right eye second light beam R12R and the right eye first light beam R11R are visually recognized by the right eye of the observer 1 who is observing the (−Z) direction as corresponding optical images.
 次に、図3及び図4を参照して、本実施形態の第1の画像P11と第2の画像P12を説明する。ここで、以下の図面において画像を示す場合には、便宜上、画像の明るさが明るい(例えば、輝度が高い)部分を薄く示している。
 図3は、本実施形態における第1の画像P11の一例を示す模式図である。図4は、本実施形態における第2の画像P12の一例を示す模式図である。第1の画像P11は、例えば、図3に示すように四角形のパターンを示す画像である。ここで、第1の画像P11が示す四角形のパターンにおいては、四角形を構成する4辺がそれぞれエッジ部分になりうるが、以下の説明においては、便宜上、四角形の左辺を示す左辺エッジ部分E1及び右辺を示す右辺エッジ部分E2をエッジ部分Eとして説明する。
Next, the first image P11 and the second image P12 of this embodiment will be described with reference to FIGS. Here, when an image is shown in the following drawings, a portion where the brightness of the image is bright (for example, high brightness) is shown thinly for convenience.
FIG. 3 is a schematic diagram illustrating an example of the first image P11 in the present embodiment. FIG. 4 is a schematic diagram illustrating an example of the second image P12 in the present embodiment. The first image P11 is, for example, an image showing a square pattern as shown in FIG. Here, in the quadrangular pattern indicated by the first image P11, the four sides constituting the quadrilateral can be edge portions, but in the following description, for the sake of convenience, the left side edge portion E1 indicating the left side of the quadrilateral and the right side The right-side edge portion E2 indicating the above is described as the edge portion E.
The second image P12 is, for example, an image including a left-side edge image PE1 showing the left-side edge portion E1 of the quadrangular pattern and a right-side edge image PE2 showing the right-side edge portion E2, as shown in FIG. 4. Here, an edge portion (which may also be referred to simply as an edge or an edge region) is, for example, a portion of an image where the brightness (for example, the luminance) of adjacent or neighboring pixels changes abruptly. The edge portion E is also, for example, a portion of an image (for example, the first image P11) that contains high-frequency components, or a portion extracted by applying to the image a filter that passes a predetermined frequency component (for example, a band-pass filter). For example, the edge portion E denotes the theoretical zero-width line segment along the left or right side of the quadrangle shown in FIG. 3, and also denotes the surrounding region of finite width corresponding to, for example, the resolution of the second display unit 12. The second image P12 includes a left-eye image P12L displayed by the left-eye display unit 12L of the second display unit 12 and a right-eye image P12R displayed by the right-eye display unit 12R of the second display unit 12. The left-eye image P12L is an image including a left-side edge image PE1L and a right-side edge image PE2L. The right-eye image P12R is an image including a left-side edge image PE1R and a right-side edge image PE2R. Since the left-eye image P12L and the right-eye image P12R are identical, they are collectively referred to as the second image P12 in the following description unless they need to be distinguished.
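The extraction of the edge portion E described above (a part of the first image P11 containing high-frequency components, obtainable with, for example, a band-pass filter) can be sketched as follows. This is an illustrative sketch only: the 3x3 Laplacian kernel and the threshold stand in for whichever filter an implementation actually uses, and are not taken from the embodiment.

```python
import numpy as np

def extract_edge_image(image, threshold=0.25):
    """Extract an edge image (the high-frequency part) from a 2-D luminance array.

    A 3x3 Laplacian kernel stands in for the band-pass filter named in
    the text; the kernel and the threshold are illustrative choices.
    """
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    response = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Correlate the kernel with the 3x3 neighborhood of (y, x).
            response[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    # Keep only locations where brightness changes abruptly.
    return (np.abs(response) > threshold).astype(float)

# A quadrangular pattern like the first image P11: bright square on a dark ground.
p11 = np.zeros((8, 8))
p11[2:6, 2:6] = 1.0
pe = extract_edge_image(p11)  # nonzero only along the square's sides
```

The returned array is nonzero along the sides of the square (the edge portions E1 and E2) and zero both inside the square and on the background, which is the property the second image P12 relies on.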
Next, a configuration in which the display device 10 displays the first image P11 and the second image P12 in association with each other will be described with reference to FIG. FIG. 5 is a schematic diagram illustrating an example of an image displayed by the display device 10 according to the present embodiment.
The first display unit 11 displays the first image P11 as a virtual image in the (+Z) direction so that it is visually recognized by the observer 1. Similarly, the second display unit 12 displays the second image P12 in the (+Z) direction so that it is visually recognized by the observer 1. The second image P12 is displayed at a position separated by a predetermined distance Lp in the (+Z) direction from the position where the first image P11 is displayed. As described above, the second display unit 12 is the display unit of the transmissive head-mounted display 50, which transmits light. Therefore, the first image P11 displayed on the first display unit 11 and the second image P12 displayed on the second display unit 12 are visually recognized by the observer 1 as overlapping each other. Here, the predetermined distance Lp is the distance between the depth position at which the first image P11 is displayed and the depth position at which the second image P12 is displayed, a depth position being a position in the Z-axis direction. The predetermined distance Lp is determined in advance based on the depth position at which the first image P11 is displayed and the depth position of the observer 1.
Here, the first image P11 displayed by the first display unit 11 and the second image P12 displayed by the second display unit 12 are images whose display timings are synchronized with each other. For example, when the first image P11 and the second image P12 are moving images, the second display unit 12 displays the second image P12 corresponding to the first image P11 in synchronization with the display timing of the first image P11 on the first display unit 11.
Further, as shown in FIG. 5, the second display unit 12 displays the left-eye image P12L so that the left-side edge portion E1 in the first image P11 displayed by the first display unit 11 and the left-side edge image PE1L of the left-eye image P12L corresponding to that edge portion are visually recognized in correspondence with each other. Similarly, the second display unit 12 displays the right-eye image P12R so that the right-side edge portion E2 in the first image P11 displayed by the first display unit 11 and the right-side edge image PE2R of the right-eye image P12R corresponding to that edge portion are visually recognized in correspondence with each other.
That is, the left-eye display unit 12L of the second display unit 12 displays the left-eye image P12L as a virtual image so that, to the left eye L of the observer 1, at least part of the first image P11 and the left-side edge image PE1L are visually recognized as overlapping. The left-eye display unit 12L likewise displays the left-eye image P12L as a virtual image so that, to the left eye L of the observer 1, at least part of the first image P11 and the right-side edge image PE2L of the left-eye image P12L are visually recognized as overlapping. For example, when an image of a person is displayed in the first image P11 and an edge image PE indicating the contour of that person is displayed in the left-eye image P12L, the left-eye display unit 12L displays the left-eye image P12L so that the edge image PE is visually recognized as overlapping only the contour of the person's hand. In this case, the left-eye display unit 12L need not display the edge image PE for portions other than the person's hand; alternatively, an edge image PE displayed for such portions need not be displayed so as to overlap them.
Specifically, the left-eye display unit 12L of the second display unit 12 displays the left-eye image P12L as a virtual image so that, to the left eye L of the observer 1, the left-side edge portion E1 of the quadrangle shown in the first image P11 and the left-side edge image PE1L of the left-eye image P12L are visually recognized as overlapping on the (-X) side of that edge portion (that is, outside the quadrangle). The left-eye display unit 12L also displays the left-eye image P12L as a virtual image so that, to the left eye L of the observer 1, the right-side edge portion E2 of the quadrangle shown in the first image P11 and the right-side edge image PE2L of the left-eye image P12L are visually recognized as overlapping on the (-X) side of that edge portion (that is, inside the quadrangle). Similarly, the right-eye display unit 12R of the second display unit 12 displays the right-eye image P12R as a virtual image so that, to the right eye R of the observer 1, the right-side edge portion E2 of the quadrangle shown in the first image P11 and the right-side edge image PE2R of the right-eye image P12R are visually recognized as overlapping on the (+X) side of that edge portion (that is, outside the quadrangle). The right-eye display unit 12R also displays the right-eye image P12R as a virtual image so that, to the right eye R of the observer 1, the left-side edge portion E1 of the quadrangle shown in the first image P11 and the left-side edge image PE1R of the right-eye image P12R are visually recognized as overlapping on the (+X) side of that edge portion (that is, inside the quadrangle). In this way, the second display unit 12 (display unit) transmits the light of the displayed first image P11 in the transmission direction and displays the second image P12 corresponding to the first image P11 in that direction.
Here, the first image P11 displayed by the first display unit 11 and the second image P12 displayed by the second display unit 12 are images of different sizes. Specifically, the interval between the left-side edge portion E1 and the right-side edge portion E2 of the first image P11 differs from the interval between the left-side edge image PE1L and the right-side edge image PE2L of the second image P12. For example, the interval between the left-side edge portion E1 and the right-side edge portion E2 of the first image P11 (the first interval) is wider than the interval between the left-side edge image PE1L and the right-side edge image PE2L of the second image P12 (the second interval). The first display unit 11 and the second display unit 12 display the first image P11 and the second image P12 so that, when the two images are viewed as overlapping from the position of the left eye L of the observer 1, the apparent sizes of the first interval and the second interval are the same. That is, the first display unit 11 and the second display unit 12 display the first image P11 and the second image P12 so that they appear the same size when viewed as overlapping from the position of the left eye L of the observer 1. Alternatively, the first display unit 11 and the second display unit 12 may display the images so that, when viewed as overlapping from the position of the left eye L of the observer 1, the apparent size of the first image P11 is slightly larger, or slightly smaller, than the apparent size of the second image P12.
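The size relationship just described, in which the second image is physically narrower yet has the same apparent size, follows from similar triangles: because the second image P12 lies the distance Lp closer to the eye than the first image P11, its width must be scaled by the ratio of the two viewing distances. The following is a minimal sketch under a pinhole-eye assumption; the function name and the numbers are illustrative, not taken from the embodiment.

```python
def second_image_width(first_width, dist_to_first, lp):
    """Width the second image P12 needs so that its apparent (angular)
    size matches that of the first image P11.

    dist_to_first: distance from the observer to P11; the second image
    is assumed to sit lp closer to the observer (pinhole-eye model).
    """
    dist_to_second = dist_to_first - lp
    if dist_to_second <= 0:
        raise ValueError("lp must be smaller than the distance to P11")
    # Equal angular size: width scales linearly with viewing distance.
    return first_width * dist_to_second / dist_to_first

# Illustrative numbers: P11 is 100 cm wide and 200 cm from the eye;
# P12 is displayed 50 cm closer, so it must be 75 cm wide.
w2 = second_image_width(100.0, 200.0, 50.0)
```

With lp = 0 the two images coincide and no scaling is needed, which is a quick sanity check on the formula.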
Next, the mechanism by which the observer 1 recognizes a stereoscopic image (three-dimensional image) from the first image P11 and the second image P12 will be described. First, the observer 1 observes the first image P11 and the edge image PE corresponding to the edge portion E of the first image P11 from a position at which the corresponding portions of these images overlap. The observer 1 then perceives an image at a depth position between the two display surfaces in accordance with the luminance ratio between the first image P11 and the edge image PE. For example, when the observer 1 observes the quadrangular pattern, a luminance step arises on the retinal image of the observer 1 that is too small to be recognized as such. In this case, the observer perceives a virtual edge between the steps in brightness (for example, luminance) and recognizes the pattern as a single object. At this time, a slight offset arises between the virtual edges recognized by the left eye L and the right eye R, which is perceived as binocular parallax, so that the observer 1 recognizes the image at a changed depth position. This mechanism is described in detail below with reference to FIGS. 6 to 8.
FIG. 6 is a schematic diagram illustrating an example of the optical image IM in the present embodiment. Here, the optical image IM is the image formed when the first image P11 and the second image P12 are visually recognized by the observer 1. The optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R of the observer 1.
First, the optical image IML visually recognized by the left eye L of the observer 1 will be described. As shown in FIG. 6, on the left eye L of the observer 1, an optical image IML is formed in which the first image P11L seen by the left eye L and the left-eye image P12L of the second image P12 are combined. Specifically, as described with reference to FIG. 5, on the left eye L an optical image IML is formed in which the image showing the left-side edge portion E1 and the left-side edge image PE1L are combined on the (-X) side of the left-side edge portion E1 of the quadrangle shown in the first image P11 (that is, outside the quadrangle). Likewise, on the left eye L an optical image IML is formed in which the image showing the right-side edge portion E2 and the right-side edge image PE2L are combined on the (-X) side of the right-side edge portion E2 of the quadrangle shown in the first image P11 (that is, inside the quadrangle).
Next, the brightness distribution of the optical image IML visually recognized by the left eye L in the case of FIG. 6 will be described with reference to FIG. 7. FIG. 7 is a graph showing an example of the brightness distribution of the optical image IM in the present embodiment. The X coordinates X1 to X6 shown in FIG. 7 correspond to the points at which the brightness of the optical image IM changes. Here, the luminance value BR is used as an example of image brightness. The first image P11L visually recognized by the left eye L is assumed to have a luminance value of zero at X coordinates X1 to X2 and a luminance value BR2 at X coordinates X2 to X6. The left-eye image P12L has a luminance value BR1 at X coordinates X1 to X2 and X4 to X5, and zero at X coordinates X2 to X4. Accordingly, the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L is the luminance value BR1 at X coordinates X1 to X2, the luminance value BR2 at X coordinates X2 to X4 and X5 to X6, and, at X coordinates X4 to X5, the luminance value BR3 obtained by combining the luminance values BR1 and BR2.
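The composite profile of FIG. 7 is simply the position-by-position sum of the luminance of the first image and that of the edge image. The following sketch uses one sample per segment of FIG. 7; the concrete values of BR1 and BR2 are illustrative assumptions, since the figure defines only their relative relationship.

```python
def compose_luminance(first, edge):
    """Combine the luminance profiles of the first image and the edge
    image into the optical-image profile, position by position."""
    return [a + b for a, b in zip(first, edge)]

# Piecewise profiles following FIG. 7, one sample per segment
# (X1-X2, X2-X4, X4-X5, X5-X6); BR1 and BR2 are illustrative values.
BR1, BR2 = 0.3, 0.6
p11L = [0.0, BR2, BR2, BR2]   # first image P11L as seen by the left eye
p12L = [BR1, 0.0, BR1, 0.0]   # left-eye edge image P12L
imL = compose_luminance(p11L, p12L)
# Segment X4-X5 combines BR1 and BR2 into BR3, as in the text.
```

The result reproduces the description above: BR1 on X1-X2, BR2 on X2-X4 and X5-X6, and the combined value BR3 = BR1 + BR2 on X4-X5.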
Next, the mechanism by which the left eye L of the observer 1 recognizes the edge portion will be described.
FIG. 8 is a graph showing an example of the binocular parallax that occurs between the left eye L and the right eye R in the present embodiment. The brightness distribution of the image recognized by the observer 1 from the optical image IML formed on the retina of the left eye L is as shown by the waveform WL in FIG. 8. Here, the observer 1 recognizes, as the edge portion of the observed object, the position on the X axis at which the change in the brightness of the visually recognized image is greatest (that is, at which the slope of the waveform WL is steepest). In the present embodiment, for the waveform WL on the left eye L side, the observer 1 recognizes the position XEL shown in FIG. 8 (that is, the position at a distance LEL from the origin O of the X axis) as the left-side edge portion of the observed quadrangle.
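Locating the perceived edge position XEL, the point where the brightness of the retinal image changes fastest, can be sketched numerically as the argmax of the gradient of a blurred profile. The Gaussian blur standing in for the smoothing seen in the waveforms WL and WR, and its sigma, are illustrative assumptions rather than part of the embodiment.

```python
import numpy as np

def perceived_edge_position(xs, brightness, blur_sigma=2.0):
    """Return the x at which the blurred brightness profile changes
    fastest, i.e. where the observer perceives the edge.

    Gaussian smoothing stands in for the retinal blur implied by the
    smooth waveforms WL/WR of FIG. 8; sigma is an illustrative choice.
    """
    xs = np.asarray(xs, dtype=float)
    b = np.asarray(brightness, dtype=float)
    # Discrete, normalized Gaussian kernel.
    half = int(3 * blur_sigma)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-t ** 2 / (2 * blur_sigma ** 2))
    kernel /= kernel.sum()
    blurred = np.convolve(b, kernel, mode="same")
    # The perceived edge sits where the slope of the waveform is steepest.
    grad = np.abs(np.gradient(blurred, xs))
    return xs[int(np.argmax(grad))]

# Step profile: dark up to x = 10, bright beyond (like the square's left side).
xs = np.arange(0.0, 20.0, 1.0)
profile = np.where(xs < 10, 0.0, 1.0)
x_edge = perceived_edge_position(xs, profile)
```

For this step profile the steepest slope lies at the luminance transition, so the returned position falls at the step rather than anywhere in the flat regions.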
So far, the optical image IML recognized by the left eye L of the observer 1 has been described. Next, the differences between the optical image IMR visually recognized by the right eye R of the observer 1 and the optical image IML will be described, along with the mechanism by which a stereoscopic image (three-dimensional image) is recognized from those differences, again with reference to FIGS. 6 to 8. As shown in FIG. 6, on the right eye R of the observer, an optical image IMR is formed in which the first image P11R seen by the right eye R and the right-eye image P12R are combined. As shown in FIG. 7, the brightness (for example, the luminance value BR) of the optical image IMR visually recognized by the right eye R differs from that of the optical image IML visually recognized by the left eye L at X coordinates X1 to X3 and X4 to X6. The brightness distribution of the image recognized by the observer 1 from the optical image IMR formed on the retina of the right eye R is as shown by the waveform WR in FIG. 8. Here, for the waveform WR on the right eye R side, the observer 1 recognizes the position XER shown in FIG. 8 (that is, the position at a distance LER from the origin O of the X axis) as the edge portion of the quadrangle. The observer 1 thus perceives the position XEL of the quadrangle's edge portion seen by the left eye L and the position XER of the edge portion seen by the right eye R as binocular parallax, and recognizes the quadrangular image as a stereoscopic image (three-dimensional image) based on the binocular parallax of the edge portion.
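The binocular parallax recognized by the observer is the difference between the two perceived edge positions XEL and XER. The sketch below makes that explicit; the sign convention mapping crossed and uncrossed disparity to a depth direction is an illustrative assumption, not something specified by the embodiment.

```python
def edge_disparity(x_el, x_er):
    """Binocular disparity between the edge position seen by the left
    eye (x_el) and by the right eye (x_er), in the same units."""
    return x_el - x_er

def depth_impression(disparity):
    """Map the sign of the disparity to a perceived depth direction.

    The convention (positive = crossed = in front) is an illustrative
    assumption for this sketch."""
    if disparity > 0:
        return "in front"
    if disparity < 0:
        return "behind"
    return "on the display plane"

# Illustrative positions: XEL = 5.2, XER = 4.8 gives a small crossed disparity.
d = edge_disparity(5.2, 4.8)
```

Because the disparity arises only at the virtual edges, a small shift between XEL and XER is enough to move the perceived depth of the whole quadrangle.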
As described above, the display device 10 of the present embodiment includes the first display unit 11, which displays the first image P11, and the second display unit 12, which displays the second image P12. The second display unit 12 is a transmissive display unit that is worn in front of the eyes of the observer 1 and can transmit incident light; it transmits the light of the first image P11 displayed by the first display unit 11 in the transmission direction and displays the second image P12 corresponding to the first image P11 in the transmission direction. In this way, the display device 10 displays, on the second display unit 12 of the head-mounted display 50, a second image P12 that is set according to the position of the observer 1 relative to the first display unit 11. The display device 10 can therefore set and display the second image P12 for each position of the observer 1; that is, it can display, for each observer position, a second image P12 that matches that position. The display device 10 can thus make the range within which the observer 1 can recognize a stereoscopic image a wide one.
In general, when the displayed image is a stereoscopic image (three-dimensional image) based on binocular parallax, the so-called cardboard-cutout effect may occur, in which the thickness of the displayed object becomes difficult for the observer 1 to recognize. The display device 10, by contrast, presents a stereoscopic image to the observer 1 by displaying the first image P11 and the second image P12 with a luminance ratio set between their corresponding pixels. That is, the display device 10 can display a stereoscopic image to the observer 1 even though the first image P11 and the second image P12 are both planar images (two-dimensional images). The display device 10 can thereby reduce the cardboard-cutout effect and display a stereoscopic image in which the thickness of the displayed object is readily recognized by the observer 1.
Furthermore, the display device 10 can display an image from which the observer 1 can recognize a stereoscopic image even when the first display unit 11 is a two-dimensional display device. The display device 10 can therefore present the first image P11 as a planar image (two-dimensional image) on the first display unit 11 to observers who are not wearing the second display unit 12. That is, the display device 10 can simultaneously display a stereoscopic image to an observer 1 wearing the head-mounted display 50 and a planar image to observers not wearing it. This allows the display device 10 to display (screen) a stereoscopic image and a planar image at the same time, for example in a movie theater.
The second image P12 displayed by the second display unit 12 of the present embodiment includes the edge image PE showing the edge portion E in the first image P11, and is set so that the edge portion E in the first image P11 displayed on the first display unit 11 and the edge image PE showing that edge portion are visually recognized by the observer 1 in correspondence with each other. The display device 10 can thus display the edge portions of the first image P11 and the edge image PE of the second image P12 as overlapping. In other words, the display device 10 of the present embodiment can display images without the image displayed on the second display unit 12 (that is, the edge image PE) affecting any part of the image on the first display unit 11 other than its edge portions.
Suppose, by contrast, that images with a set brightness (for example, luminance) ratio were displayed on the first display unit 11 and the second display unit 12, respectively. Variations in the display conditions of the first display unit 11 and the second display unit 12 could then affect the display accuracy of the stereoscopic image (three-dimensional image). In that case, displaying a stereoscopic image (three-dimensional image) with high accuracy would require reducing the variation in the display conditions (for example, the brightness and color of the displayed images) of the first display unit 11 and the second display unit 12 so as to make the conditions match.
The display device 10 of the present embodiment, on the other hand, displays the edge image PE on the second display unit 12, so even if the display conditions of the first display unit 11 and the second display unit 12 vary, the parts of the image on the first display unit 11 other than the edge portions are not affected. A stereoscopic image (three-dimensional image) can therefore be displayed with high accuracy without strictly matching the display conditions of the first display unit 11 and the second display unit 12. In other words, the display device 10 of the present embodiment can display a stereoscopic image (three-dimensional image) with high accuracy.
In addition, since the display device 10 of the present embodiment only needs to display the edge image PE on the second display unit 12, it can reduce power consumption compared with the case where images other than the edge image PE are also displayed on the second display unit 12.
Further, as shown in FIG. 8, the observer 1 perceives the stepwise changes in image brightness (for example, luminance) as smooth changes in brightness, as in the waveforms WL and WR. For this reason, the display device 10 of the present embodiment can make the observer 1 recognize a stereoscopic image even when the definition of the edge image PE is low. Here, the definition is, for example, the number of pixels constituting an image. The display device 10 of the present embodiment can therefore make the definition of the second display unit 12 lower than that of the first display unit 11; that is, the second display unit 12 can be configured with an inexpensive display device.
The display device 10 of the present embodiment also displays the first image P11 and the second image P12 so that the edge portions in the first image P11 displayed by the first display unit 11 and the edge image PE are visually recognized in correspondence with each other. The images displayed by the display device 10 of the present embodiment are thereby visually recognized by the observer 1 without the edge portions in the first image P11 and the edge image PE separating from each other. The display device 10 of the present embodiment can therefore display a stereoscopic image with high accuracy.
Moreover, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy by displaying the left-eye image P12L and the right-eye image P12R of the second image P12 without reducing their contrast. The display device 10 of the present embodiment can also display these images even when the brightness range in which the left-eye image P12L is displayed and the brightness range in which the right-eye image P12R is displayed overlap. For example, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy even when the brightness at which the left-eye image P12L is displayed matches the brightness at which the right-eye image P12R is displayed. That is, the display device 10 of the present embodiment can display a stereoscopic image with high accuracy without having to take the relationship between the pixel values of the left-eye image P12L and the right-eye image P12R into account.
[Second Embodiment]
Hereinafter, a second embodiment of the present invention will be described with reference to FIGS. 9 and 10. The same components as those in the first embodiment described above are denoted by the same reference numerals, and their description is omitted.
FIG. 9 is a schematic diagram illustrating an example of the left-eye parallax image P12aL. FIG. 10 is a schematic diagram illustrating an example of the right-eye parallax image P12aR. The display device 10a displays a second image P12a on the second display unit 12. The second image P12a includes a left-eye image P12L and a right-eye image P12R that have binocular parallax with respect to each other. The left-eye display unit 12L of the second display unit 12 of the display device 10a displays the left-eye parallax image P12aL as the left-eye image P12L, and the right-eye display unit 12R displays the right-eye parallax image P12aR as the right-eye image P12R. The left-eye image P12L includes an edge image PE1aL and an edge image PE2aL that are shifted in the (+X) direction by a distance LaL from the left-side edge image PE1 and the right-side edge image PE2 shown by the edge image PE included in the second image P12 described in the first embodiment. Similarly, the right-eye image P12R includes an edge image PE1aR and an edge image PE2aR that are shifted in the (-X) direction by a distance LaR from the left-side edge image PE1 and the right-side edge image PE2. In this way, the second display unit 12 displays a left-eye image P12L and a right-eye image P12R that have binocular parallax with respect to each other.
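The shifted edge images of the second embodiment can be produced from the first embodiment's edge image by translating it horizontally in opposite directions for the two eyes. The following is a minimal sketch using a wrap-around shift (np.roll); the pixel shift amount, corresponding to the distances LaL and LaR, is an illustrative assumption.

```python
import numpy as np

def make_parallax_pair(edge_image, shift_px):
    """Shift a 2-D edge image horizontally by +shift_px for the left eye
    (the +X direction, distance LaL) and by -shift_px for the right eye
    (the -X direction, distance LaR), producing the parallax pair
    P12aL / P12aR described for the second embodiment.

    np.roll wraps pixels around the image border; for this small
    illustration the wrap-around is ignored.
    """
    left = np.roll(edge_image, shift_px, axis=1)
    right = np.roll(edge_image, -shift_px, axis=1)
    return left, right

# Edge image with two vertical edge lines (columns 2 and 5), standing in
# for the left-side edge image PE1 and the right-side edge image PE2.
pe = np.zeros((4, 8))
pe[:, 2] = 1.0
pe[:, 5] = 1.0
p12aL, p12aR = make_parallax_pair(pe, 1)
```

A shift of one pixel in opposite directions gives a total disparity of two pixels between the edges seen by the two eyes, which the observer fuses into a single edge at a shifted depth.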
As described above, the second image P12a includes the left-eye parallax image P12aL and the right-eye parallax image P12aR, which have binocular parallax with respect to each other, and the second display unit 12 displays the left-eye parallax image P12aL to the left eye L of the observer 1 and the right-eye parallax image P12aR to the right eye R. The display device 10a can thereby set the depth position of the edge image PEa perceived by the observer 1 by means of the binocular parallax of the second image P12a. That is, by displaying on the second display unit 12 a second image P12a with binocular parallax in the crossed direction, the display device 10a can place the stereoscopic image perceived by the observer 1 on the (+Z) side of the first display unit 11. Likewise, by displaying a second image P12a with binocular parallax in the uncrossed direction, the display device 10a can place the stereoscopic image on the (-Z) side of the first display unit 11. The display device 10a can thus set the depth position of the stereoscopic image over a wider range than when no binocular parallax is applied to the second image P12a; in other words, it can enhance the stereoscopic effect perceived by the observer 1.
[Third Embodiment]
A third embodiment of the present invention will now be described with reference to FIGS. 11 and 12. Configurations identical to those of the embodiments described above are given the same reference numerals, and their description is omitted.
First, an example of the configuration of the display device 10b will be described with reference to FIG. 11. FIG. 11 is a configuration diagram illustrating an example of a display system 100b that includes the display device 10b according to the present embodiment. The image information supply device 2 supplies first image information to the display device 10b. Here, the first image information is information for displaying the first image P11 shown by the first display unit 11 of the display device 10b.
The first image information includes depth position information indicating the depth position at which the display device 10b displays the first image P11 in three dimensions. Based on this depth position information, the display device 10b changes the position of the edge image PE that shows the edge portions. In this way, the display device 10b sets the sense of depth that the observer 1 perceives in the first image P11 when viewing the first image P11 overlaid with the second image P12 containing the edge image PE. For example, the display device 10b places the edge image PE outside or inside the position of the corresponding edge portion of the first image P11, thereby setting the binocular parallax of the edge portion and hence the perceived depth. The setting of the sense of depth by the display device 10b is described next.
The display device 10b includes a setting unit 13. Based on the first image information supplied from the image information supply device 2, the setting unit 13 sets the second image P12 corresponding to the first image P11 and supplies second image information representing the set second image P12 to the second display unit 12. Specifically, the setting unit 13 acquires from the image information supply device 2 the first image information containing the depth position information of the first image P11, extracts the edge portions from the acquired first image information by a known method, generates an edge image PE representing the extracted edge portions, and supplies second image information representing the generated edge image PE to the second display unit 12. Here, the setting unit 13 extracts the edge portions by, for example, applying a differential filter such as a Laplacian filter to the acquired image information.
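As an illustrative sketch of this edge-extraction step (the kernel size, threshold value, and function name are assumptions for the example, not taken from the patent), a Laplacian differential filter can be applied as follows:

```python
import numpy as np

# a common 3x3 Laplacian kernel (a second-order differential filter)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def extract_edges(image, threshold=0.1):
    """Return a binary edge image PE from a grayscale first image P11."""
    h, w = image.shape
    response = np.zeros_like(image)
    # direct 2-D convolution over the interior pixels
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = image[y - 1:y + 2, x - 1:x + 2]
            response[y, x] = np.sum(region * LAPLACIAN)
    return (np.abs(response) > threshold).astype(float)

img = np.zeros((6, 6))
img[:, 3:] = 1.0                 # a step edge between columns 2 and 3
pe = extract_edges(img)
# pe is non-zero only along the two columns adjoining the step
```

In practice a library routine such as `cv2.Laplacian` would replace the explicit loop; the loop is kept here so the filter operation itself is visible.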
Next, the operation of the display device 10b in the present embodiment will be described with reference to FIG. 12. FIG. 12 is a flowchart showing an example of the operation of the display device 10b.
First, the first display unit 11 of the display device 10b acquires image information from the image information supply device 2 (step S110). The setting unit 13 of the display device 10b likewise acquires image information from the image information supply device 2 (step S120). The first image information supplied by the image information supply device 2 includes position information indicating the depth position of the first image P11. Here, the position information indicating the depth position of the first image P11 is information added to the image information so that the observer 1 recognizes the first image P11 as a stereoscopic image; for example, it is information for setting the binocular parallax between the left eye L and the right eye R. For instance, when the depth position of the first image P11 is set in the back direction (-Z direction) from the position at which the first image P11 is displayed (the position of the origin O on the Z axis), position information that makes the binocular parallax larger than the binocular parallax at the position of the origin O is added to the image information.
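The relation between depth position and binocular parallax follows from similar-triangle geometry between the two eyes and the displayed point. As a hedged illustration (the eye separation and viewing distance below are assumed example values, and this formula is standard stereoscopy rather than text from the patent):

```python
def parallax_for_depth(depth_m, eye_separation_m=0.065, viewing_distance_m=0.5):
    """On-screen horizontal parallax (metres) that places a point depth_m
    metres behind the display plane; depth_m > 0 corresponds to the -Z
    direction above, and depth_m = 0 gives zero parallax at the origin O.

    By similar triangles between the eyes and the point:
        parallax = e * z / (D + z)
    so the farther behind the display the point is set, the larger the
    binocular parallax, as described above.
    """
    return eye_separation_m * depth_m / (viewing_distance_m + depth_m)
```

For example, a point half a metre behind the display plane at a half-metre viewing distance needs a parallax of e/2, half the eye separation.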
Next, based on the first image information acquired in step S120, the setting unit 13 generates the second image P12 containing the edge image PE that represents the edge portions in the first image P11, and outputs second image information representing the generated second image P12 to the second display unit 12 (step S122).
Next, based on the first image information acquired in step S110, the first display unit 11 generates the first image P11, displays it, and ends its processing (step S113).
The second display unit 12 acquires the second image information output from the setting unit 13 in step S122, displays the second image P12 based on the acquired second image information, and ends its processing (step S123). Steps S110 through S123 constitute the display procedure.
As described above, the display device 10b includes the setting unit 13, which sets the second image P12 corresponding to the first image P11 based on the input image information of the first image P11. The display device 10b of the present embodiment can therefore display a stereoscopic image (three-dimensional image) without being supplied with the edge image PE from the image information supply device 2.
Note that the setting unit 13 may set the second image P12 by setting the relative position between the first image P11 and the second image P12 based on the pixel values of the pixels that form the edge portion E in the first image P11. Because the display device 10b displays the first image P11 and the second image P12 overlaid on each other, the optical image IM formed by the overlay can become too bright. In that case the edge portion E stands out to the observer 1, and the optical image IM can become difficult to recognize as a stereoscopic image. The setting unit 13 therefore sets the second image P12 by shifting the position of the edge image PE in the second image P12 relative to the edge portion E in the first image P11, based on the pixel values (for example, luminance values) of the pixels forming the edge portion E. For example, when the sum of the luminance value of a pixel forming the edge portion E in the first image P11 and the luminance value of the corresponding pixel of the edge image PE in the second image P12 exceeds a predetermined threshold, the setting unit 13 sets the second image P12 with the edge image PE displaced by a predetermined distance. The display device 10b of the present embodiment can thereby reduce the discomfort the observer 1 would feel if only the edge portions stood out, and can display a stereoscopic image (three-dimensional image) with high accuracy.
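A minimal sketch of this threshold rule, assuming normalized luminance arrays and arbitrary example values for the threshold and displacement (none of the names or numbers come from the patent):

```python
import numpy as np

def shift_edge_if_too_bright(p11, p12, threshold=1.5, shift_px=1):
    """Displace the edge image PE (here, the non-zero pixels of p12) by
    shift_px pixels in X when the summed luminance of the overlaid images
    would exceed the threshold at the edge pixels."""
    overlay = p11 + p12                  # luminance of the optical image IM
    edge_pixels = p12 > 0
    if np.any(overlay[edge_pixels] > threshold):
        return np.roll(p12, shift_px, axis=1)
    return p12

p11 = np.full((3, 4), 0.9)               # a bright first image
p12 = np.zeros((3, 4))
p12[:, 1] = 0.9                          # edge image at column 1
p12_shifted = shift_edge_if_too_bright(p11, p12)
# 0.9 + 0.9 = 1.8 exceeds 1.5, so the edge moves from column 1 to column 2
```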
Note that the setting unit 13 may instead be provided in the second display unit 12 or in the image information supply device 2. In that case the display device 10 need not include the setting unit 13 as an independent component, so its configuration can be simplified.
[Fourth Embodiment]
A fourth embodiment of the present invention will now be described with reference to FIG. 13. Configurations identical to those of the embodiments described above are given the same reference numerals, and their description is omitted. FIG. 13 is a configuration diagram illustrating an example of a display system 100c that includes the display device 10c according to the present embodiment. The display system 100c includes a first display unit 11c and a setting unit 13c.
The first display unit 11c includes a position detection unit 14. The position detection unit 14 includes a face detection sensor that detects the direction of the face of the observer 1, and uses this sensor to detect the position of the display surface of the second display unit 12 of the head-mounted display 50 worn by the observer 1 relative to the display surface of the first display unit 11c (the first display surface 110). That is, the position detection unit 14 detects the relative position between the first display surface 110 on which the first image P11 is displayed and the display surface of the second display unit 12 on which the second image P12 is displayed.
As one example, the relative position is the depth position of the display surface of the second display unit 12 with the first display surface 110 taken as the reference position. In this example, the position detection unit 14 detects the depth position of the display surface of the second display unit 12, on which the second image P12 is displayed, relative to the first display surface 110, on which the first image P11 is displayed.
The setting unit 13c sets the second image P12 by setting the relative position between the first image P11 and the second image P12 based on information indicating the relative position detected by the position detection unit 14. Specifically, the setting unit 13c sets the second image P12 so that the first image P11 and the edge image PE are displayed in correspondence with each other, based on the detection result of the relative position of the display surface of the second display unit 12 obtained by the position detection unit 14.
More specifically, the setting unit 13c sets the second image P12 by setting the distance between the first image P11 and the second image P12 based on information indicating the depth position of the display surface of the second display unit 12 detected by the position detection unit 14. That is, the setting unit 13c sets the second image P12 so that the first image P11 and the edge image PE are displayed in correspondence with each other, based on the detected depth position of the display surface of the second display unit 12.
As described above, the display device 10c of the present embodiment includes the setting unit 13c and the position detection unit 14, which detects the relative position between the display surface of the first display unit 11c, on which the first image P11 is displayed, and the display surface of the second display unit 12, on which the second image P12 is displayed. The setting unit 13c sets the second image P12 by setting the relative position between the first image P11 and the second image P12 based on information indicating the relative position detected by the position detection unit 14. The display device 10c can thereby display a stereoscopic image (three-dimensional image) not only to an observer 1 at a predetermined depth position but also to an observer 1 at other depth positions. That is, the display device 10c can display a stereoscopic image over a wide range.
Furthermore, when the detected depth position of the observer 1 changes, the display device 10c of the present embodiment can, based on the detection result, display the first image P11 and the edge image PE in correspondence with each other for the observer 1 at the new depth position. That is, the display device 10c of the present embodiment can display a stereoscopic image that follows the depth position of the moving observer 1.
[Modification]
Next, a modification of the present embodiment will be described with reference to FIG. 14.
FIG. 14 is a schematic diagram illustrating an example of the settings made by the setting unit 13c in this modification. First, the relative position between the first display surface 110 and the display surface of the second display unit 12 detected by the position detection unit 14 will be described. The position detection unit 14 detects the distance Lp between the first display surface 110 and the display surface of the second display unit 12. The position detection unit 14 also detects the angle θ1 between the Z axis and the line segment CL connecting the reference point SP1 on the first display surface 110 to the reference point SP2 on the display surface of the second display unit 12. Here, the reference point SP1 is the center point of the first display surface 110, and the reference point SP2 is the center point of the display surface of the second display unit 12. The position detection unit 14 further detects the angle θ2 between the line segment CL and the normal NV of the display surface of the second display unit 12. In other words, the position detection unit 14 detects the distance Lp and the angles θ1 and θ2 as the relative position between the first display surface 110 and the display surface of the second display unit 12.
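Illustratively, the quantities Lp, θ1 and θ2 can be computed from the two reference points and the display-surface normal; the vector representation and function below are assumptions made for this sketch, not part of the patent.

```python
import math

Z_AXIS = (0.0, 0.0, 1.0)

def _angle(u, v):
    """Angle in radians between 3-D vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def relative_pose(sp1, sp2, normal_nv):
    """Return (Lp, theta1, theta2) for reference points SP1 and SP2 and
    the display-surface normal NV, all given as (x, y, z) tuples.

    Lp     : length of the segment CL from SP1 to SP2
    theta1 : angle between CL and the Z axis
    theta2 : angle between CL and the normal NV
    """
    cl = tuple(b - a for a, b in zip(sp1, sp2))
    lp = math.sqrt(sum(c * c for c in cl))
    return lp, _angle(cl, Z_AXIS), _angle(cl, normal_nv)

# an HMD screen 2 m along the Z axis, with its normal along +X for illustration
lp, t1, t2 = relative_pose((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (1.0, 0.0, 0.0))
# lp = 2.0, theta1 = 0, theta2 = pi/2
```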
Based on the detection results acquired from the position detection unit 14 (the distance Lp and the angles θ1 and θ2), the setting unit 13c sets, for example, the position of the edge image PE in the second image P12 and the method of image transformation (for example, projective transformation or affine transformation) to apply to the edge image PE. That is, based on the detection results, the setting unit 13c sets the relative position so that the first image P11 and the edge image PE are perceived in correspondence with each other. The display state also includes the image transformation of the edge image PE (for example, projective transformation or affine transformation), and the setting unit 13c sets the transformation method so that, based on the detection results, the first image P11 and the edge image PE are perceived in correspondence with each other.
As described above, the position detection unit 14 in this modification detects the relative position not only in the depth direction (Z-axis direction) but also in the directions parallel to the display surface of the first display unit 11c (the X-axis and Y-axis directions). The setting unit 13c accordingly sets the second image P12 by setting the relative position between the first image P11 and the second image P12 based on information indicating the relative position detected by the position detection unit 14. The display device 10c can thereby display a stereoscopic image (three-dimensional image) not only to an observer 1 directly in front of the first display unit 11c but also to an observer 1 at other positions. That is, the display device 10c can display a stereoscopic image over a wide range.
Furthermore, when the detected position of the observer 1 changes, the display device 10c of this modification can, based on the detection result, display the first image P11 and the edge image PE in correspondence with each other for the observer 1 at the new position. That is, the display device 10c of this modification can display a stereoscopic image that follows the position of the moving observer 1.
In the present embodiment and this modification, the setting unit 13c may also set the thickness of the edges in the edge image PE according to the relative position, based on information indicating the relative position detected by the position detection unit 14. When the observer 1 is at a position other than directly in front of the first display unit 11c, the edge image PE itself may become visible to the observer 1, and once the observer 1 recognizes the edge image PE, the first image P11 may no longer be perceived as a stereoscopic image. The display device 10c therefore sets the thickness of the edges in the edge image PE according to the relative position, so that the edge image PE is not recognized by the observer 1 even at positions other than directly in front of the first display unit 11c. That is, the display device 10c can display a stereoscopic image (three-dimensional image) over a wide range.
Although the second display unit 12 has been described as displaying a second image P12 that contains the edge image PE, the second image is not limited to this. The second display unit 12 may display a second image P12 in which the pixel values of the pixels perceived by the observer 1 in correspondence with the pixels of the first image P11 are set according to the depth position of the stereoscopic image to be perceived. Here, the pixel values include the luminance values of the pixels, and the image perceived by the observer 1 in correspondence with the pixels of the first image P11 is the image obtained by projecting the first image P11 onto the display surface of the second display unit 12. In this case, the observer 1 views the first image P11 overlaid with a second image P12 that is a projection of the first image P11 in which each pixel is set to a luminance different from that of the corresponding pixel of the first image P11. In this way, the second display unit 12 can display a second image P12 that lets the observer 1 perceive a precise stereoscopic image; that is, the display device 10 can display a precise stereoscopic image.
The first display unit 11 of the display device 10 in the present embodiment and its modification may itself be a stereoscopic display device capable of three-dimensional display. In this case, the first display unit 11 displays a stereoscopic image of the first image P11 at the depth position corresponding to the input image information. The display device 10 can then display the edge portions in the stereoscopically displayed first image P11 and the corresponding edge image PE so that the observer 1 perceives them in correspondence with each other. This allows the display device 10 to set the depth position of the stereoscopic image recognized by the observer 1 over a wide range; that is, the display device 10 of the present embodiment and its modification can further enhance the stereoscopic effect that the observer 1 perceives in the stereoscopic image of the first image P11.
In each of the embodiments and modifications described above, the second image P12 is, for example, an image such as that shown in FIG. 4, but it is not limited to this. For example, the second image P12 may contain an edge image PE showing the upper and lower edge portions of the first image P11 in addition to its left and right edge portions, or may show each side of the rectangle of the first image P11 as an edge portion. The second image P12 may also contain an edge image PE that shows the edge portions as broken lines, or one that shows them as subjective contours. Here, a subjective contour is a contour that the observer 1 perceives as present even though no contour line actually exists. With these variants, the second display unit 12 of the display device 10 need not display the entire image of the edge portions, and its power consumption can be reduced compared with displaying images of all the edge portions.
The second image P12 may also be displayed with the interior of the edge portions set to a predetermined brightness (for example, a predetermined luminance). This allows the display device 10 to brighten the first image P11 without altering the first image P11 itself.
[Fifth Embodiment]
A fifth embodiment of the present invention will now be described with reference to FIG. 15. Configurations identical to those of the embodiments described above are given the same reference numerals, and their description is omitted. FIG. 15 is a configuration diagram illustrating an example of a display system 100d according to the fifth embodiment of the present invention. The display system 100d includes a portable display device 50d as the display device 10d.
The portable display device 50d is a handheld display device carried by the observer 1, and includes a second display unit 12d, a setting unit 13d, and an imaging unit 15.
The imaging unit 15 includes a stereo video camera having a left light receiving unit 15L and a right light receiving unit 15R. It captures, as the first image P11, the scene transmitted through the second display unit 12d, and generates image information that includes depth position information for the captured image. Specifically, the left light receiving unit 15L receives the light of the first image P11L, the part of the first image P11 seen by the left eye L, and the right light receiving unit 15R receives the light of the first image P11R, the part seen by the right eye R. Based on the images of the light received by the left and right light receiving units 15L and 15R, the imaging unit 15 generates depth position information for the captured image by a known method and produces image information containing the generated depth position information.
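One common way such depth position information is derived from a stereo pair is the standard relation Z = f·B/d between image disparity and depth; the focal length, baseline, and function name below are illustrative assumptions rather than details from the patent.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z (metres) of a scene point whose image appears disparity_px
    pixels apart in the left and right camera images:  Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_length_px * baseline_m / disparity_px

# a feature seen 20 px apart by receivers 6 cm apart, with f = 800 px
z = depth_from_disparity(20, 800, 0.06)   # 2.4 m
```

Smaller disparities correspond to more distant points, so a dense disparity map over the captured image yields the per-pixel depth position information described above.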
Based on the image information generated by the imaging unit 15, the setting unit 13d sets the second image P12 corresponding to the first image P11 and supplies second image information representing the set second image P12 to the second display unit 12d. Specifically, the setting unit 13d acquires the depth position information from the information of the image captured by the imaging unit 15. Next, the setting unit 13d extracts the edge portions from the acquired first image information by a known method, generates an edge image PE representing the extracted edge portions, and supplies second image information representing the generated edge image PE to the second display unit 12d. Here, the setting unit 13d extracts the edge portions by, for example, applying a differential filter such as a Laplacian filter to the acquired image information. The setting unit 13d then sets the display position of the extracted edge portions based on the acquired depth position information, and thereby sets the second image P12.
 The second display unit 12d includes a transmissive display that passes incident light. It transmits the first image P11 and, based on the second image information indicating the second image P12 set by the setting unit 13d, displays the second image P12. Here, the second display unit 12d displays a real image corresponding to the first image P11 as the second image P12.
 As described above, the portable display device 50d serving as the display device 10d of this embodiment includes the second display unit 12d, the setting unit 13d, and the imaging unit 15. With this configuration, the display device 10d can display the second image P12 at a position away from the eyes of the observer 1. Therefore, the display device 10d can display the second image P12 in correspondence with the first image P11 without any configuration for virtual-image display. That is, the configuration of the second display unit 12 can be simplified compared with the case where the second display unit 12 displays a virtual image.
 The imaging unit 15 may instead include a video camera having only one of the left light-receiving unit 15L and the right light-receiving unit 15R. In this case, the setting unit 13d acquires information entered by the user that specifies a depth position, and selects, from a plurality of preset pieces of depth position information, the one corresponding to the acquired specification. Next, the setting unit 13d acquires the first image information from the imaging unit 15 and extracts edge portions from it by a known method. The setting unit 13d then sets the display position of the extracted edge portions based on the selected depth position information, thereby setting the second image P12. Even with this configuration, the display device 10d can display the second image P12 without any configuration for virtual-image display. That is, the configuration of the second display unit 12 can be simplified compared with the case where the second display unit 12 displays a virtual image.
 [Sixth Embodiment]
 Hereinafter, a sixth embodiment of the present invention will be described with reference to FIGS. 16 and 17. Components identical to those of the embodiments described above are given the same reference numerals, and their description is omitted.
 FIG. 16 is a configuration diagram showing an example of the configuration of a display system 100e including the display device 10e of this embodiment. The display system 100e includes a first display unit 11e and a second display unit 12e. This embodiment differs from the first embodiment described above in that, whereas in the first embodiment the second display unit 12 is a head-mounted display, in the display system 100e the first display unit 11e is a head-mounted display. That is, in the display system 100e, the first display unit 11e, which is a head-mounted display, displays the first image P11, and the second display unit 12e displays the edge image PE.
 The image information supply device 2 supplies the first image information to the first display unit 11e and the second image information to the second display unit 12e. Here, the first image information is information for displaying the first image P11 on the first display unit 11e. The second image information is information for displaying the second image P12 on the second display unit 12e, and is the image information of the edge image PE generated based on the first image information. The edge image PE is an image showing the edge portions E in the first image P11. As will be described later with reference to FIG. 17, this first image P11 differs from the first image P11 shown in FIG. 3 in that it includes image information indicating a left-eye image P11L and image information indicating a right-eye image P11R. The edge image PE will also be described later with reference to FIG. 17. In the description so far, the first image P11L was the portion of the first image P11 shown in FIG. 3 viewed by the left eye L; in the following description, the first image P11L is the left-eye image P11L shown in FIG. 17. Likewise, in the description so far, the second image P12 was the second image P12 shown in FIG. 4, which included a left-eye image P12L and a right-eye image P12R. In the following description, as shown in FIG. 17, the second image P12L is the portion of the second image P12 viewed by the left eye L via the left-eye second light flux R12L, and the second image P12R is the portion of the second image P12 viewed by the right eye R via the right-eye second light flux R12R.
 The second display unit 12e includes a display surface 120 that displays an image toward the (+Z) direction, and displays the second image P12 on the display surface 120 based on the second image information acquired from the image information supply device 2. The second light flux R12 emitted from the second image P12 displayed on the display surface 120 is viewed as an optical image by the observer 1 sitting on a chair 3. The observer 1 sits on the chair 3, which is placed a predetermined distance away from the display surface 120 in the (+Z) direction, and observes the display surface 120 located in the (-Z) direction. The chair 3 fixes the position from which the observer 1 observes the second display unit 12e, that is, the relative position between the first display unit 11e and the second display unit 12e.
 Next, the first display unit 11e will be described.
 The first display unit 11e is the display unit of a head-mounted display 50e worn on the head of the observer 1. It displays a virtual image of the supplied first image P11 through an eyepiece optical system (not shown), and transmits incident light in the direction in which the first image P11 is displayed. The first display unit 11e includes a left-eye display unit 11L and a right-eye display unit 11R. That is, the first display unit 11e differs from the head-mounted display 50 described with reference to FIG. 2 in that it displays the first image P11 instead of the second image P12.
 When the observer 1 wears the head-mounted display 50e, the left-eye display unit 11L displays the left-eye image P11L indicated by the first image information supplied from the image information supply device 2 so that it is viewed by the left eye L of the observer 1. Similarly, the right-eye display unit 11R of the first display unit 11e displays the right-eye image P11R indicated by the first image information supplied from the image information supply device 2 so that it is viewed by the right eye R of the observer 1.
 When the observer 1 sits on the chair 3 and observes the second display unit 12e, the first display unit 11e transmits the second light flux R12 (light) of the second image displayed by the second display unit 12e in the transmission direction. Here, the transmission direction is the (+Z) direction. That is, the left-eye display unit 11L of the first display unit 11e transmits the left-eye second light flux R12L of the incident second light flux R12. Similarly, the right-eye display unit 11R of the first display unit 11e transmits the right-eye second light flux R12R of the incident second light flux R12.
 In this way, the left-eye display unit 11L transmits the incident left-eye second light flux R12L while displaying the left-eye image P11L and emitting the left-eye first light flux R11L. The left-eye first light flux R11L and the left-eye second light flux R12L are viewed, as corresponding optical images, by the left eye L of the observer 1 observing the second display unit 12e located in the (-Z) direction. Similarly, the right-eye display unit 11R transmits the incident right-eye second light flux R12R while displaying the right-eye image P11R and emitting the right-eye first light flux R11R. That is, the right-eye first light flux R11R and the right-eye second light flux R12R are viewed, as corresponding optical images, by the right eye R of the observer 1 observing the second display unit 12e located in the (-Z) direction.
 Next, the first image P11 and the second image P12 of this embodiment will be described. In the following drawings, brighter portions of an image (for example, portions with higher luminance) are drawn lighter for convenience.
 The first image P11 is, for example, an image showing the rectangular pattern shown in FIG. 3. In the rectangular pattern shown by the first image P11, each of the four sides of the rectangle can be an edge portion E; in the following description, however, for convenience, the left-side edge portion E1 showing the left side of the rectangle and the right-side edge portion E2 showing its right side will be described as the edge portions E.
 The first image P11 includes the left-eye image P11L displayed by the left-eye display unit 11L of the first display unit 11e and the right-eye image P11R displayed by the right-eye display unit 11R of the first display unit 11e. That is, the first image P11 of this embodiment differs from the first image P11 described with reference to FIG. 3 in that it includes the left-eye image P11L and the right-eye image P11R. Since the left-eye image P11L and the right-eye image P11R are identical images, in the following description they are collectively referred to as the first image P11 unless they need to be distinguished.
 The second image P12 is, for example, an image including a left-side edge image PE1 showing the left-side edge portion E1 of the rectangular pattern and a right-side edge image PE2 showing its right-side edge portion E2. That is, the second image P12 of this embodiment differs from the second image P12 described with reference to FIG. 4 in that it does not include a left-eye image P12L and a right-eye image P12R. Here, an edge portion E (which may simply be called an edge or an edge region) is, for example, a portion of an image where the brightness (for example, luminance) of adjacent or neighboring pixels changes abruptly. For example, the edge portion E denotes the theoretical zero-width line segment forming the left or right side of the rectangle, and also denotes the region around that edge having a finite width corresponding, for example, to the resolution of the second display unit 12e.
 Next, a configuration in which the display device 10e displays the first image P11 and the second image P12 in correspondence with each other will be described with reference to FIG. 17. FIG. 17 is a schematic diagram showing an example of an image displayed by the display device 10e of this embodiment.
 The first display unit 11e displays the first image P11 as a virtual image in the (+Z) direction so that it is viewed by the observer 1. The second display unit 12e likewise displays the second image P12 in the (+Z) direction so that it is viewed by the observer 1. The first image P11 is displayed at a position a predetermined distance Lp away, in the (+Z) direction, from the position where the second image P12 is displayed. As described above, the first display unit 11e is the display unit of the transmissive head-mounted display 50e, which transmits light. Therefore, the first image P11 displayed on the first display unit 11e and the second image P12 displayed on the second display unit 12e are viewed by the observer 1 as overlapping. Here, the predetermined distance Lp is the distance between the depth position at which the first image P11 is displayed and the depth position at which the second image P12 is displayed, a depth position being a position in the Z-axis direction. The predetermined distance Lp is determined in advance based on the depth position at which the first image P11 is displayed and the depth position of the observer 1.
 As shown in FIG. 17, the second display unit 12e displays the second image P12 so that the left-side edge portion E1 in the left-eye image P11L displayed by the left-eye display unit 11L of the first display unit 11e and the left-side edge image PE1 corresponding to that edge portion E1 are viewed in correspondence with each other. Similarly, the second display unit 12e displays the second image P12 so that the right-side edge portion E2 in the right-eye image P11R displayed by the right-eye display unit 11R of the first display unit 11e and the right-side edge image PE2 corresponding to that edge portion E2 are viewed in correspondence with each other.
 That is, the left-eye display unit 11L of the first display unit 11e displays the left-eye image P11L as a virtual image so that, to the left eye L of the observer 1, the left-side edge portion E1 of the rectangle shown by the left-eye image P11L and the left-side edge image PE1 of the second image P12 are viewed as overlapping on the (-X) side of that edge portion (that is, outside the rectangle). The left-eye display unit 11L also displays the left-eye image P11L as a virtual image so that, to the left eye L, the right-side edge portion E2 of the rectangle shown by the left-eye image P11L and the right-side edge image PE2 of the second image P12 are viewed as overlapping on the (-X) side of that edge portion (that is, inside the rectangle). Similarly, the right-eye display unit 11R displays the right-eye image P11R as a virtual image so that, to the right eye R of the observer 1, the right-side edge portion E2 of the rectangle shown by the right-eye image P11R and the right-side edge image PE2 of the second image P12 are viewed as overlapping on the (+X) side of that edge portion (that is, outside the rectangle). The right-eye display unit 11R also displays the right-eye image P11R as a virtual image so that, to the right eye R, the left-side edge portion E1 of the rectangle shown by the right-eye image P11R and the left-side edge image PE1 of the second image P12 are viewed as overlapping on the (+X) side of that edge portion (that is, inside the rectangle). In this way, the first display unit 11e displays the supplied first image P11 as a virtual image through the eyepiece optical system, and transmits the incident light of the second image P12 in the direction in which the first image P11 is displayed.
 Next, the mechanism by which the observer 1 recognizes a stereoscopic image (three-dimensional image) from the first image P11 and the second image P12 will be described. First, the observer 1 observes the first image P11 and the edge image PE corresponding to its edge portions E from a position where the corresponding portions of these images overlap. The observer 1 then perceives an image at a depth position between the two display surfaces according to the luminance ratio between the first image P11 and the edge image PE. For example, when the observer 1 observes the rectangular pattern, minute luminance steps, too small to be consciously recognized, form on the retinal image of the observer 1. In such a case, the observer perceives a virtual edge between the brightness (for example, luminance) steps and recognizes them as a single object. At this time, a minute shift arises between the virtual edges recognized by the left eye L and the right eye R, and this shift is perceived as binocular parallax, so that the depth position of the image appears to the observer 1 to change. This mechanism will now be described in detail.
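The paragraph above states that the fused image is perceived at a depth between the two display surfaces according to the luminance ratio between the first image and the edge image. A simple way to picture this is a linear interpolation between the two surface depths, weighted by each surface's share of the edge luminance. This model is an illustration of the stated idea, not a formula given in the patent.

```python
# Hedged sketch: perceived depth as a luminance-weighted interpolation
# between the depth of the first image P11 and that of the edge image PE.
# The linear weighting is an assumption made for illustration.

def perceived_depth(z_p11, z_pe, lum_p11, lum_pe):
    """Interpolate between the two display depths, weighted by how much
    of the edge luminance each surface contributes."""
    total = lum_p11 + lum_pe
    if total == 0:
        raise ValueError("at least one surface must contribute luminance")
    w = lum_pe / total  # share of edge brightness carried by the edge image
    return z_p11 + w * (z_pe - z_p11)

# With all edge luminance on P11 the image is seen at P11's depth;
# a 50/50 split places it midway between the two surfaces.
assert perceived_depth(z_p11=0.0, z_pe=2.0, lum_p11=80, lum_pe=0) == 0.0
assert perceived_depth(z_p11=0.0, z_pe=2.0, lum_p11=40, lum_pe=40) == 1.0
```

Varying the luminance ratio per edge thus shifts the perceived depth continuously between the two surfaces, which is what produces the binocular-parallax cue described next.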
 Here, an optical image IM is the image in which the first image P11 and the second image P12 are viewed by the observer 1. The optical image IM includes an optical image IML viewed by the left eye L of the observer 1 and an optical image IMR viewed by the right eye R of the observer 1.
 First, the optical image IML viewed by the left eye L of the observer will be described. As described with reference to FIG. 6, on the left eye L of the observer, an optical image IML is formed in which the left-eye image P11L and the second image P12L, the portion of the second image P12 viewed by the left eye L, are combined. Specifically, on the left eye L, an optical image IML is formed in which, on the (-X) side of the left-side edge portion E1 of the rectangle shown by the first image P11 (that is, outside the rectangle), the image showing the left-side edge portion E1 and the left-side edge image PE1L viewed by the left eye L are combined. Also on the left eye L, an optical image IML is formed in which, on the (-X) side of the right-side edge portion E2 of the rectangle shown by the first image P11 (that is, inside the rectangle), the image showing the right-side edge portion E2 and the right-side edge image PE2L viewed by the left eye L are combined. The optical image IML of this embodiment differs from the optical image IML described with reference to FIG. 6 in that the first image P11L is at a depth position closer to the observer than the second image P12L. Next, the brightness distribution of the optical image IML viewed by the left eye L will be described.
 Here, the luminance value BR is used as an example of image brightness. The left-eye image P11L is described as having a luminance value of zero at X coordinates X1 to X2 and the luminance value BR2 at X coordinates X2 to X6. The second image P12L viewed by the left eye L has the luminance value BR1 at X coordinates X1 to X2 and X4 to X5, and zero at X coordinates X2 to X4. Therefore, the brightness (for example, luminance) of the optical image IML viewed by the left eye L is the luminance value BR1 at X coordinates X1 to X2, the luminance value BR2 at X coordinates X2 to X4 and X5 to X6, and, at X coordinates X4 to X5, the luminance value BR3 obtained by combining the luminance values BR1 and BR2. The mechanism by which the edge portion E is viewed by the left eye L of the observer 1 is the same as that described with reference to FIG. 8, so its description is omitted. Likewise, the differences between the optical image IMR viewed by the right eye R of the observer 1 and the optical image IML, and the mechanism by which a stereoscopic image (three-dimensional image) is recognized from those differences, are the same as described with reference to FIGS. 6 to 8, so their description is omitted.
 As described above, the display system 100 of this embodiment includes the first display unit 11e and the second display unit 12e. The first display unit 11e is worn on the head of the observer 1, displays the supplied first image P11 through the eyepiece optical system, and transmits incident light in the direction in which the first image P11 is displayed. The second display unit 12e displays the second image P12 showing the edge portions E of the first image P11 displayed by the first display unit 11e. In this way, the display system 100 displays, on the first display unit 11e of the head-mounted display 50e, the first image P11 set according to the position of the observer 1 relative to the second display unit 12e. The display system 100 can thereby set and display the first image P11 for each position of the observer 1. Therefore, the display system 100 can display, for each position of the observer 1, the first image P11 corresponding to that position. That is, the display system 100 can widen the range within which the observer 1 can recognize a stereoscopic image.
 The display system 100 also displays the edge image PE on the second display unit 12e. For this reason, an observer 1 who observes only the second display unit 12e, without looking through the first display unit 11e, sees only the edge image PE. In this way, the display system 100 presents to such an observer only the edge image PE, that is, a low-quality image whose content is difficult to understand. In contrast, for an observer 1 who observes the second display unit 12e through the first display unit 11e, the display system 100 displays the first image P11 in addition to the edge image PE. That is, the display system 100 can display a high-quality image to an observer 1 who observes the second display unit 12e through the first display unit 11e.
 In general, when the displayed image is a stereoscopic image (three-dimensional image) that relies on binocular parallax, the so-called "cardboard effect" may occur, in which the thickness of the displayed object becomes difficult for the observer 1 to perceive. In contrast, the display system 100 displays a stereoscopic image to the observer 1 by displaying the first image P11 and the second image P12 with a luminance ratio set between their corresponding pixels. That is, the display system 100 can display a stereoscopic image to the observer 1 even though the first image P11 and the second image P12 are both planar (two-dimensional) images. The display system 100 can thereby reduce the cardboard effect and display a stereoscopic image in which the thickness of the displayed object is readily perceived by the observer 1.
 The second image P12 displayed by the second display unit 12e of this embodiment includes the edge image PE showing the edge portions E in the first image P11, and is set so that the edge portions E in the first image P11 displayed on the first display unit 11e and the edge image PE showing those edge portions are viewed by the observer 1 in correspondence with each other. The display system 100 can thereby display the edge image PE of the second image P12 so that it overlaps the edge portions E of the first image P11. In other words, the display system 100 of this embodiment can display images without the image displayed on the second display unit 12e (that is, the edge image PE) affecting any part of the image displayed on the first display unit 11e other than its edge portions.
 Here, if images with a set brightness (for example, luminance) ratio were displayed on the first display unit 11e and the second display unit 12e respectively, variations in the display conditions of the two units could affect the display accuracy of the stereoscopic image (three-dimensional image). In that case, displaying a stereoscopic image (three-dimensional image) with high accuracy would require reducing the variations in the display conditions of the first display unit 11e and the second display unit 12e (for example, the brightness and color of the displayed images) so that the conditions match.
 一方、本実施形態の表示システム100は、第2表示部12eにエッジ画像PEを表示するため、第1表示部11eと第2表示部12eとの表示条件にばらつきがあっても、第1表示部11eに表示されているエッジ部以外の画像に影響を与えることがない。これにより、第1表示部11eと第2表示部12eとの表示条件を厳密に一致させなくても、立体像(3次元画像)を高精度に表示することができる。つまり、本実施形態の表示システム100は、立体像(3次元画像)を高精度に表示することができる。 In contrast, since the display system 100 of the present embodiment displays the edge image PE on the second display unit 12e, even if the display conditions of the first display unit 11e and the second display unit 12e vary, the images other than the edge portions displayed on the first display unit 11e are unaffected. A stereoscopic image (three-dimensional image) can therefore be displayed with high accuracy even without strictly matching the display conditions of the first display unit 11e and the second display unit 12e. That is, the display system 100 of the present embodiment can display a stereoscopic image (three-dimensional image) with high accuracy.
 また、本実施形態の表示システム100は、第2表示部12eにエッジ画像PEのみを表示させればよいため、第2表示部12eにエッジ画像PE以外の画像をも表示する場合に比べて、消費電力を抑えることができる。 In addition, since the display system 100 of the present embodiment need only display the edge image PE on the second display unit 12e, it can reduce power consumption compared with the case where images other than the edge image PE are also displayed on the second display unit 12e.
 また、図8に示すように、観察者1は画像の明るさ(例えば、輝度)の段階的な変化を波形WL及び波形WRのように滑らかな明るさの変化として認識する。このため、本実施形態の表示システム100は、エッジ画像PEの精細度が低い場合であっても、観察者1に立体像を認識させることができる。ここで、精細度とは、例えば、画像を構成する画素の数である。これにより、本実施形態の表示システム100は、第1表示部11eの精細度に比べて第2表示部12eの精細度を低減することができる。つまり、本実施形態の表示システム100は、第2表示部12eを安価な表示デバイスにして構成することができる。 Further, as shown in FIG. 8, the observer 1 perceives stepwise changes in the brightness (for example, luminance) of an image as smooth changes in brightness, like the waveform WL and the waveform WR. For this reason, the display system 100 of the present embodiment can make the observer 1 perceive a stereoscopic image even when the definition of the edge image PE is low. Here, the definition is, for example, the number of pixels constituting an image. The display system 100 of the present embodiment can therefore make the definition of the second display unit 12e lower than that of the first display unit 11e. In other words, the display system 100 of the present embodiment can use an inexpensive display device as the second display unit 12e.
 また、本実施形態の表示システム100は、第1表示部11eによって表示されている第1の画像P11内のエッジ部分Eと、エッジ画像PEとが、対応して視認されるように第1の画像P11および第2の画像P12を表示する。これにより、本実施形態の表示システム100が表示する各画像は、第1の画像P11内のエッジ部分Eと、エッジ画像PEとが、観察者1によって分離しないように視認される。したがって、本実施形態の表示装置10eは、立体像を高精度に表示することができる。 Further, the display system 100 of the present embodiment displays the first image P11 and the second image P12 so that the edge portions E in the first image P11 displayed by the first display unit 11e and the edge image PE are perceived as corresponding to each other. The images displayed by the display system 100 of the present embodiment are thereby perceived by the observer 1 without the edge portions E in the first image P11 separating from the edge image PE. Therefore, the display device 10e of the present embodiment can display a stereoscopic image with high accuracy.
 [第7の実施形態]
 以下、図面を参照して、本発明の第7の実施形態を説明する。なお、上述した第1の実施形態と同一の構成については同一の符号を付してその説明を省略する。
[Seventh Embodiment]
Hereinafter, a seventh embodiment of the present invention will be described with reference to the drawings. Components identical to those of the first embodiment described above are given the same reference numerals, and their description is omitted.
 図18は、本発明の第7の実施形態に係る表示システム100fの構成の一例を示す構成図である。表示システム100fは、画像情報供給装置2fと、画像設定装置4とを備えている。この画像設定装置4は、検出部41と、抽出部42と、設定部43とを備えており、第2表示部12に表示する第2の画像P12を設定する。 FIG. 18 is a block diagram showing an example of the configuration of a display system 100f according to the seventh embodiment of the present invention. The display system 100f includes an image information supply device 2f and an image setting device 4. The image setting device 4 includes a detection unit 41, an extraction unit 42, and a setting unit 43, and sets the second image P12 to be displayed on the second display unit 12.
 画像情報供給装置2fは、画像配信サーバを備えており、第1表示部11及び画像設定装置4の抽出部42に第1の画像P11を表示するための情報である第1の画像情報を供給する。 The image information supply device 2f includes an image distribution server, and supplies first image information, which is information for displaying the first image P11, to the first display unit 11 and to the extraction unit 42 of the image setting device 4.
 検出部41は、第1表示部11の位置を検出する位置センサを備えており、第1表示部11の位置を検出する。ここで、検出部41は、検出部41と第2表示部12との相対位置が変化しないように第2表示部12の近傍に固定されている。この検出部41は、第1表示部11の位置を検出することにより、第2表示部12に対する第1表示部11の位置を検出する。すなわち、検出部41は、第1表示部11と第2表示部12との相対位置を検出する。 The detection unit 41 includes a position sensor that detects the position of the first display unit 11, and detects the position of the first display unit 11. Here, the detection unit 41 is fixed in the vicinity of the second display unit 12 so that the relative position between the detection unit 41 and the second display unit 12 does not change. The detection unit 41 detects the position of the first display unit 11 with respect to the second display unit 12 by detecting the position of the first display unit 11. That is, the detection unit 41 detects the relative position between the first display unit 11 and the second display unit 12.
 抽出部42は、画像情報供給装置2fから供給される第1の画像情報を取得し、取得した第1の画像情報から、既知の構成によって第1の画像P11のエッジ部分Eを抽出する。具体的には、抽出部42は、取得した第1の画像情報に対してラプラシアンフィルタなどのエッジ抽出フィルタを適用してエッジ画像PEを生成する。次に、抽出部42は、生成したエッジ画像PEを設定部43に出力する。 The extraction unit 42 acquires the first image information supplied from the image information supply device 2f and, using a known technique, extracts the edge portions E of the first image P11 from the acquired first image information. Specifically, the extraction unit 42 generates an edge image PE by applying an edge extraction filter, such as a Laplacian filter, to the acquired first image information. The extraction unit 42 then outputs the generated edge image PE to the setting unit 43.
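As an illustration of the kind of edge extraction described above, the following sketch applies a 3x3 Laplacian filter to a grayscale image to produce an edge image; the function name and the list-of-lists image representation are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of a Laplacian edge-extraction filter such as the
# extraction unit 42 might apply to the first image information.
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian_edge_image(image):
    """Return |Laplacian response| of a 2-D grayscale image (list of lists)."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += LAPLACIAN[dy + 1][dx + 1] * image[y + dy][x + dx]
            edges[y][x] = abs(acc)
    return edges

# A flat region yields zero response; a vertical step edge yields a band.
flat = [[5] * 5 for _ in range(5)]
step = [[0, 0, 10, 10, 10] for _ in range(5)]
print(laplacian_edge_image(flat)[2][2])  # 0
print(laplacian_edge_image(step)[2][1])  # 10
```

A production system would typically use an optimized library filter rather than this nested-loop version, but the response pattern (zero in flat regions, nonzero along edges) is the same.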
 設定部43は、検出部41が検出した相対位置に基づいて、抽出部42が抽出したエッジ部分Eを変形するとともに、変形したエッジ部分Eを示す画像を第1の画像P11のエッジ部分Eに対応する第2の画像P12として設定する。 The setting unit 43 deforms the edge portions E extracted by the extraction unit 42 based on the relative position detected by the detection unit 41, and sets an image representing the deformed edge portions E as the second image P12 corresponding to the edge portions E of the first image P11.
 具体的には、設定部43は、検出部41が検出した第1表示部11の位置を取得する。次に、設定部43は、検出部41と第2表示部12との相対位置を予め記憶している不図示の記憶部から、検出部41と第2表示部12との相対位置を読み出す。次に、設定部43は、検出された第1表示部11の位置と、読み出した検出部41と第2表示部12との相対位置とに基づいて、第1表示部11と第2表示部12との相対位置を算出する。また、設定部43は、抽出部42が生成したエッジ画像PEを取得する。 Specifically, the setting unit 43 acquires the position of the first display unit 11 detected by the detection unit 41. Next, the setting unit 43 reads the relative position between the detection unit 41 and the second display unit 12 from a storage unit (not shown) that stores that relative position in advance. Next, the setting unit 43 calculates the relative position between the first display unit 11 and the second display unit 12 based on the detected position of the first display unit 11 and the read relative position between the detection unit 41 and the second display unit 12. The setting unit 43 also acquires the edge image PE generated by the extraction unit 42.
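The relative-position calculation above can be sketched as a simple composition of two known displacements: the first display unit's position relative to the sensor, plus the sensor's stored position relative to the second display unit. The 2-D vectors and names below are assumptions for illustration only.

```python
# Illustrative sketch of how the setting unit 43 could combine the detected
# position (first display relative to the detection unit 41) with the stored
# position (detection unit 41 relative to the second display unit 12).
def compose(a_to_b, b_to_c):
    """Add two relative-position vectors component-wise."""
    return tuple(p + q for p, q in zip(a_to_b, b_to_c))

hmd_rel_sensor = (0.25, 1.5)       # detected by the detection unit 41
sensor_rel_display = (0.0, 0.125)  # read from the storage unit
print(compose(hmd_rel_sensor, sensor_rel_display))  # (0.25, 1.625)
```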
 次に、設定部43は、相対位置と画像の変形ベクトルとを関連付けて予め記憶している不図示の記憶部から、算出した相対位置に一致する相対位置に関連付けられている画像の変形ベクトルを読み出す。ここで、画像の変形ベクトルとは、第1の画像P11の画素毎に第1の画像P11を変形する方向及び量を示す情報である。この第1の画像P11を変形する方向及び量は、観察者1が第1表示部11を介して第2表示部12を観察する場合に、第1表示部11が表示する第1の画像P11を第2表示部12に射影した画像と、観察者1が観察する第1の画像P11と基づいて設定される。例えば、観察者1が、第1表示部11を介して第2表示部12の中心を正面から観察した場合に、第1の画像P11は、第2表示部12の中心から放射状に広がって射影される。この場合、画像の変形ベクトルは、観察者1と第2表示部12との距離に応じた量と、第2表示部12の中心から放射状に広がる方向とに設定される。 Next, the setting unit 43 reads, from a storage unit (not shown) that stores relative positions in association with image deformation vectors in advance, the image deformation vector associated with the relative position that matches the calculated relative position. Here, an image deformation vector is information indicating, for each pixel of the first image P11, the direction and amount by which the first image P11 is deformed. This direction and amount are set based on the image obtained by projecting the first image P11 displayed by the first display unit 11 onto the second display unit 12 when the observer 1 views the second display unit 12 through the first display unit 11, and on the first image P11 as observed by the observer 1. For example, when the observer 1 views the center of the second display unit 12 from the front through the first display unit 11, the first image P11 is projected so as to spread radially from the center of the second display unit 12. In this case, the image deformation vector is set to an amount corresponding to the distance between the observer 1 and the second display unit 12, and to a direction spreading radially from the center of the second display unit 12.
 また、例えば、観察者1が、第1表示部11を介して第2表示部12を斜め横方向から観察した場合には、第1の画像P11は、第2表示部12に台形状に広がって射影される。この場合、画像の変形ベクトルは、観察者1と第2表示部12との距離に応じた量と、第2表示部12に台形状に広がる方向とに設定される。このように、画像の変形ベクトルとは、観察者1が、第1表示部11を介して第2表示部12を観察した場合に、第1表示部11が表示する第1の画像P11と、第2の画像P12とが対応するように変形する方向及び量が設定された情報である。 Further, for example, when the observer 1 views the second display unit 12 obliquely from the side through the first display unit 11, the first image P11 is projected onto the second display unit 12 spread into a trapezoidal shape. In this case, the image deformation vector is set to an amount corresponding to the distance between the observer 1 and the second display unit 12, and to a direction spreading into a trapezoid on the second display unit 12. Thus, an image deformation vector is information in which the direction and amount of deformation are set so that, when the observer 1 views the second display unit 12 through the first display unit 11, the first image P11 displayed by the first display unit 11 and the second image P12 correspond to each other.
 次に、設定部43は、読み出した画像の変形ベクトルによって、抽出部42が生成したエッジ画像PEを変形する。さらに、設定部43は、変形したエッジ画像PEを第2の画像P12として設定し、第2表示部12に出力する。すなわち、設定部43は、第2表示部12が表示する第2の画像P12の光が第1表示部11を透過する場合に、第1の画像P11または第2の画像P12を設定する。ここで、設定部43は、第1表示部11が表示する第1の画像P11のエッジ部分Eと第2の画像P12が示すエッジ部分Eとが対応するように、第1の画像P11または第2の画像P12を設定する。 Next, the setting unit 43 deforms the edge image PE generated by the extraction unit 42 according to the read image deformation vector. The setting unit 43 then sets the deformed edge image PE as the second image P12 and outputs it to the second display unit 12. That is, the setting unit 43 sets the first image P11 or the second image P12 for the case where the light of the second image P12 displayed by the second display unit 12 passes through the first display unit 11. Here, the setting unit 43 sets the first image P11 or the second image P12 so that the edge portions E of the first image P11 displayed by the first display unit 11 correspond to the edge portions E represented by the second image P12.
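For the frontal-viewing case described above, the deformation can be sketched as shifting each edge pixel radially away from the display centre by a scale factor tied to the observer-to-display distance. This is an illustrative assumption about the geometry, not the patent's actual stored deformation vectors, and the names are invented for the sketch.

```python
# Hypothetical sketch of the radial deformation the setting unit 43 might
# apply to edge-pixel coordinates for a viewer facing the display centre.
def radial_deform(points, center, scale):
    """Shift each (x, y) point away from `center` by factor `scale`."""
    cx, cy = center
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in points]

# A pixel at the centre stays put; off-centre pixels spread outward.
print(radial_deform([(100, 100), (110, 100)], (100, 100), 1.5))
# [(100.0, 100.0), (115.0, 100.0)]
```

The oblique-viewing (trapezoidal) case would use a projective transform instead of a uniform radial scale, but the per-pixel shift-by-vector structure is the same.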
 次に、図19を参照して、本実施形態における表示システム100fの動作について説明する。図19は、本実施形態における表示システム100fの動作の一例を示すフローチャートである。 Next, the operation of the display system 100f in this embodiment will be described with reference to FIG. FIG. 19 is a flowchart showing an example of the operation of the display system 100f in the present embodiment.
 まず、画像設定装置4の検出部41は、第1表示部11の位置を検出する(ステップS10)。設定部43は、検出された第1表示部11の位置に基づいて、第1表示部11と第2表示部12との相対位置を算出する。 First, the detection unit 41 of the image setting device 4 detects the position of the first display unit 11 (step S10). The setting unit 43 calculates the relative position between the first display unit 11 and the second display unit 12 based on the detected position of the first display unit 11.
 次に、画像設定装置4の抽出部42は、画像情報供給装置2fから供給される第1の画像情報を取得する(ステップS20)。 Next, the extraction unit 42 of the image setting device 4 acquires the first image information supplied from the image information supply device 2f (step S20).
 次に、抽出部42は、取得した第1の画像情報から、第1の画像P11のエッジ部分Eを抽出したエッジ画像PEを生成する(ステップS30)。抽出部42は、生成したエッジ画像PEを設定部43に出力する。 Next, the extraction unit 42 generates, from the acquired first image information, an edge image PE in which the edge portions E of the first image P11 are extracted (step S30). The extraction unit 42 outputs the generated edge image PE to the setting unit 43.
 次に、画像設定装置4の設定部43は、ステップS10において検出された第1表示部11の位置に基づいて、ステップS30において抽出されたエッジ画像PEを変形する(ステップS40)。 Next, the setting unit 43 of the image setting device 4 deforms the edge image PE extracted in step S30 based on the position of the first display unit 11 detected in step S10 (step S40).
 さらに、設定部43は、変形したエッジ画像PEを第2の画像P12として設定し、設定した第2の画像P12を第2表示部12に出力する(ステップS50)。第2表示部12は、このようにして設定された第2の画像P12を表示する。一方、第1表示部は、画像情報供給装置2fから供給される第1の画像P11を表示する。 Further, the setting unit 43 sets the deformed edge image PE as the second image P12, and outputs the set second image P12 to the second display unit 12 (step S50). The second display unit 12 displays the second image P12 set in this way. On the other hand, the first display unit displays the first image P11 supplied from the image information supply device 2f.
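The flow of steps S10 to S50 can be sketched as a short pipeline, with stub functions standing in for the detection, extraction, and deformation units; all names below are assumptions for illustration, not identifiers from the patent.

```python
# Minimal sketch of the FIG. 19 flow: detect relative position (S10),
# acquire image information and extract edges (S20-S30), then deform the
# edge image and set it as the second image P12 (S40-S50).
def set_second_image(first_image, detect_relative_position, extract_edges, deform):
    rel = detect_relative_position()          # S10
    edge_image = extract_edges(first_image)   # S20-S30
    return deform(edge_image, rel)            # S40-S50

# Toy stubs: "edges" are the non-zero pixels, "deformation" is the identity.
p12 = set_second_image(
    [0, 5, 0],
    detect_relative_position=lambda: (0, 0),
    extract_edges=lambda img: [1 if v else 0 for v in img],
    deform=lambda img, rel: img,
)
print(p12)  # [0, 1, 0]
```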
 以上説明したように本実施形態の表示システム100fは、第2表示部12が表示する第2の画像P12の光が第1表示部11を透過する場合に、第1の画像P11または第2の画像P12を設定する設定部43を備えている。この設定部43は、第1表示部11が表示する第1の画像P11のエッジ部分Eと第2の画像P12が示すエッジ部分Eとが対応するように、第1の画像P11または第2の画像P12を設定する。この場合に、表示システム100fは、第1表示部11と第2表示部12との相対位置(つまり、ヘッドマウントディスプレイ50を装着している観察者1の位置)に応じて、第1の画像P11または第2の画像P12を設定する。これにより、表示システム100fは、観察者1の位置が変化しても、観察者1の位置に応じて第2の画像P12を設定することができる。つまり、表示システム100fは、観察者1が立体像として認識可能な範囲を広範囲にして表示することができる。 As described above, the display system 100f of the present embodiment includes the setting unit 43, which sets the first image P11 or the second image P12 for the case where the light of the second image P12 displayed by the second display unit 12 passes through the first display unit 11. The setting unit 43 sets the first image P11 or the second image P12 so that the edge portions E of the first image P11 displayed by the first display unit 11 correspond to the edge portions E represented by the second image P12. In this case, the display system 100f sets the first image P11 or the second image P12 according to the relative position between the first display unit 11 and the second display unit 12 (that is, the position of the observer 1 wearing the head-mounted display 50). The display system 100f can thereby set the second image P12 according to the position of the observer 1 even when that position changes. In other words, the display system 100f can widen the range over which the observer 1 can perceive a stereoscopic image.
 また、表示システム100fは、第1表示部11と第2表示部12との相対位置を検出する検出部41と、第1表示部11が表示する第1の画像P11のエッジ部分Eを抽出する抽出部42とを備えている。また、表示システム100fの設定部43は、検出部41が検出した相対位置に基づいて、抽出部42が抽出したエッジ部分Eを変形する。さらに、設定部43は、変形したエッジ部分Eを示す画像を第1の画像P11のエッジ部分Eに対応する第2の画像P12として設定する。これにより、表示システム100fは、第1の画像情報から第2の画像情報を生成する。この場合に、表示システム100fは、第1表示部11と第2表示部12との相対位置(つまり、ヘッドマウントディスプレイ50を装着している観察者1の位置)に応じた第2の画像P12を示す第2の画像情報を生成する。これにより、表示システム100fは、観察者1の位置が変化しても、観察者1の位置に応じて第2の画像P12を示す第2の画像情報を生成することができる。つまり、表示システム100fは、観察者1が立体像として認識可能な範囲を広範囲にして表示することができる。 The display system 100f also includes the detection unit 41, which detects the relative position between the first display unit 11 and the second display unit 12, and the extraction unit 42, which extracts the edge portions E of the first image P11 displayed by the first display unit 11. The setting unit 43 of the display system 100f deforms the edge portions E extracted by the extraction unit 42 based on the relative position detected by the detection unit 41, and sets an image representing the deformed edge portions E as the second image P12 corresponding to the edge portions E of the first image P11. The display system 100f thereby generates second image information from the first image information. In this case, the display system 100f generates second image information representing a second image P12 that matches the relative position between the first display unit 11 and the second display unit 12 (that is, the position of the observer 1 wearing the head-mounted display 50). The display system 100f can therefore generate second image information representing the second image P12 according to the position of the observer 1 even when that position changes. In other words, the display system 100f can widen the range over which the observer 1 can perceive a stereoscopic image.
 さらに、表示システム100fは、画像情報供給装置2fから第2の画像情報を取得することなく、第1の画像P11に対応する第2の画像P12を第2表示部12に表示することができる。つまり、表示システム100fは、第1の画像情報のみを供給する汎用的な画像情報供給装置2fから画像情報の供給を受けることができる。 Furthermore, the display system 100f can display the second image P12 corresponding to the first image P11 on the second display unit 12 without acquiring second image information from the image information supply device 2f. In other words, the display system 100f can be supplied with image information by a general-purpose image information supply device 2f that supplies only first image information.
 なお、この表示システム100fにおいて、ヘッドマウントディスプレイ50が、画像設定装置4を備えている構成であってもよい。また、この表示システム100fにおいて、第2表示部12が、画像設定装置4を備えている構成であってもよい。このように構成することにより、表示システム100fは、画像設定装置4をいずれかの装置に内蔵することができるため、表示システム100fを小型化することができる。 In the display system 100f, the head-mounted display 50 may be configured to include the image setting device 4. Alternatively, in the display system 100f, the second display unit 12 may be configured to include the image setting device 4. By configuring the system in this way, the display system 100f incorporates the image setting device 4 into one of its devices, and the display system 100f can therefore be made smaller.
 [第8の実施形態]
 以下、図面を参照して、本発明の第8の実施形態を説明する。なお、上述した各実施形態と同一の構成については同一の符号を付してその説明を省略する。
[Eighth Embodiment]
Hereinafter, an eighth embodiment of the present invention will be described with reference to the drawings. Components identical to those of the embodiments described above are given the same reference numerals, and their description is omitted.
 図20は、本発明の第8の実施形態に係る表示システム100gの構成の一例を示す構成図である。表示システム100gは、画像設定装置4gを備えている。この画像設定装置4gは、判定部44を備えており、供給される第1の画像P11に基づいて、第2の画像P12を設定する。 FIG. 20 is a block diagram showing an example of the configuration of a display system 100g according to the eighth embodiment of the present invention. The display system 100g includes an image setting device 4g. The image setting device 4g includes a determination unit 44, and sets the second image P12 based on the supplied first image P11.
 判定部44は、観察者1が第1表示部11を介して第2表示部12を観察できる位置にいるか否かを判定する。判定部44は、観察者1が第1表示部11を介して第2表示部12を観察できる位置にいると判定した場合に、設定部43が設定した第2の画像P12を第2表示部12に供給する。すなわち、判定部44は、第2表示部12が第2の画像P12を表示した場合に、第2の画像P12の光が第1表示部11に入射するか否かを、検出部41が検出した相対位置に基づいて判定する。具体的には、判定部44は、上述した設定部43と同様にして、第1表示部11と第2表示部12との相対位置を算出する。次に、判定部44は、算出した相対位置に基づいて、第1表示部11が、予め設定されている第2表示部12の表示範囲内にあるか否かを判定する。次に、判定部44は、第1表示部11が第2表示部12の表示範囲内にあると判定した場合には、設定部43が設定した第2の画像P12を表示するための第2の画像情報を第2表示部12に出力する。一方、判定部44は、第1表示部11が第2表示部12の表示範囲内にないと判定した場合には、設定部43が設定した第2の画像P12を表示するための第2の画像情報を第2表示部12に出力しない。 The determination unit 44 determines whether the observer 1 is at a position from which the second display unit 12 can be observed through the first display unit 11. When the determination unit 44 determines that the observer 1 is at such a position, it supplies the second image P12 set by the setting unit 43 to the second display unit 12. That is, the determination unit 44 determines, based on the relative position detected by the detection unit 41, whether the light of the second image P12 would enter the first display unit 11 if the second display unit 12 displayed the second image P12. Specifically, the determination unit 44 calculates the relative position between the first display unit 11 and the second display unit 12 in the same manner as the setting unit 43 described above. Next, based on the calculated relative position, the determination unit 44 determines whether the first display unit 11 is within the preset display range of the second display unit 12. If the determination unit 44 determines that the first display unit 11 is within the display range of the second display unit 12, it outputs the second image information for displaying the second image P12 set by the setting unit 43 to the second display unit 12. If, on the other hand, the determination unit 44 determines that the first display unit 11 is not within the display range of the second display unit 12, it does not output that second image information to the second display unit 12.
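The display-range check performed by the determination unit can be sketched as a simple distance test: the head-mounted display counts as "within the display range" when its detected position lies inside a preset radius of the second display unit. The circular-range geometry and all names below are assumptions for illustration.

```python
# Hypothetical sketch of the determination unit 44's decision: output the
# edge image PE only when the first display unit (HMD) is within range.
import math

def within_display_range(hmd_pos, display_pos, max_distance):
    return math.dist(hmd_pos, display_pos) <= max_distance

def select_output(hmd_pos, display_pos, max_distance, second_image):
    """Return the image to send to the second display unit, or None."""
    if within_display_range(hmd_pos, display_pos, max_distance):
        return second_image
    return None

print(select_output((1.0, 0.5), (0.0, 0.0), 2.0, "PE"))  # PE
print(select_output((5.0, 0.0), (0.0, 0.0), 2.0, "PE"))  # None
```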
 以上説明したように本実施形態の表示システム100gは、第2表示部12が第2の画像P12を表示した場合に第2の画像P12の光が第1表示部11に入射するか否かを、検出部41が検出した相対位置に基づいて判定する判定部44を備えている。また、表示システム100gの第2表示部12は、第2の画像P12の光が第1表示部11に入射すると判定部44が判定した場合に、第2の画像P12を表示する。ここで、第2の画像P12は、第1の画像P11のエッジ部分Eを示す画像である。このため、ヘッドマウントディスプレイ50を装着していない観察者1が第2の画像P12を観察した場合には、観察者1は、内容の理解が困難な画像を観察する。観察者1は、このような内容の理解が困難なエッジ部分Eを示す画像を観察した場合に不快を感じることがある。この表示システム100gは、ヘッドマウントディスプレイ50を装着して、第2表示部12を観察している観察者1がいない場合には、第2表示部12に第2の画像P12を表示しない。したがって、表示システム100gは、ヘッドマウントディスプレイ50を装着していない観察者1が第2の画像P12(つまり、内容の理解が困難な画像)を観察することによって感じる不快感を低減することができる。さらに、この場合には、表示システム100gは、第2の画像P12を表示しないことにより消費電力を低減することができる。 As described above, the display system 100g of the present embodiment includes the determination unit 44, which determines, based on the relative position detected by the detection unit 41, whether the light of the second image P12 would enter the first display unit 11 if the second display unit 12 displayed the second image P12. The second display unit 12 of the display system 100g displays the second image P12 when the determination unit 44 determines that the light of the second image P12 would enter the first display unit 11. Here, the second image P12 is an image representing the edge portions E of the first image P11. Therefore, if an observer 1 not wearing the head-mounted display 50 views the second image P12, that observer sees an image whose content is difficult to understand, and may feel uncomfortable viewing such an edge-only image. The display system 100g does not display the second image P12 on the second display unit 12 when no observer 1 wearing the head-mounted display 50 is viewing the second display unit 12. The display system 100g can therefore reduce the discomfort that an observer 1 not wearing the head-mounted display 50 would feel from viewing the second image P12 (that is, an image whose content is difficult to understand). Furthermore, in this case, the display system 100g can reduce power consumption by not displaying the second image P12.
 [第9の実施形態]
 以下、図面を参照して、本発明の第9の実施形態を説明する。なお、上述した各実施形態と同一の構成については同一の符号を付してその説明を省略する。
[Ninth Embodiment]
The ninth embodiment of the present invention will be described below with reference to the drawings. Components identical to those of the embodiments described above are given the same reference numerals, and their description is omitted.
 図21は、本発明の第9の実施形態に係る表示システム100hの構成の一例を示す構成図である。表示システム100hは、画像情報供給装置2hと、画像設定装置4hとを備えている。画像情報供給装置2hは、第1の画像P11を表示するための第1の画像情報を、第1表示部11及び画像設定装置4hに供給する。また、画像情報供給装置2hは、第3の画像P13を表示するための第3の画像情報を、画像設定装置4hに供給する。ここで、第3の画像P13とは、第2表示部12に表示される画像であって、第2の画像P12とは異なる画像である。この第3の画像P13は、ヘッドマウントディスプレイ50を装着していない観察者1が観察した場合に、エッジ画像PEのように表示されている内容の理解が困難な画像ではなく、その内容の理解が容易な画像である。つまり、第3の画像P13は、観察者1が第2の画像P12を観察した場合に比して、観察した観察者1が不快を感じる程度が低い画像である。なお、第3の画像P13は、第1の画像P11と同一の画像であってもよい。また、画像設定装置4hは、判定部44hを備えている。 FIG. 21 is a block diagram showing an example of the configuration of a display system 100h according to the ninth embodiment of the present invention. The display system 100h includes an image information supply device 2h and an image setting device 4h. The image information supply device 2h supplies first image information for displaying the first image P11 to the first display unit 11 and to the image setting device 4h. The image information supply device 2h also supplies third image information for displaying a third image P13 to the image setting device 4h. Here, the third image P13 is an image displayed on the second display unit 12 and is different from the second image P12. Unlike the edge image PE, whose displayed content is difficult to understand when viewed by an observer 1 not wearing the head-mounted display 50, the third image P13 is an image whose content is easy to understand. That is, the third image P13 is an image that causes a viewing observer 1 less discomfort than viewing the second image P12 would. Note that the third image P13 may be the same image as the first image P11. The image setting device 4h includes a determination unit 44h.
 判定部44hは、第2の画像P12の光が第1表示部11に入射しないと判定した場合に、第3の画像P13を第2表示部12に供給する。具体的には、判定部44hは、判定部44と同様にして、ヘッドマウントディスプレイ50を装着した観察者1が第1表示部11を介して第2表示部12を観察できる位置にいるか否かを判定する。次に、判定部44hは、第1表示部11が第2表示部12の表示範囲内にあると判定した場合には、設定部43が設定した第2の画像P12を表示するための第2の画像情報を第2表示部12に出力する。一方、判定部44hは、第1表示部11が第2表示部12の表示範囲内にないと判定した場合には、供給される第3の画像P13を表示するための第3の画像情報を第2表示部12に供給する。 When the determination unit 44h determines that the light of the second image P12 would not enter the first display unit 11, it supplies the third image P13 to the second display unit 12. Specifically, like the determination unit 44, the determination unit 44h determines whether an observer 1 wearing the head-mounted display 50 is at a position from which the second display unit 12 can be observed through the first display unit 11. If the determination unit 44h determines that the first display unit 11 is within the display range of the second display unit 12, it outputs the second image information for displaying the second image P12 set by the setting unit 43 to the second display unit 12. If, on the other hand, the determination unit 44h determines that the first display unit 11 is not within the display range of the second display unit 12, it supplies the supplied third image information for displaying the third image P13 to the second display unit 12.
 以上説明したように本実施形態の表示システム100hの判定部44hは、第2の画像P12の光が第1表示部11に入射しないと判定した場合に、第3の画像P13を第2表示部12に供給する。また、第2表示部12は、第2の画像P12の光が第1表示部11に入射しないと判定部44hが判定した場合に、第2の画像P12とは異なる第3の画像を表示する。これにより、表示システム100hは、ヘッドマウントディスプレイ50を装着している観察者1が第2表示部12を観察していない場合には、第2表示部12に第2の画像P12に代えて、第3の画像P13を表示することができる。ここで、第2の画像P12は、第1の画像P11のエッジ部分Eを示す画像である。このため、ヘッドマウントディスプレイ50を装着していない観察者1が第2の画像P12を観察した場合には、観察者1は、内容の理解が困難な画像を観察する。また、上述したように、第3の画像P13は、第2の画像P12とは異なる画像であって、ヘッドマウントディスプレイ50を装着していない観察者1が観察した場合に、内容の理解が容易な画像である。例えば、第3の画像P13は、第1の画像P11と同じ画像である。この表示システム100hは、ヘッドマウントディスプレイ50を装着して、第2表示部12を観察している観察者1がいない場合には、内容の理解が困難な画像の表示に代えて、内容の理解が容易な第3の画像P13を表示することができる。これにより、表示システム100hは、ヘッドマウントディスプレイ50を装着していない観察者1が第2の画像P12(つまり、内容の理解が困難な画像)を観察することによって感じる不快感を低減することができる。 As described above, when the determination unit 44h of the display system 100h of the present embodiment determines that the light of the second image P12 would not enter the first display unit 11, it supplies the third image P13 to the second display unit 12, and the second display unit 12 displays the third image P13, which is different from the second image P12. The display system 100h can thereby display the third image P13 on the second display unit 12 in place of the second image P12 when no observer 1 wearing the head-mounted display 50 is viewing the second display unit 12. Here, the second image P12 is an image representing the edge portions E of the first image P11, so an observer 1 not wearing the head-mounted display 50 who views the second image P12 sees an image whose content is difficult to understand. As described above, the third image P13 is a different image whose content is easy for an observer 1 not wearing the head-mounted display 50 to understand; for example, the third image P13 is the same image as the first image P11. When no observer 1 wearing the head-mounted display 50 is viewing the second display unit 12, the display system 100h can thus display the easy-to-understand third image P13 instead of an image whose content is difficult to understand. The display system 100h can thereby reduce the discomfort that an observer 1 not wearing the head-mounted display 50 would feel from viewing the second image P12 (that is, an image whose content is difficult to understand).
 なお、第2表示部12は、第2の画像P12を第3の画像P13に重ねて表示してもよい。また、第2表示部12は、第2の画像P12を第3の画像P13に重ねて表示する場合には、第2の画像P12の明るさ(例えば、輝度)第3の画像P13の明るさよりも低減させて、これらの画像を表示してもよい。このように構成しても、表示システム100hは、第1の画像P11と第2の画像P12とを対応させて表示することができるため、ヘッドマウントディスプレイ50を装着している観察者1に対して、広範囲に立体像を表示することができる。また、これにより表示システム100hは、ヘッドマウントディスプレイ50を装着していない観察者1が、第2の画像P12を観察することによって感じる不快感を低減させつつ、第2の画像P12とは異なる第3の画像P13を表示することができる。 Note that the second display unit 12 may display the second image P12 superimposed on the third image P13. When displaying the second image P12 superimposed on the third image P13, the second display unit 12 may display these images with the brightness (for example, luminance) of the second image P12 reduced below the brightness of the third image P13. Even with this configuration, the display system 100h can display the first image P11 and the second image P12 in correspondence with each other, and can therefore display a stereoscopic image over a wide range to an observer 1 wearing the head-mounted display 50. The display system 100h can thereby display the third image P13, which is different from the second image P12, while reducing the discomfort that an observer 1 not wearing the head-mounted display 50 would feel from viewing the second image P12.
 [第10の実施形態]
 以下、図面を参照して、本発明の第10の実施形態を説明する。なお、上述した各実施形態と同一の構成については同一の符号を付してその説明を省略する。
[Tenth embodiment]
Hereinafter, a tenth embodiment of the present invention will be described with reference to the drawings. Components identical to those of the embodiments described above are given the same reference numerals, and their description is omitted.
 図22は、本発明の第10の実施形態に係る表示システム100jの構成の一例を示す構成図である。表示システム100jは、画像情報供給装置2jと、画像設定装置4jとを備えている。画像情報供給装置2jは、立体像の奥行き位置情報を含む第1の画像情報を画像設定装置4jに供給する。つまり、この第1の画像情報とは、立体像を表示するための画像情報である。画像設定装置4jは、視差設定部45を備えている。 FIG. 22 is a block diagram showing an example of the configuration of a display system 100j according to the tenth embodiment of the present invention. The display system 100j includes an image information supply device 2j and an image setting device 4j. The image information supply device 2j supplies the first image information including the depth position information of the stereoscopic image to the image setting device 4j. That is, the first image information is image information for displaying a stereoscopic image. The image setting device 4j includes a parallax setting unit 45.
 視差設定部45は、両眼視差により設定される三次元画像の奥行き位置の範囲を、第2表示部12が第2の画像P12を表示する奥行き位置を含む奥行き位置の範囲にして、両眼視差を設定する。具体的には、視差設定部45は、設定部43と同様にして、第1表示部11と第2表示部12との相対位置を算出する。次に、視差設定部45は、画像情報供給装置2jから第1の画像P11を表示するための第1の画像情報を取得する。次に、視差設定部45は、算出した相対位置に基づいて、第1表示部11を基準にした第2表示部12の奥行き位置を算出する。この視差設定部45が算出する第2表示部12の奥行き位置について、図23を参照して説明する。 The parallax setting unit 45 sets the binocular parallax so that the range of depth positions of the three-dimensional image established by the binocular parallax includes the depth position at which the second display unit 12 displays the second image P12. Specifically, the parallax setting unit 45 calculates the relative position between the first display unit 11 and the second display unit 12 in the same manner as the setting unit 43. Next, the parallax setting unit 45 acquires the first image information for displaying the first image P11 from the image information supply device 2j. Next, based on the calculated relative position, the parallax setting unit 45 calculates the depth position of the second display unit 12 relative to the first display unit 11. The depth position of the second display unit 12 calculated by the parallax setting unit 45 will be described with reference to FIG. 23.
 図23は、本実施形態における三次元画像の奥行き位置の一例を示す模式図である。視差設定部45は、第1表示部11を基準にした第2表示部12の奥行き位置としての、第1表示部11と第2表示部12との距離Lpを算出する。次に、視差設定部45は、算出した距離Lpを中心にした奥行き方向の距離Lipの範囲を、立体像を表示する奥行き位置の範囲に設定する。次に、視差設定部45は、取得した第1の画像情報に含まれる立体像の奥行き位置情報に基づいて、設定した奥行き位置の範囲(つまり、距離Lipの範囲)に、左眼用画像P11L及び右眼用画像P11Rの両眼視差を設定する。つまり、視差設定部45は、第2の画像P12を表示する奥行き位置を含む奥行き位置の範囲に、立体像が表示されるように、左眼用画像P11L及び右眼用画像P11Rの両眼視差を設定する。このようにして、第1表示部11は、第1の画像P11を観察する観察者1に対して、距離Lipの範囲の奥行き位置に第1の画像P11に対応する立体像IP11を表示する。 FIG. 23 is a schematic diagram illustrating an example of the depth position of the three-dimensional image in the present embodiment. The parallax setting unit 45 calculates the distance Lp between the first display unit 11 and the second display unit 12 as the depth position of the second display unit 12 relative to the first display unit 11. Next, the parallax setting unit 45 sets the range of the distance Lip in the depth direction, centered on the calculated distance Lp, as the depth-position range in which the stereoscopic image is displayed. Next, based on the depth position information of the stereoscopic image included in the acquired first image information, the parallax setting unit 45 sets the binocular parallax of the left-eye image P11L and the right-eye image P11R within the set depth-position range (that is, the range of the distance Lip). In other words, the parallax setting unit 45 sets the binocular parallax of the left-eye image P11L and the right-eye image P11R so that the stereoscopic image is displayed within a depth-position range that includes the depth position at which the second image P12 is displayed. In this way, the first display unit 11 displays, to the observer 1 observing the first image P11, the stereoscopic image IP11 corresponding to the first image P11 at a depth position within the range of the distance Lip.
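The depth-range handling described above can be sketched in code. The sketch below is a hypothetical illustration only: the function names, the interocular distance value, and the similar-triangles disparity formula are assumptions made for explanation and are not taken from this publication.

```python
# Hypothetical sketch (not the patent's implementation) of clamping a
# requested depth position into a range of width Lip centred on the
# display distance Lp, then deriving a binocular disparity for it.

def clamp_depth(depth_mm, lp_mm, lip_mm):
    """Clamp a depth position into [Lp - Lip/2, Lp + Lip/2]."""
    lo, hi = lp_mm - lip_mm / 2.0, lp_mm + lip_mm / 2.0
    return max(lo, min(hi, depth_mm))

def disparity_mm(depth_mm, lp_mm, ipd_mm=63.0):
    """On-screen disparity (assumed similar-triangles formula) that
    places a point at depth_mm when the display plane sits at lp_mm."""
    return ipd_mm * (depth_mm - lp_mm) / depth_mm

# A point requested at 2000 mm is pulled into a 400 mm range around Lp = 1000 mm.
clamped = clamp_depth(2000.0, 1000.0, 400.0)  # -> 1200.0
```

A point at the display plane itself (depth equal to Lp) then gets zero disparity, matching the intuition that the stereoscopic image and the second image P12 coincide there.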
 以上説明したように本実施形態の表示システム100jの第1表示部11は、観察者1の左眼に左眼用画像P11Lを表示する左眼表示部11Lと、観察者1の右眼に右眼用画像P11Rを表示する右眼表示部11Rとを備えている。また、第1の画像P11には、互いに両眼視差を有する左眼用画像P11Lと右眼用画像P11Rとが含まれている。これにより、表示システム100jは、両眼視差を有さない第1の画像P11を表示する場合に比して、観察者1が観察する立体像の立体感を向上させることができる。 As described above, the first display unit 11 of the display system 100j of the present embodiment includes the left-eye display unit 11L, which displays the left-eye image P11L to the left eye of the observer 1, and the right-eye display unit 11R, which displays the right-eye image P11R to the right eye of the observer 1. Further, the first image P11 includes a left-eye image P11L and a right-eye image P11R that have binocular parallax with each other. Thereby, the display system 100j can improve the stereoscopic effect of the stereoscopic image observed by the observer 1, compared with the case of displaying a first image P11 having no binocular parallax.
 さらに、表示システム100jは、両眼視差により設定される三次元画像の奥行き位置の範囲を、第2表示部12が第2の画像P12を表示する奥行き位置を含む奥行き位置の範囲にして、両眼視差を設定する視差設定部45を備えている。これにより、表示システム100jは、第1の画像P11が有する両眼視差による立体像の奥行き位置と、第1の画像P11のエッジ部分Eを示す第2の画像P12の奥行き位置とを近接させて各画像を表示する。ここで、第1の画像P11が有する両眼視差による立体像の奥行き位置と第2の画像P12の奥行き位置とが近接していない場合には、観察者1が観察する第1の画像P11と第2の画像P12とが対応しにくくなることがある。例えば、第1の画像P11のエッジ部分と、第2の画像P12のエッジ画像PEとの位置が一致しにくくなることがある。一方、表示システム100jは、両眼視差による立体像の奥行き位置と、第2の画像P12の奥行き位置とを近接させることにより、観察者1が観察する第1の画像P11と第2の画像P12とが対応しやすくする。つまり、表示システム100jは、観察者1が、両眼視差を有する第1の画像P11と、第2の画像P12とを観察した場合に生じる違和感を低減することができる。 Furthermore, the display system 100j includes the parallax setting unit 45, which sets the binocular parallax so that the depth-position range of the three-dimensional image defined by the binocular parallax becomes a range that includes the depth position at which the second display unit 12 displays the second image P12. Thereby, the display system 100j displays the images with the depth position of the stereoscopic image produced by the binocular parallax of the first image P11 close to the depth position of the second image P12, which indicates the edge portion E of the first image P11. Here, if the depth position of the stereoscopic image produced by the binocular parallax of the first image P11 and the depth position of the second image P12 are not close to each other, the first image P11 and the second image P12 observed by the observer 1 may fail to correspond. For example, the position of the edge portion of the first image P11 and the position of the edge image PE of the second image P12 may fail to coincide. In contrast, by bringing the depth position of the stereoscopic image produced by the binocular parallax close to the depth position of the second image P12, the display system 100j makes it easier for the first image P11 and the second image P12 observed by the observer 1 to correspond.
That is, the display system 100j can reduce a sense of incongruity that occurs when the observer 1 observes the first image P11 having the binocular parallax and the second image P12.
 [第11の実施形態]
 以下、図面を参照して、本発明の第11の実施形態を説明する。なお、上述した各実施形態と同一の構成については同一の符号を付してその説明を省略する。
[Eleventh embodiment]
The eleventh embodiment of the present invention will be described below with reference to the drawings. The same components as those in the embodiments described above are denoted by the same reference numerals, and their description is omitted.
 図24は、本発明の第11の実施形態に係る表示システム100kの構成の一例を示す構成図である。表示システム100kは、画像情報供給装置2kと、画像設定装置4kと、ヘッドマウントディスプレイ50kとを備えている。画像情報供給装置2kは、画像情報を画像設定装置4kとヘッドマウントディスプレイ50kとに供給する。 FIG. 24 is a block diagram showing an example of the configuration of a display system 100k according to the eleventh embodiment of the present invention. The display system 100k includes an image information supply device 2k, an image setting device 4k, and a head mounted display 50k. The image information supply device 2k supplies image information to the image setting device 4k and the head mounted display 50k.
 画像設定装置4kは、画像設定装置4と同様にして、入力される画像情報と検出した第1表示部11及び第2表示部の相対位置とに基づいて、第2の画像情報を設定する。 The image setting device 4k sets the second image information based on the input image information and the detected relative positions of the first display unit 11 and the second display unit in the same manner as the image setting device 4.
 ヘッドマウントディスプレイ50kは、画像検出部51と、画像設定部52とを備えている。 The head mounted display 50k includes an image detection unit 51 and an image setting unit 52.
 画像検出部51は、不図示の撮像素子を備えており、第2表示部12が表示する第2の画像P12を撮像素子によって検出する。また、画像検出部51は、検出した第2の画像P12を示す画像情報を画像設定部52に出力する。 The image detection unit 51 includes an image sensor (not shown), and detects the second image P12 displayed by the second display unit 12 by the image sensor. Further, the image detection unit 51 outputs image information indicating the detected second image P12 to the image setting unit 52.
 画像設定部52は、画像検出部51が検出した第2の画像P12を示す画像情報を取得する。次に、画像設定部52は、画像情報供給装置2kが供給する画像のうちから、検出された第2の画像P12に対応する画像を選択して、選択した画像の画像情報を画像情報供給装置2kから取得する。次に、画像設定部52は、画像情報供給装置2kから取得した画像情報を、第1の画像P11を示す第1の画像情報として第1表示部11に出力する。このようにして、第1表示部11は、画像設定部52が設定した第1の画像P11を表示する。また、第2表示部12は、画像設定装置4kが設定した第2の画像P12を表示する。 The image setting unit 52 acquires the image information indicating the second image P12 detected by the image detection unit 51. Next, the image setting unit 52 selects, from among the images supplied by the image information supply device 2k, the image corresponding to the detected second image P12, and acquires the image information of the selected image from the image information supply device 2k. Next, the image setting unit 52 outputs the image information acquired from the image information supply device 2k to the first display unit 11 as the first image information indicating the first image P11. In this way, the first display unit 11 displays the first image P11 set by the image setting unit 52. The second display unit 12 displays the second image P12 set by the image setting device 4k.
 この表示システム100kの具体的な構成の一例について、図25を参照して説明する。図25は、本実施形態における表示システム100kの構成の一例を示す模式図である。 An example of a specific configuration of the display system 100k will be described with reference to FIG. FIG. 25 is a schematic diagram illustrating an example of the configuration of the display system 100k in the present embodiment.
 画像情報供給装置2kは、画像配信事業者などが運用する画像サーバを備えている。画像情報供給装置2kは、この画像サーバが記憶している複数の画像情報をアンテナA1及びアンテナA2を介して、画像設定装置4kに供給する。 The image information supply device 2k includes an image server that is operated by an image distributor. The image information supply device 2k supplies the plurality of image information stored in the image server to the image setting device 4k via the antenna A1 and the antenna A2.
 この画像設定装置4k及び第2表示部12とは、例えば、駅構内などに設置されるデジタルサイネージシステムである。画像設定装置4kは、アンテナA1及びアンテナA2を介して画像情報供給装置2kから取得した複数の画像情報のうちから、第2の画像P12として第2表示部12に表示する画像を示す画像情報を選択する。そして、画像設定装置4kは、上述したように、選択した画像情報に含まれる画像のエッジ部分Eを抽出して第2の画像P12を設定し、設定した第2の画像P12を表示する第2の画像情報を第2表示部12に供給する。第2表示部12は、画像設定装置4kから供給される第2の画像情報に基づいて、第2の画像P12を表示する。 The image setting device 4k and the second display unit 12 constitute, for example, a digital signage system installed in a station concourse or the like. The image setting device 4k selects, from among the plurality of pieces of image information acquired from the image information supply device 2k via the antenna A1 and the antenna A2, image information indicating the image to be displayed on the second display unit 12 as the second image P12. Then, as described above, the image setting device 4k extracts the edge portion E of the image included in the selected image information to set the second image P12, and supplies second image information for displaying the set second image P12 to the second display unit 12. The second display unit 12 displays the second image P12 based on the second image information supplied from the image setting device 4k.
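The image setting device 4k is described as extracting the edge portion E of the selected image to produce the second image P12. A minimal sketch of such an edge extraction is shown below; the gradient-magnitude threshold used here is only an assumed stand-in for whatever edge-extraction method an actual implementation would use, and all names are illustrative.

```python
# Hedged sketch: extract a binary edge map (a stand-in for the second
# image P12) from a 2-D grayscale image with values in [0, 1].
import numpy as np

def edge_image(img, threshold=0.25):
    """Return 1 where the local gradient magnitude exceeds threshold."""
    gy, gx = np.gradient(img.astype(float))  # per-axis finite differences
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return (mag > threshold).astype(np.uint8)

# A dark square on a bright background yields edges only at its border.
img = np.ones((8, 8))
img[2:6, 2:6] = 0.0
edges = edge_image(img)
```

In this toy input, corner and border pixels of the square are marked while the uniform background and the square's interior remain zero.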
 一方、画像情報供給装置2kは、画像サーバが記憶している複数の画像情報をネットワーク5及びアンテナA3を介して、観察者1が装着するヘッドマウントディスプレイ50kに供給する。ヘッドマウントディスプレイ50kは、不図示のアンテナを備えており、画像情報供給装置2kがネットワーク5及びアンテナA3を介して供給する複数の画像情報を受信する。 On the other hand, the image information supply device 2k supplies a plurality of image information stored in the image server to the head mounted display 50k worn by the observer 1 via the network 5 and the antenna A3. The head mounted display 50k includes an antenna (not shown), and receives a plurality of pieces of image information supplied from the image information supply device 2k via the network 5 and the antenna A3.
 ここで、ヘッドマウントディスプレイ50kを装着している観察者1が第2表示部12を観察する。画像設定装置4kの検出部41は、第1表示部11の相対位置を検出する。画像設定装置4kの設定部43は、上述したように、検出された相対位置に基づいてエッジ部分Eを変形した第2の画像P12を設定する。 Here, the observer 1 wearing the head mounted display 50k observes the second display unit 12. The detection unit 41 of the image setting device 4k detects the relative position of the first display unit 11. As described above, the setting unit 43 of the image setting device 4k sets the second image P12 in which the edge portion E is deformed based on the detected relative position.
 一方、ヘッドマウントディスプレイ50kの画像検出部51は、第2表示部12が表示する第2の画像P12を検出する。次に、画像設定部52は、画像情報供給装置2kから供給される複数の画像情報のうちから、検出された第2の画像P12に対応する画像情報を選択する。そして、画像設定部52は、選択した画像情報を第1の画像P11を示す第1の画像情報として第1表示部11に供給する。第1表示部11は、供給される画像情報に基づいて、第1の画像P11を表示する。 On the other hand, the image detection unit 51 of the head mounted display 50k detects the second image P12 displayed by the second display unit 12. Next, the image setting unit 52 selects image information corresponding to the detected second image P12 from among a plurality of pieces of image information supplied from the image information supply device 2k. Then, the image setting unit 52 supplies the selected image information to the first display unit 11 as first image information indicating the first image P11. The first display unit 11 displays the first image P11 based on the supplied image information.
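The selection step just described — picking, from the supplied candidates, the image corresponding to the detected second image P12 — could be realized in many ways. The following sketch assumes a simple pixel-wise difference as the matching criterion; the criterion, the function names, and the candidate keys are all hypothetical and not taken from this publication.

```python
# Hedged sketch of how an image setting unit (like unit 52) might pick,
# from a set of candidate edge images, the one closest to the detected
# second image P12, by minimising the summed absolute pixel difference.
import numpy as np

def select_first_image(detected_p12, candidates):
    """Return the key of the candidate edge image closest to detected_p12."""
    def distance(a):
        return float(np.abs(a.astype(int) - detected_p12.astype(int)).sum())
    return min(candidates, key=lambda k: distance(candidates[k]))

p12 = np.eye(4, dtype=np.uint8)                  # detected edge pattern
cands = {"ad_a": np.eye(4, dtype=np.uint8),      # identical: distance 0
         "ad_b": np.zeros((4, 4), np.uint8)}     # distance 4
best = select_first_image(p12, cands)            # -> "ad_a"
```

The chosen key would then index the full signage image to be supplied to the first display unit 11 as the first image P11.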
 このようにして、表示システム100kは、デジタルサイネージシステムが表示する画像、つまり第2表示部12が表示する第2の画像P12に対応した画像を、第1の画像P11として、ヘッドマウントディスプレイ50kの第1表示部11に表示する。 In this way, the display system 100k displays, as the first image P11 on the first display unit 11 of the head mounted display 50k, the image corresponding to the image displayed by the digital signage system, that is, to the second image P12 displayed by the second display unit 12.
 以上説明したように表示システム100kは、画像検出部51と、画像設定部52とを備えている。この画像検出部51は、第2表示部12が表示する第2の画像P12を検出する。また、画像設定部52は、画像検出部51が検出した第2の画像P12に基づいて第1の画像P11を設定するとともに、設定した第1の画像P11を第1表示部11に供給する。さらに、表示システム100kの画像設定部52は、画像検出部51が検出した第2の画像P12に基づいて、入力される複数の画像のうちから第1の画像P11として表示する画像を選択することにより、第1の画像P11を設定する。これにより、表示システム100kは、第2表示部12に表示されている第2の画像P12に基づいて、第1の画像P11を設定することができる。つまり、表示システム100kは、第1表示部11に表示する第1の画像P11を、第2の画像P12によって設定することができる。 As described above, the display system 100k includes the image detection unit 51 and the image setting unit 52. The image detection unit 51 detects the second image P12 displayed by the second display unit 12. The image setting unit 52 sets the first image P11 based on the second image P12 detected by the image detection unit 51, and supplies the set first image P11 to the first display unit 11. Furthermore, the image setting unit 52 of the display system 100k sets the first image P11 by selecting, based on the second image P12 detected by the image detection unit 51, the image to be displayed as the first image P11 from among a plurality of input images. Thereby, the display system 100k can set the first image P11 based on the second image P12 displayed on the second display unit 12. That is, the display system 100k can set the first image P11 to be displayed on the first display unit 11 by means of the second image P12.
 例えば、上述したようなデジタルサイネージシステムにおいて、表示システム100kは、第2表示部12にサイネージ画像のエッジ部分Eを示す画像を表示する。ここで、ヘッドマウントディスプレイ50kを装着している観察者1が第2表示部12を観察した場合に、観察者1が装着しているヘッドマウントディスプレイ50kが備える第1表示部11に、サイネージ画像を表示する。これにより、表示システム100kは、観察者1に対して、第2表示部12を観察したことに応じてサイネージ画像を表示するため、強い印象を有するサイネージ画像を表示することができる。つまり、表示システム100kは、表示した画像を観察者1が観察することにより、観察者1が受ける印象の程度を向上させることができる。 For example, in the digital signage system described above, the display system 100k displays, on the second display unit 12, an image indicating the edge portion E of a signage image. Here, when the observer 1 wearing the head mounted display 50k observes the second display unit 12, the signage image is displayed on the first display unit 11 of the head mounted display 50k worn by the observer 1. Thereby, since the display system 100k displays the signage image to the observer 1 in response to the observer 1 observing the second display unit 12, it can display a signage image that makes a strong impression. That is, the display system 100k can strengthen the impression that the observer 1 receives from observing the displayed image.
 なお、画像設定部52は、第1の画像情報を生成することによって第1の画像P11を設定してもよい。この場合、画像設定部52は、生成した第1の画像情報を第1表示部11に供給する。このように構成しても、表示システム100kは、第2の画像P12に対応する第1の画像P11を表示することができる。これにより、表示システム100kは、ヘッドマウントディスプレイ50kがアンテナA3を備えていなくても、第1の画像P11を第1表示部11に表示することができる。 Note that the image setting unit 52 may set the first image P11 by generating the first image information. In this case, the image setting unit 52 supplies the generated first image information to the first display unit 11. Even with this configuration, the display system 100k can display the first image P11 corresponding to the second image P12. Thereby, the display system 100k can display the first image P11 on the first display unit 11 even if the head mounted display 50k does not include the antenna A3.
 以上、本発明の実施形態を図面を参照して詳述してきたが、具体的な構成はこの実施形態に限られるものではなく、本発明の趣旨を逸脱しない範囲で適宜変更を加えることができる。また、上述した実施形態を適宜組み合わせてもよい。 While the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and modifications can be made as appropriate without departing from the spirit of the present invention. The embodiments described above may also be combined as appropriate.
 なお、上記の実施形態における第1表示部11、第2表示部12、設定部13、位置検出部14、検出部41、抽出部42、設定部43、判定部44、視差設定部45、画像検出部51、画像設定部52(以下、これらを総称して制御部CONTと記載する)又はこの制御部CONTが備える各部は、専用のハードウェアにより実現されるものであってもよく、また、メモリおよびマイクロプロセッサにより実現させるものであってもよい。 The first display unit 11, the second display unit 12, the setting unit 13, the position detection unit 14, the detection unit 41, the extraction unit 42, the setting unit 43, the determination unit 44, the parallax setting unit 45, the image detection unit 51, and the image setting unit 52 in the above embodiments (hereinafter collectively referred to as the control unit CONT), or each unit included in the control unit CONT, may be realized by dedicated hardware, or may be realized by a memory and a microprocessor.
 なお、この制御部CONTが備える各部は、メモリおよびCPU(中央演算装置)により構成され、制御部CONTが備える各部の機能を実現するためのプログラムをメモリにロードして実行することによりその機能を実現させるものであってもよい。 Each unit included in the control unit CONT may be constituted by a memory and a CPU (central processing unit), and the function of each unit may be realized by loading a program for realizing that function into the memory and executing it.
 また、制御部CONTが備える各部の機能を実現するためのプログラムをコンピュータ読み取り可能な記録媒体に記録して、この記録媒体に記録されたプログラムをコンピュータシステムに読み込ませ、実行することにより、制御部CONTが備える各部による処理を行ってもよい。なお、ここでいう「コンピュータシステム」とは、OSや周辺機器等のハードウェアを含むものとする。 Further, the processing performed by each unit of the control unit CONT may be carried out by recording a program for realizing the function of each unit of the control unit CONT on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on that medium. The "computer system" here includes an OS and hardware such as peripheral devices.
 また、「コンピュータシステム」は、WWWシステムを利用している場合であれば、ホームページ提供環境(あるいは表示環境)も含むものとする。
 また、「コンピュータ読み取り可能な記録媒体」とは、フレキシブルディスク、光磁気ディスク、ROM、CD-ROM等の可搬媒体、コンピュータシステムに内蔵されるハードディスク等の記憶装置のことをいう。さらに「コンピュータ読み取り可能な記録媒体」とは、インターネット等のネットワークや電話回線等の通信回線を介してプログラムを送信する場合の通信線のように、短時間の間、動的にプログラムを保持するもの、その場合のサーバやクライアントとなるコンピュータシステム内部の揮発性メモリのように、一定時間プログラムを保持しているものも含むものとする。また上記プログラムは、前述した機能の一部を実現するためのものであってもよく、さらに前述した機能をコンピュータシステムにすでに記録されているプログラムとの組み合わせで実現できるものであってもよい。
Further, the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system. Furthermore, the "computer-readable recording medium" also includes media that hold a program dynamically for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and media that hold a program for a certain period of time, such as a volatile memory inside the computer system serving as the server or client in that case. The above program may realize part of the functions described above, or may realize the functions described above in combination with a program already recorded in the computer system.
1…観察者、10…表示装置、11…第1表示部、12…第2表示部、13…設定部、14…位置検出部、41…検出部、42…抽出部、43…設定部、44…判定部、45…視差設定部、51…画像検出部、52…画像設定部 Reference signs: 1…observer, 10…display device, 11…first display unit, 12…second display unit, 13…setting unit, 14…position detection unit, 41…detection unit, 42…extraction unit, 43…setting unit, 44…determination unit, 45…parallax setting unit, 51…image detection unit, 52…image setting unit

Claims (23)

  1.  第1被写体を含む第1画像を表示する第1表示部と、
     前記第1表示部が表示する前記第1画像の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示し、前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する、ユーザに可搬される第2表示部とを備える
     ことを特徴とする表示装置。
    A display device comprising: a first display unit that displays a first image including a first subject; and
    a second display unit, carried by a user, that transmits light of the first image displayed by the first display unit, displays a second image including a second subject corresponding to the first subject, and performs display such that at least part of the first subject and at least part of the second subject overlap each other.
  2.  前記第2表示部は、
     前記第2画像としての虚像を表示する
     ことを特徴とする請求項1に記載の表示装置。
    The display device according to claim 1, wherein the second display unit displays a virtual image as the second image.
  3.  前記第2画像は、
     前記ユーザに前記第1画像の画素に対応して視認される前記第2画像の画素の画素値が、前記ユーザに視認させる立体像の奥行き位置に応じて設定されている
     ことを特徴とする請求項1または請求項2に記載の表示装置。
    The display device according to claim 1 or 2, wherein a pixel value of a pixel of the second image that the user visually recognizes as corresponding to a pixel of the first image is set according to a depth position of a stereoscopic image to be visually recognized by the user.
  4.  前記第2被写体は、前記第1被写体のエッジ部分を含み、前記第1表示部に表示される前記第1被写体のエッジ部分と前記第2表示部に表示される前記第2被写体に含まれる前記第1被写体のエッジ部分とが対応して前記ユーザに視認されるように設定されている
     ことを特徴とする請求項1から請求項3のいずれか一項に記載の表示装置。
    The display device according to any one of claims 1 to 3, wherein the second subject includes an edge portion of the first subject, and is set such that the edge portion of the first subject displayed on the first display unit and the edge portion of the first subject included in the second subject displayed on the second display unit are visually recognized by the user in correspondence with each other.
  5.  前記第1被写体は、前記第2被写体のエッジ部分を含み、前記第2表示部に表示される前記第2被写体のエッジ部分と前記第1表示部に表示される前記第1被写体に含まれる前記第2被写体のエッジ部分とが対応して前記ユーザに視認されるように設定されている
     ことを特徴とする請求項1から請求項3のいずれか一項に記載の表示装置。
    The display device according to any one of claims 1 to 3, wherein the first subject includes an edge portion of the second subject, and is set such that the edge portion of the second subject displayed on the second display unit and the edge portion of the second subject included in the first subject displayed on the first display unit are visually recognized by the user in correspondence with each other.
  6.  前記第2画像は、
     互いに両眼視差を有する左眼用画像と右眼用画像とを含み、
     前記第2表示部は、
     前記ユーザの左眼に前記左眼用画像を表示するとともに、右眼に前記右眼用画像を表示する
     ことを特徴とする請求項1から請求項5のいずれか一項に記載の表示装置。
    The display device according to any one of claims 1 to 5, wherein the second image includes a left-eye image and a right-eye image having binocular parallax with each other, and the second display unit displays the left-eye image to the left eye of the user and the right-eye image to the right eye of the user.
  7.  入力される前記第1画像の画像情報に基づいて、前記第1画像に対応する前記第2画像を設定する設定部を備える
     ことを特徴とする請求項1から請求項6のいずれか一項に記載の表示装置。
    The display device according to any one of claims 1 to 6, further comprising a setting unit that sets the second image corresponding to the first image based on input image information of the first image.
  8.  前記第1画像が表示される前記第1表示部の表示面と、
     前記第2画像が表示される前記第2表示部の表示面との相対位置を検出する検出部と
     を備え、
     前記設定部は、前記検出部が検出する前記相対位置を示す情報に基づいて、前記第1画像と前記第2画像との相対位置を設定して、前記第2画像を設定する
     ことを特徴とする請求項7に記載の表示装置。
    The display device according to claim 7, further comprising a detection unit that detects a relative position between a display surface of the first display unit on which the first image is displayed and a display surface of the second display unit on which the second image is displayed, wherein the setting unit sets the second image by setting a relative position between the first image and the second image based on information indicating the relative position detected by the detection unit.
  9.  前記第2表示部が前記第2画像を表示した場合に前記第2画像の光が前記第1表示部に入射するか否かを、前記検出部が検出した前記相対位置に基づいて判定する判定部
     を備え、
     前記第2表示部は、
     前記第2画像の光が前記第1表示部に入射すると前記判定部が判定した場合に、前記第2画像を表示する
     ことを特徴とする請求項8に記載の表示装置。
    The display device according to claim 8, further comprising a determination unit that determines, based on the relative position detected by the detection unit, whether light of the second image is incident on the first display unit when the second display unit displays the second image, wherein the second display unit displays the second image when the determination unit determines that the light of the second image is incident on the first display unit.
  10.  前記第2表示部は、
     前記第2画像の光が前記第1表示部に入射しないと前記判定部が判定した場合に、前記第2画像とは異なる第3画像を表示する
     ことを特徴とする請求項9に記載の表示装置。
    The display device according to claim 9, wherein the second display unit displays a third image different from the second image when the determination unit determines that the light of the second image is not incident on the first display unit.
  11.  前記設定部は、前記第1画像内のエッジ部分を示す画素の画素値に基づいて、前記第1画像と前記第2画像との相対位置を設定して、前記第2画像を設定する
     ことを特徴とする請求項7から請求項10のいずれか一項に記載の表示装置。
    The display device according to any one of claims 7 to 10, wherein the setting unit sets the second image by setting a relative position between the first image and the second image based on a pixel value of a pixel indicating an edge portion in the first image.
  12.  前記第2表示部によって表示される前記第2画像を観察する観察者の頭部に装着される装着部とを備え、
     前記第2表示部は、前記第2画像を接眼光学系によって表示する
     ことを特徴とする請求項1から請求項11のいずれか一項に記載の表示装置。
    The display device according to any one of claims 1 to 11, further comprising a mounting unit to be mounted on the head of an observer who observes the second image displayed by the second display unit, wherein the second display unit displays the second image through an eyepiece optical system.
  13.  両眼視差により設定される三次元画像の奥行き位置の範囲を、前記第2表示部が前記第2画像を表示する奥行き位置を含む奥行き位置の範囲にして、前記両眼視差を設定する視差設定部
     を備えることを特徴とする請求項12に記載の表示装置。
    The display device according to claim 12, further comprising a parallax setting unit that sets the binocular parallax such that a depth-position range of a three-dimensional image defined by the binocular parallax becomes a range of depth positions including the depth position at which the second display unit displays the second image.
  14.  前記第2表示部が表示する前記第2画像を検出する画像検出部と、
     前記画像検出部が検出した前記第2画像に基づいて前記第1画像を設定するとともに、設定した前記第1画像を前記第1表示部に供給する画像設定部と
     を備えることを特徴とする請求項11から請求項13のいずれか一項に記載の表示装置。
    The display device according to any one of claims 11 to 13, further comprising: an image detection unit that detects the second image displayed by the second display unit; and an image setting unit that sets the first image based on the second image detected by the image detection unit and supplies the set first image to the first display unit.
  15.  前記画像設定部は、
     前記画像検出部が検出した前記第2画像に基づいて、入力される複数の画像のうちから前記第1画像として表示する画像を選択することにより、前記第1画像を設定する
     ことを特徴とする請求項14に記載の表示装置。
    The display device according to claim 14, wherein the image setting unit sets the first image by selecting, based on the second image detected by the image detection unit, an image to be displayed as the first image from among a plurality of input images.
  16.  第1被写体からの光を含む第1の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示し、前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する、ユーザに可搬される表示部とを備える
     ことを特徴とする表示装置。
    A display device comprising a display unit, carried by a user, that transmits first light including light from a first subject, displays a second image including a second subject corresponding to the first subject, and performs display such that at least part of the first subject and at least part of the second subject overlap each other.
  17.  前記表示部は、前記第2画像としての虚像を表示する
     ことを特徴とする請求項16に記載の表示装置。
    The display device according to claim 16, wherein the display unit displays a virtual image as the second image.
  18.  第1被写体を含む第1画像を表示する第1表示部と、前記第1表示部が表示する前記第1画像の光を透過させ、前記第1被写体に対応する第2被写体を含む第2画像を表示する、ユーザに可搬される第2表示部とを有する表示装置のコンピュータに、
     前記第1被写体と前記第2被写体との少なくとも一部が重畳するように表示する表示手順を実行させるためのプログラム。
    A program for causing a computer of a display device to execute a display procedure, the display device including a first display unit that displays a first image including a first subject, and a second display unit, carried by a user, that transmits light of the first image displayed by the first display unit and displays a second image including a second subject corresponding to the first subject, the display procedure displaying the images such that at least part of the first subject and at least part of the second subject overlap each other.
  19.  観察者の頭部に装着される装着部と、
     前記装着部に有し、第1被写体からの光を含む第1の光を透過させ、前記第1被写体に対応する第2画像を表示する表示部と
     を備え、
     前記表示部は、前記表示部を介して前記第1被写体と前記第2画像とを観察した前記観察者に対して、前記表示部を介さずに前記第1被写体を観察した場合とは異なる立体感を視認させる
     ことを特徴とする表示装置。
    A display device comprising: a mounting unit to be mounted on the head of an observer; and a display unit, provided in the mounting unit, that transmits first light including light from a first subject and displays a second image corresponding to the first subject; wherein the display unit causes the observer, who observes the first subject and the second image through the display unit, to perceive a stereoscopic effect different from that perceived when observing the first subject without the display unit.
  20.  観察者の頭部に装着される表示装置であって、
     第1画像を接眼光学系により表示し、表示される前記第1画像のエッジ部分を示す第2画像の光を前記第1画像を表示する方向に透過させる第1表示部を備える
     ことを特徴とする表示装置。
    A display device to be mounted on the head of an observer, comprising a first display unit that displays a first image through an eyepiece optical system and transmits light of a second image, which indicates an edge portion of the displayed first image, in the direction in which the first image is displayed.
  21.  前記第2画像を検出する画像検出部と、
     前記画像検出部が検出した前記第2画像に基づいて、前記第2画像の光が前記第1表示部を透過する場合に、前記第2画像が示すエッジ部分と前記第1画像のエッジ部分とが対応するように前記第1画像を設定して、設定した前記第1画像を前記第1表示部に供給する設定部と
     をさらに備えることを特徴とする請求項20に記載の表示装置。
    The display device according to claim 20, further comprising: an image detection unit that detects the second image; and a setting unit that, based on the second image detected by the image detection unit, sets the first image such that, when the light of the second image passes through the first display unit, the edge portion indicated by the second image and the edge portion of the first image correspond to each other, and supplies the set first image to the first display unit.
  22.  第1画像を表示し、入射する光を透過させる第1表示部と、
     前記第1画像のエッジ部分を示す第2画像を検出する画像検出部と、
     前記画像検出部が検出した前記第2画像を第2表示部に供給する供給部と、
     前記第2表示部に対する前記第1表示部の位置を検出する検出部と、
     前記供給部が供給する前記第2画像と前記検出部が検出する前記位置とに基づいて、前記第1画像のエッジ部分を示す前記第2画像を設定する設定部と
     を備えることを特徴とする表示装置。
    A display device comprising: a first display unit that displays a first image and transmits incident light; an image detection unit that detects a second image indicating an edge portion of the first image; a supply unit that supplies the second image detected by the image detection unit to a second display unit; a detection unit that detects a position of the first display unit relative to the second display unit; and a setting unit that sets the second image indicating the edge portion of the first image based on the second image supplied by the supply unit and the position detected by the detection unit.
  23.  第1画像を表示し、入射する光を透過させる第1表示部に、前記第1画像を供給する供給部と、
     前記第1表示部の位置を検出する検出部と、
     前記供給部が供給する前記第1画像と前記検出部が検出する前記位置とに基づいて、前記第1画像のエッジ部分を示す第2画像を設定する設定部と、
     前記設定部が設定した前記第2画像を表示する第2表示部と
     を備えることを特徴とする表示装置。
    A display device comprising: a supply unit that supplies a first image to a first display unit that displays the first image and transmits incident light; a detection unit that detects a position of the first display unit; a setting unit that sets a second image indicating an edge portion of the first image based on the first image supplied by the supply unit and the position detected by the detection unit; and a second display unit that displays the second image set by the setting unit.
PCT/JP2013/057511 2012-03-27 2013-03-15 Display apparatus and program WO2013146385A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2012-071236 2012-03-27
JP2012-071237 2012-03-27
JP2012071236 2012-03-27
JP2012071237 2012-03-27

Publications (1)

Publication Number Publication Date
WO2013146385A1 true WO2013146385A1 (en) 2013-10-03

Family

ID=49259646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/057511 WO2013146385A1 (en) 2012-03-27 2013-03-15 Display apparatus and program

Country Status (1)

Country Link
WO (1) WO2013146385A1 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11142784A (en) * 1997-11-04 1999-05-28 Shimadzu Corp Head mount display with position detecting function
JP2000194467A (en) * 1998-12-24 2000-07-14 Matsushita Electric Ind Co Ltd Information display device, information processor, device controller and range finding instrument
JP2001112025A (en) * 1999-10-05 2001-04-20 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional display method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HIDENORI KURIBAYASHI ET AL.: "Effect on Depth Perception by a Blur in a Depth-fused 3-D Display", THE JOURNAL OF THE INSTITUTE OF IMAGE INFORMATION AND TELEVISION ENGINEERS, vol. 60, no. 3, 1 March 2006 (2006-03-01), pages 431 - 438 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021184636A (en) * 2017-04-11 Dolby Laboratories Licensing Corporation Layered extended entertainment experience
JP7263451B2 (en) 2017-04-11 2023-04-24 ドルビー ラボラトリーズ ライセンシング コーポレイション Layered Enhanced Entertainment Experience
US11893700B2 (en) 2017-04-11 2024-02-06 Dolby Laboratories Licensing Corporation Layered augmented entertainment experiences

Similar Documents

Publication Publication Date Title
US10019831B2 (en) Integrating real world conditions into virtual imagery
KR101978896B1 (en) Stereo rendering system
US9398290B2 (en) Stereoscopic image display device, image processing device, and stereoscopic image processing method
US10204452B2 (en) Apparatus and method for providing augmented reality-based realistic experience
US10187633B2 (en) Head-mountable display system
WO2011036827A1 (en) 3d image display device and 3d image display method
US9710955B2 (en) Image processing device, image processing method, and program for correcting depth image based on positional information
US10613405B2 (en) Pi-cell polarization switch for a three dimensional display system
US10672311B2 (en) Head tracking based depth fusion
JP2010153983A (en) Projection type video image display apparatus, and method therein
JPH10234057A (en) Stereoscopic video device and computer system including the same
JP2017102696A (en) Head mounted display device and computer program
KR100751290B1 (en) Image system for head mounted display
KR101888082B1 (en) Image display apparatus, and method for operating the same
JP6591667B2 (en) Image processing system, image processing apparatus, and program
JPWO2019229906A1 (en) Image display system
WO2013146385A1 (en) Display apparatus and program
US20140362197A1 (en) Image processing device, image processing method, and stereoscopic image display device
JP2013168781A (en) Display device
JP2013024910A (en) Optical equipment for observation
JP2006253777A (en) Stereoscopic image display device
JP6198157B2 (en) Program, recording medium, image processing apparatus, and image processing method
JP2014150402A (en) Display apparatus and program
JP2012133179A (en) Stereoscopic device and control method of stereoscopic device
WO2013089249A1 (en) Display device, display control device, display control program, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13767325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13767325

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP